CN109254579A - Binocular vision camera hardware system, 3D scene reconstruction system and method - Google Patents

Binocular vision camera hardware system, 3D scene reconstruction system and method

Info

Publication number
CN109254579A
Authority
CN
China
Prior art keywords
camera
scene reconstruction
binocular vision
image
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710576935.8A
Other languages
Chinese (zh)
Other versions
CN109254579B (en)
Inventor
刘玉
叶卉
刘元伟
鲍凤卿
卢远志
刘奋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAIC Motor Corp Ltd
Original Assignee
SAIC Motor Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd filed Critical SAIC Motor Corp Ltd
Priority to CN201710576935.8A priority Critical patent/CN109254579B/en
Publication of CN109254579A publication Critical patent/CN109254579A/en
Application granted granted Critical
Publication of CN109254579B publication Critical patent/CN109254579B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a binocular vision camera hardware system, and a 3D scene reconstruction system and method. The 3D scene reconstruction system comprises an ECU, the binocular vision camera hardware system and a GPS module. When the ECU determines, from the navigation information returned by the GPS module, that the distance between the vehicle and the turning position ahead has reached a preset distance, the ECU controls the binocular vision camera to rotate synchronously back and forth within a preset rotation range and to collect image data over the 180° field of view in front of the vehicle. Based on the structure-from-motion (SFM) principle and the simultaneous localization and mapping (SLAM) principle, 3D scene reconstruction is performed on the image data collected by the left camera and by the right camera over the 180° field of view in front of the vehicle, yielding the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle. The invention achieves image acquisition over the 180° angular field of view in front of the vehicle with only a single set of binocular vision cameras having a one-dimensional rotation function, thereby reducing the hardware cost of 3D scene reconstruction.

Description

Binocular vision camera hardware system, 3D scene reconstruction system and method
Technical field
The present invention relates to the technical field of 3D scene reconstruction, and more specifically to a binocular vision camera hardware system, and a 3D scene reconstruction system and method.
Background technique
An autonomous vehicle, also known as a driverless car, computer-driven car or wheeled mobile robot, is an intelligent automobile that achieves unmanned driving through a computer system. With the continuous development of automatic driving technology, autonomous vehicles place ever higher requirements on the ability to perceive the surrounding environment.
At present, autonomous vehicles mainly rely on a three-dimensional reconstruction of the current environment to realize the automatic turning function. There are many technologies for realizing 3D scene reconstruction, such as high-precision RTK (real-time kinematic) technology, high-cost lidar technology, or machine vision technology. Since the cost of the machine-vision approach is relatively moderate, many vehicle manufacturers use machine vision technology to reconstruct the three-dimensional scene of the vehicle's current environment, so that the vehicle can perform automatic turning based on that scene.
When 3D scene reconstruction is carried out with machine vision technology, it is mainly realized with a binocular vision camera. Because the viewing angle of a binocular vision camera is limited, generally between 40° and 60°, it cannot cover the 180° angular field of view in front of the vehicle. To guarantee the 3D scene reconstruction accuracy and realize the automatic turning function, a vehicle is therefore usually equipped with multiple sets of binocular vision cameras to collect images over the 180° angular field of view in front of the vehicle. However, equipping a vehicle with multiple sets of binocular vision cameras increases the hardware cost of 3D scene reconstruction.
Summary of the invention
In view of this, the present invention discloses a binocular vision camera hardware system, and a 3D scene reconstruction system and method, to solve the problem in traditional schemes that the hardware cost of 3D scene reconstruction increases because multiple sets of binocular vision cameras must be fitted to the vehicle in order to collect images over the 180° angular field of view in front of the vehicle.
A binocular vision camera hardware system comprises: a binocular vision camera, a speed-reduction transmission device, a stepper motor, and a stepper motor controller, wherein the binocular vision camera includes a left camera, a right camera, and a bracket fixing the left camera and the right camera;
the output end of the stepper motor controller is connected with the control terminal of the stepper motor, the motor output shaft of the stepper motor is connected with the speed-reduction transmission device, and the bracket fixing the left camera and the right camera is mounted on the central rotating shaft of the speed-reduction transmission device;
according to the control signal output by the stepper motor controller, the stepper motor drives the speed-reduction transmission device to rotate, which in turn drives the bracket mounted on the central rotating shaft of the speed-reduction transmission device to rotate, so that the left camera and the right camera fixed on the bracket rotate synchronously back and forth within a preset rotation range.
Preferably, the rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera is equal to a predetermined angle.
A 3D scene reconstruction system comprises: an electronic control unit (ECU), a GPS module, and the binocular vision camera described in claim 1, wherein the left camera and the right camera of the binocular vision camera are placed horizontally symmetric about the vehicle centerline;
the input terminal of the ECU is connected with the output end of the GPS module and with the image output ends of the left camera and the right camera, and the output end of the ECU is connected with the control terminal of the stepper motor controller in the binocular vision camera. The ECU is used to: when determining from the navigation information returned by the GPS module that the distance between the vehicle and the turning position ahead has reached a preset distance, send a trigger instruction to the stepper motor controller in the binocular vision camera, so that the stepper motor controller controls the stepper motor to drive the left camera and the right camera, positioned parallel to the vehicle body along the vehicle centerline, to rotate synchronously back and forth within the preset rotation range at a preset motor angular velocity; obtain the image data collected by the left camera and by the right camera over the 180° field of view in front of the vehicle; perform 3D scene reconstruction on the left camera's image data based on the structure-from-motion principle, obtaining a first group of 3D scene reconstruction images; perform 3D scene reconstruction on the right camera's image data based on the structure-from-motion principle, obtaining a second group of 3D scene reconstruction images; perform 3D scene reconstruction on the image data obtained simultaneously by the left camera and the right camera at different moments based on the simultaneous localization and mapping principle, obtaining a third group of 3D scene reconstruction images; and stitch the first, second and third groups of 3D scene reconstruction images in three-dimensional space, obtaining the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)), a being the horizontal field-of-view angle of the binocular vision camera.
Preferably, the preset motor angular velocity satisfies formula (1), whose expression is as follows:

w = N × c (1)

where w is the motor angular velocity in °/s, N is the frame rate of the binocular vision camera in fps, and c is the preset rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera, in degrees.
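Formula (1) can be checked numerically. A minimal sketch, assuming an illustrative frame rate of 30 fps and using the patent's preferred per-frame optical-axis rotation of 3°:

```python
# Motor angular velocity per formula (1): w = N * c
# N: camera frame rate in frames per second (fps)
# c: optical-axis rotation angle between two adjacent frames, in degrees
def motor_angular_velocity(frame_rate_fps: float, per_frame_angle_deg: float) -> float:
    """Return the stepper-motor angular velocity in degrees per second."""
    return frame_rate_fps * per_frame_angle_deg

# Example values (the 30 fps frame rate is an assumption, not from the patent)
w = motor_angular_velocity(30.0, 3.0)
print(w)  # prints 90.0 (deg/s)
```

A faster camera with the same per-frame angle simply sweeps proportionally faster, which is why the patent allows choosing among angular velocities that satisfy formula (1).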
A 3D scene reconstruction method comprises:

obtaining the distance between the vehicle and the turning position ahead at the current moment;

when the distance is determined to have reached a preset distance, sending a trigger instruction to the stepper motor controller in the binocular vision camera, so that the stepper motor controller controls the stepper motor to drive the left camera and the right camera, positioned parallel to the vehicle body along the vehicle centerline, to rotate synchronously back and forth within a preset rotation range at a preset motor angular velocity, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)), a being the horizontal field-of-view angle of the binocular vision camera;

obtaining the image data collected by the left camera and by the right camera over the 180° field of view in front of the vehicle;

performing 3D scene reconstruction on the image data collected by the left camera over the 180° field of view in front of the vehicle based on the structure-from-motion principle, obtaining a first group of 3D scene reconstruction images;

performing 3D scene reconstruction on the image data collected by the right camera over the 180° field of view in front of the vehicle based on the structure-from-motion principle, obtaining a second group of 3D scene reconstruction images;

performing 3D scene reconstruction on the image data obtained simultaneously by the left camera and the right camera at different moments based on the simultaneous localization and mapping principle, obtaining a third group of 3D scene reconstruction images;

converting the first, second and third groups of 3D scene reconstruction images into a common coordinate system to obtain the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle.
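The reconstruction steps above (two per-camera SFM reconstructions, one SLAM reconstruction over synchronized stereo pairs, then a merge into one coordinate system) can be sketched as a pipeline. The function names below are hypothetical placeholders, not APIs named in the patent; the reconstruction back-ends are stubbed, and a real system would use an SFM/SLAM library:

```python
# Hedged sketch of the claimed pipeline; all three back-ends are stubs.

def run_sfm(frames):
    """Stub: structure-from-motion over one camera's image sequence."""
    return {"source": "sfm", "n_frames": len(frames)}

def run_slam(left_frames, right_frames):
    """Stub: SLAM over time-synchronized left/right frame pairs."""
    assert len(left_frames) == len(right_frames)
    return {"source": "slam", "n_pairs": len(left_frames)}

def stitch_scenes(*scenes):
    """Stub: transform all partial reconstructions into one coordinate system."""
    return list(scenes)

def reconstruct_180(left_frames, right_frames):
    ml = run_sfm(left_frames)                 # first group (left camera, SFM)
    mr = run_sfm(right_frames)                # second group (right camera, SFM)
    mm = run_slam(left_frames, right_frames)  # third group (stereo pairs, SLAM)
    return stitch_scenes(ml, mr, mm)          # merged 180-degree scene

scene = reconstruct_180(["L0", "L1"], ["R0", "R1"])
print(len(scene))  # prints 3
```

The design point the claim makes is that the three partial reconstructions are computed independently and only combined at the end, so each branch can run on its own image stream.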
Preferably, after obtaining the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle, the method further includes:

performing obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and passable-area recognition on the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle, obtaining a recognition result;

choosing from the recognition result the optimal passing path that meets the current conditions;

controlling the turning execution mechanism of the vehicle to realize the vehicle turn according to the optimal passing path.
Preferably, performing obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and passable-area recognition on the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle to obtain a recognition result includes:

performing region segmentation on the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle based on depth and grayscale, obtaining T segmented regions, T being a positive integer;

performing the following operations for each segmented region:

determining the total number of pixels N in the current segmented region and the number of pixels M that satisfy a plane-fitting equation model;

judging whether the ratio of the pixel number M to the total pixel number N is less than a threshold parameter;

if the ratio is not less than the threshold parameter, determining that the current segmented region is a road area, and taking the outer boundary of the road area as the boundaries on both sides of the road to be turned into;

if the ratio is less than the threshold parameter, determining that the current segmented region is an obstacle region;

after the area types of all T segmented regions have been determined, obtaining a passable area L from all the road areas and all the obstacle regions in the T segmented regions according to formulas (2) and (3), the distance between the passable area L and the obstacle regions being not less than a safety distance threshold, formulas (2) and (3) being as follows:

L ∩ A = L (2);

L ∩ B = ∅ (3);

where A is the union of all road areas in the T segmented regions, B is the union of all obstacle regions in the T segmented regions, and ∅ denotes the empty set.
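The classification and passable-area logic above can be sketched as follows: a segment whose ratio of plane-fitting inliers M to total pixels N meets the threshold is a road area, otherwise an obstacle region, and the passable area keeps only road segments farther than the safety distance from every obstacle. The segment representation and the centroid-based distance are illustrative assumptions, not the patent's data structures:

```python
# Hedged sketch: plane-fit inlier ratio -> road vs obstacle, then a passable
# set L with L subset of roads (formula 2) and L disjoint from obstacles
# beyond a safety distance (formula 3).

def classify(segments, ratio_threshold):
    roads, obstacles = [], []
    for seg in segments:
        # ratio M/N not less than threshold -> road area, else obstacle region
        (roads if seg["M"] / seg["N"] >= ratio_threshold else obstacles).append(seg)
    return roads, obstacles

def passable(roads, obstacles, safety_dist, dist):
    """Road segments whose distance to every obstacle exceeds safety_dist."""
    return [r for r in roads
            if all(dist(r, o) > safety_dist for o in obstacles)]

segs = [
    {"id": 0, "M": 95, "N": 100, "x": 0.0},   # mostly planar -> road
    {"id": 1, "M": 20, "N": 100, "x": 5.0},   # poor plane fit -> obstacle
    {"id": 2, "M": 90, "N": 100, "x": 20.0},  # road, far from the obstacle
]
dist = lambda a, b: abs(a["x"] - b["x"])      # toy 1-D centroid distance
roads, obstacles = classify(segs, ratio_threshold=0.8)
ok = passable(roads, obstacles, safety_dist=10.0, dist=dist)
print([r["id"] for r in ok])  # prints [2]
```

Segment 0 is road but lies within the safety distance of the obstacle, so only segment 2 survives into L; this matches the set conditions L ∩ A = L and L ∩ B = ∅.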
Preferably, the method further includes:

when it is determined that the steering wheel angle has returned to a preset angle and the rotation speeds of the inner and outer front wheels are the same, sending a stop instruction to the stepper motor controller, and controlling, through the stepper motor controller, the stepper motor to drive the binocular vision camera back to the position parallel to the vehicle centerline.
From the above technical solution, the invention discloses a binocular vision camera hardware system, and a 3D scene reconstruction system and method. The 3D scene reconstruction system includes an ECU, and the binocular vision camera hardware system and GPS module connected with the ECU. When the ECU determines from the navigation information returned by the GPS module that the distance between the vehicle and the turning position ahead has reached the preset distance, the ECU controls the left camera and the right camera of the binocular vision camera to rotate synchronously back and forth within the preset rotation range, collecting image data over the 180° field of view in front of the vehicle. Based on the structure-from-motion principle and the simultaneous localization and mapping principle, 3D scene reconstruction is performed on the image data collected by the left camera and by the right camera over the 180° field of view in front of the vehicle, obtaining the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle. Compared with traditional schemes, the present invention achieves image acquisition over the 180° angular field of view in front of the vehicle with only a single set of binocular vision cameras having a one-dimensional rotation function, thereby expanding the field of view of a single set of binocular vision cameras, enhancing the vehicle's perception of the environment ahead, and reducing the hardware cost of 3D scene reconstruction.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the disclosed drawings without creative effort.
Fig. 1 is a structural schematic diagram of a binocular vision camera hardware system disclosed by an embodiment of the present invention;

Fig. 2 is a structural schematic diagram of a 3D scene reconstruction system disclosed by an embodiment of the present invention;

Fig. 3 is a schematic diagram of the observation range of a binocular vision camera over one rotation cycle, disclosed by an embodiment of the present invention;

Fig. 4 is a flowchart of a 3D scene reconstruction method disclosed by an embodiment of the present invention;

Fig. 5 is a 3D scene reconstruction flowchart disclosed by an embodiment of the present invention;

Fig. 6 is a flowchart of a method, disclosed by an embodiment of the present invention, for choosing the optimal passing path for a vehicle turn based on a 3D scene reconstruction image.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The embodiments of the invention disclose a binocular vision camera, and a 3D scene reconstruction system and method, to solve the problem in traditional schemes that the hardware cost of 3D scene reconstruction increases because multiple sets of binocular vision cameras must be fitted to the vehicle when collecting images over the 180° angular field of view in front of the vehicle.
Referring to Fig. 1, a structural schematic diagram of a binocular vision camera hardware system disclosed by an embodiment of the present invention, the system includes: a binocular vision camera, a speed-reduction transmission device 14, a stepper motor 15 and a stepper motor controller 16, wherein the binocular vision camera includes a left camera 11, a right camera 12 and a bracket 13 fixing the left camera 11 and the right camera 12;

wherein:

the output end of the stepper motor controller 16 is connected with the control terminal of the stepper motor 15, the motor output shaft of the stepper motor 15 is connected with the speed-reduction transmission device 14, and the bracket 13 fixing the left camera 11 and the right camera 12 is mounted on the central rotating shaft of the speed-reduction transmission device 14.

Working principle: according to the control signal output by the stepper motor controller 16, the stepper motor 15 drives the speed-reduction transmission device 14 to rotate, which drives the bracket 13 mounted on the central rotating shaft of the speed-reduction transmission device 14 to rotate, so that the left camera 11 and the right camera 12 fixed on the bracket 13 rotate synchronously back and forth within the preset rotation range.
Optionally, the speed-reduction transmission device 14 includes a driving gear and a driven gear, wherein the driving gear is mounted on the motor output shaft of the stepper motor 15, the driving gear meshes with the driven gear, and the bracket 13 is mounted on the driven gear. According to the control signal output by the stepper motor controller 16, the stepper motor 15 drives the driving gear mounted on the motor output shaft; the rotating driving gear meshes with the driven gear, so that the driven gear rotates and drives the bracket 13 mounted on it, whereby the left camera 11 and the right camera 12 fixed on the bracket 13 rotate synchronously back and forth within the preset rotation range.
In practical applications, the stepper motor 15 is selected according to its step angle, which satisfies the formula 0° ≤ b ≤ c, where b is the step angle in degrees and c is the preset rotation angle of the camera optical axis between two adjacent frames, in degrees.
It should be noted that the structure of the binocular vision camera hardware system includes but is not limited to the embodiment shown in Fig. 1; any structure that enables the left camera 11 and the right camera 12 to rotate synchronously back and forth within the preset rotation range belongs to the protection scope of the present invention.
To enable the binocular vision camera to collect images over the 180° angular field of view in front of the vehicle, the parameters of the binocular vision camera are set as follows:

assuming the horizontal field-of-view angle of the binocular vision camera is a (in degrees) and its frame rate is N (fps), then to realize image acquisition over the 180° angular field of view in front of the vehicle, the rotation range of the stepper motor 15 is (-(90°-a/2), (90°-a/2)).
To further ensure image acquisition over the 180° angular field of view in front of the vehicle, in practical applications the binocular vision camera is preferably mounted on the front windshield, behind the rearview mirror, with the left camera 11 and the right camera 12 placed horizontally symmetric about the vehicle centerline. The baseline length between the left camera 11 and the right camera 12 is adjustable within a range of 10 cm to 30 cm, the field-of-view angle of the binocular vision camera is between 40° and 60°, the pitch angle of the binocular vision camera relative to the vehicle body is 0°, and the stepper motor controller can be installed in the rear compartment.
In practical applications, the left camera 11 and the right camera 12 of the binocular vision camera are driven to rotate by the stepper motor 15. Since the rotation range of the stepper motor 15 is (-(90°-a/2), (90°-a/2)), the left camera 11 and the right camera 12, positioned parallel to the vehicle body along the vehicle centerline, correspondingly rotate synchronously back and forth within (-(90°-a/2), (90°-a/2)).
The motor angular velocity w of the stepper motor satisfies formula (1), whose expression is as follows:

w = N × c (1);

where w is the motor angular velocity in °/s, N is the frame rate of the binocular vision camera in fps, and c is the preset rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera, in degrees.
When the distance between the vehicle and the turning position ahead reaches the preset distance (for example 50 m to 100 m), in order for the binocular vision camera to complete the image acquisition of one rotation cycle within one second while guaranteeing a relatively high-precision three-dimensional reconstruction, the overlap between two adjacent frames collected by the binocular vision camera should be as large as possible. To this end, the rotation angle of the camera optical axis between two adjacent frames is set equal to a predetermined angle, preferably 3°.
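The one-second-cycle goal above can be checked numerically under assumed parameters. Taking an illustrative horizontal FOV of a = 50° (within the stated 40° to 60° range), the rotation range is (-65°, 65°), and a full reciprocating cycle sweeps four times the one-sided limit; with the preferred 3° per frame, formula (1) then fixes the frame rate a 1 s cycle would need. These numbers are a plausibility sketch, not values claimed by the patent:

```python
# Hedged check of the one-second sweep-cycle goal under assumed parameters:
# FOV a = 50 deg -> range (-(90 - a/2), +(90 - a/2)) = (-65, 65) deg,
# per-frame optical-axis rotation c = 3 deg (the patent's preferred value).
def cycle_time_s(fov_deg, frame_rate_fps, per_frame_deg):
    half_range = 90.0 - fov_deg / 2.0    # one-sided rotation limit, degrees
    sweep = 4.0 * half_range             # full back-and-forth cycle, degrees
    w = frame_rate_fps * per_frame_deg   # formula (1), deg/s
    return sweep / w

# Frame rate needed for a 1 s cycle with the assumed values:
needed_fps = 4.0 * (90.0 - 50.0 / 2.0) / 3.0
print(round(needed_fps, 1))  # prints 86.7
```

A wider FOV shrinks the required sweep, so the same cycle time can be met with a lower frame rate or a larger per-frame angle, at the cost of less inter-frame overlap.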
In summary, the invention discloses a binocular vision camera with a one-dimensional rotation function, which greatly expands the field of view of a single set of binocular vision cameras, enhances the vehicle's perception of the environment ahead, reduces the hardware cost of 3D scene reconstruction, and also reduces the potential risk factors caused by a narrow field of view, providing a guarantee for the safety of automatic vehicle turning.
Referring to Fig. 2, an embodiment of the invention discloses a structural schematic diagram of a 3D scene reconstruction system. The reconstruction system includes: an ECU (Electronic Control Unit) 21, a GPS module 22 and the binocular vision camera hardware system 23 described in Fig. 1. The input terminal of the ECU 21 is connected with the GPS module 22 and with the image output ends of the left camera and the right camera, and the output end of the ECU 21 is connected with the control terminal of the stepper motor controller in the binocular vision camera;

the GPS module 22 collects the current position information of the vehicle and sends the position information to the ECU 21, so that the ECU 21 determines the distance between the vehicle and the turning position ahead according to the position information.
The process by which the 3D scene reconstruction system performs 3D scene reconstruction on the images within the 180° angular field of view in front of the vehicle is as follows:
when the ECU 21 determines that the distance between the vehicle and the turning position ahead (such as a crossroad or T-junction) has reached the preset distance (for example 50 m to 100 m), it sends a trigger instruction to the stepper motor controller 16 in the binocular vision camera hardware system 23, so that the stepper motor controller controls the stepper motor 15 in the binocular vision camera hardware system 23 to drive the left camera and the right camera, positioned parallel to the vehicle body along the vehicle centerline, to rotate synchronously back and forth within the preset rotation range at the preset motor angular velocity;
the ECU 21 obtains in real time the image data collected by the left camera and by the right camera over the 180° field of view in front of the vehicle;
the ECU 21 performs 3D scene reconstruction on the image data collected by the left camera over the 180° field of view in front of the vehicle based on the SFM principle, obtaining a first group of 3D scene reconstruction images, denoted ML; performs 3D scene reconstruction on the image data collected by the right camera based on the SFM principle, obtaining a second group of 3D scene reconstruction images, denoted MR; performs 3D scene reconstruction on the image data obtained simultaneously by the left camera and the right camera at different moments based on the SLAM principle, obtaining a third group of 3D scene reconstruction images, denoted MM; and stitches ML, MR and MM in three-dimensional space, obtaining the 3D scene reconstruction image of the 180° angular field of view in front of the vehicle, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)), a being the horizontal field-of-view angle of the binocular vision camera.

It should be noted that when the binocular vision camera is at the position parallel to the vehicle centerline, the rotation angle of the motor is 0°; correspondingly, the rotation angle of the left camera and the right camera of the binocular vision camera is 0°. Assuming that rotating left is negative and rotating right is positive, the rotation range of the motor is (-(90°-a/2), (90°-a/2)); correspondingly, the rotation range of the left camera and the right camera, and hence the field-of-view range of the binocular vision camera, is (-(90°-a/2), (90°-a/2)), as shown in Fig. 3: the two overlapping sector regions denoted by reference numeral 31 are the field of view at the leftmost rotation position of the binocular vision camera, the two overlapping sector regions denoted by reference numeral 32 are the field of view when the binocular vision camera is horizontal in the non-turning (initial) state, and the two overlapping sector regions denoted by reference numeral 33 are the field of view at the rightmost rotation position of the binocular vision camera.
SFM (Structure from Motion) recovers three-dimensional structure from a sequence of two-dimensional images taken by a moving camera: feature points are matched across successive frames, the camera motion between frames is estimated, and the matched points are triangulated into three-dimensional space. In the present embodiment, based on the SFM principle, three-dimensional scene reconstruction is performed separately on the image data acquired by the left camera and on the image data acquired by the right camera within the 180° field of view in front of the vehicle.
The working principle of SLAM (Simultaneous Localization And Mapping) is as follows: a robot (in this application, the autonomous vehicle) starts moving from an unknown position in an unknown environment, localizes itself during movement from pose estimates and the map, and simultaneously builds an incremental map on the basis of this self-localization, thereby achieving autonomous localization and navigation. In the present embodiment, based on the SLAM principle, three-dimensional scene reconstruction is performed on the image data within the 180° front field of view obtained simultaneously by the left camera and the right camera at different moments.
It should be noted that the preset rotation range in this embodiment is the (-(90°-a/2), (90°-a/2)) of the above embodiment, where a is the horizontal field-of-view angle of the binocular vision camera.
The preset motor angular velocity is an angular velocity satisfying the above formula (1); when several angular velocities satisfy formula (1), one may be chosen according to actual needs.
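As a numerical illustration of how a preset motor angular velocity follows from formula (1), the sketch below multiplies an assumed camera frame rate by an assumed per-frame optical-axis step; the 30 fps and 0.5° figures are illustrative values, not taken from the embodiment.

```python
def motor_angular_velocity(frame_rate_fps: float, step_deg_per_frame: float) -> float:
    """Formula (1): w = N * c, motor angular velocity in degrees per second."""
    return frame_rate_fps * step_deg_per_frame

# Illustrative: a 30 fps camera whose optical axis steps 0.5 deg between frames
w = motor_angular_velocity(30.0, 0.5)
print(w)  # 15.0 (deg/s)
```

When several (N, c) combinations satisfy formula (1) for a target velocity, any of them may be chosen, consistent with the remark above.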
In practical applications, ECU21 can calculate the distance between the vehicle and the turning position ahead from the navigation information returned by the GPS (Global Positioning System) module of the autonomous vehicle, compare this distance with a preset distance, and thereby determine whether to activate the binocular vision camera hardware system 23.
In summary, the three-dimensional scene reconstruction system disclosed by the invention comprises ECU21, GPS module 22 and binocular vision camera hardware system 23. When ECU21 determines, from the navigation information returned by GPS module 22, that the distance between the vehicle's current position and the turning position ahead has reached the preset distance, ECU21 controls the left camera 11 and right camera 12 of the binocular vision camera hardware system 23 to rotate synchronously back and forth within the preset rotation range and acquire image data within the 180° field of view in front of the vehicle. Based on the structure-from-motion principle and the simultaneous localization and mapping principle, three-dimensional scene reconstruction is performed on the image data acquired by the left camera and by the right camera within that field of view, yielding the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle. Compared with traditional schemes, the invention achieves image acquisition over the 180° viewing-angle range in front of the vehicle with only one set of binocular vision camera hardware 23 with a one-dimensional rotation function, thereby enlarging the field of view of a single binocular vision camera, enhancing the vehicle's ability to perceive the environment ahead, and reducing the hardware cost of three-dimensional scene reconstruction.
Corresponding to the above system embodiment, the invention also discloses a three-dimensional scene reconstruction method.
Referring to Fig. 4, a flow chart of a three-dimensional scene reconstruction method disclosed in one embodiment of the invention; the method is applied to the ECU21 of the above embodiment and comprises the following steps:
Step S41: obtain the distance between the vehicle at the current moment and the turning position ahead;
Specifically, after the autonomous vehicle starts operating, GPS module 22 begins positioning the vehicle and returns the navigation information to ECU21, and ECU21 calculates the distance between the vehicle's current position and the turning position ahead.
Step S42: when it is determined that the distance has reached the preset distance, control the binocular vision camera hardware system 23 to rotate synchronously back and forth within the preset rotation range;
Specifically, when ECU21 determines that the distance has reached the preset distance, it sends a trigger instruction to the stepper motor controller 16 in the binocular vision camera hardware system 23. Via the stepper motor controller, the stepper motor 15 drives the left camera and the right camera, at the preset motor angular velocity, to rotate synchronously back and forth within the preset rotation range about a position parallel to the vehicle body and along the vehicle centre line, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)) and a is the horizontal field-of-view angle of the binocular vision camera;
When the binocular vision camera is parallel to the vehicle centre line, the rotation angle of the motor is 0°, and correspondingly the rotation angles of the left camera and the right camera are 0°. Taking leftward rotation as negative and rightward rotation as positive, the rotation range of the motor is (-(90°-a/2), (90°-a/2)), and correspondingly the rotation range of the left and right cameras is (-(90°-a/2), (90°-a/2)), as shown in Figure 2.
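The preset rotation range can be sketched as a small helper: with horizontal field-of-view angle a, sweeping the camera across the full range covers 180° in front of the vehicle. The value a = 60° below is an assumed example, not a parameter of the embodiment.

```python
def rotation_range_deg(horizontal_fov_deg: float):
    """Preset rotation range (-(90deg - a/2), (90deg - a/2)) for horizontal FOV a."""
    half = 90.0 - horizontal_fov_deg / 2.0
    return (-half, half)

lo, hi = rotation_range_deg(60.0)
# total angular coverage = rotation span plus one field of view
coverage = (hi - lo) + 60.0
print(lo, hi, coverage)  # -60.0 60.0 180.0
```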
Step S43: obtain the image data acquired by the left camera and the image data acquired by the right camera within the 180° field of view in front of the vehicle;
Here, the image data acquired by the left camera and the right camera within the 180° front field of view specifically refers to the images acquired at each moment while the two cameras rotate synchronously back and forth within the preset rotation range.
Step S44: based on the SFM principle, perform three-dimensional scene reconstruction on the image data acquired by the left camera within the 180° front field of view, obtaining a first group of three-dimensional scene reconstruction images;
Step S45: based on the SFM principle, perform three-dimensional scene reconstruction on the image data acquired by the right camera within the 180° front field of view, obtaining a second group of three-dimensional scene reconstruction images;
Step S46: based on the SLAM principle, perform three-dimensional scene reconstruction on the image data within the 180° front field of view obtained simultaneously by the left camera and the right camera at different moments, obtaining a third group of three-dimensional scene reconstruction images;
It should be noted that steps S44, S45 and S46 have no fixed execution order; the order shown in Fig. 4 is only an example, and in practice the three steps may also be performed simultaneously.
Step S47: splice the first, second and third groups of three-dimensional scene reconstruction images in three-dimensional space to obtain the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle.
To facilitate understanding of the method, Fig. 5 shows a three-dimensional scene reconstruction flow chart disclosed in another embodiment of the invention. The images acquired by the left camera are called left images, those acquired by the right camera right images, and the rotational acquisition interval of the binocular vision camera runs from t1 to tn. Starting from moment t1, the left and right cameras synchronously output the images acquired at each moment to ECU21. ECU21 performs three-dimensional scene reconstruction based on the SFM principle on the left-camera image data (the left images at t1, t2, ..., tn-1 and tn), obtaining the first group of three-dimensional scene reconstruction images; it performs three-dimensional scene reconstruction based on the SFM principle on the right-camera image data (the right images at t1, t2, ..., tn-1 and tn), obtaining the second group of three-dimensional scene reconstruction images; and it performs three-dimensional scene reconstruction based on the SLAM principle on the simultaneously acquired left-camera image data (the left images at t1 to tn) and right-camera image data (the right images at t1 to tn), obtaining the third group of three-dimensional scene reconstruction images. Finally, ECU21 splices the first, second and third groups in three-dimensional space to obtain the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle.
With reference to Fig. 5, the process of performing three-dimensional scene reconstruction from the image data acquired by the left camera and by the right camera in the above embodiment is explained as follows:
(1) The process in which ECU21 performs three-dimensional scene reconstruction based on the SFM principle on the image data acquired by the left camera, obtaining the first group of three-dimensional scene reconstruction images:
The left images at moments tn-2, tn-1 and tn are taken as an example; the process is the same for the left images at the other moments.
Feature points are extracted from and matched between the tn-2 left image (not shown in Fig. 5) and the tn-1 left image, giving N pairs of matched points; feature points are extracted from and matched between the tn-1 left image and the tn left image, giving M pairs of matched points; the same-name matched points L = N ∩ M can then be determined based on the grey-level correlation principle. The vehicle displacement between moments tn-2, tn-1 and tn is provided by the vehicle's inertial navigation or wheel-speed sensors, and based on the stereoscopic vision matching principle a series of spatial feature points can be described in the wtn and wtn-1 coordinate systems. The coordinate system is defined as follows: the left camera optical centre is the origin, the left camera optical axis is the Z direction, vertically downward with respect to the left camera body is the Y direction, and the X direction satisfies the right-hand rule. According to the same-name point relations, the spatial point coordinates described in the wtn-1 system can be converted into the wtn system based on the coordinate transformation principle; by recursion the transformations between the wt1, wt2, ..., wtn systems are obtained, and all three-dimensional reconstruction results can then be described in the wtn system according to the coordinate transformations.
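The same-name point selection L = N ∩ M and the frame-to-frame coordinate recursion can be sketched as below. This is a toy under strong assumptions: matched points are reduced to feature ids, rotations are omitted so each inter-frame transform is a pure translation taken from odometry, and all numbers are invented for illustration.

```python
import numpy as np

# Same-name point selection across three frames: L = N ∩ M
N_pairs = {101, 102, 103, 104}   # feature ids matched between tn-2 and tn-1
M_pairs = {102, 104, 105}        # feature ids matched between tn-1 and tn
L_common = N_pairs & M_pairs     # ids tracked through all three frames

def make_T(t):
    """Homogeneous transform for a pure translation (a full SFM chain
    would also carry a rotation estimated from the matches)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# T_ab maps coordinates expressed in frame a into frame b (odometry-derived).
T_12 = make_T([0.0, 0.0, -1.0])   # camera advanced 1.0 m along +Z
T_23 = make_T([0.0, 0.0, -1.5])   # then a further 1.5 m
T_13 = T_23 @ T_12                # recursion: chain wt1 -> wt2 -> wt3

p_t1 = np.array([0.0, 0.0, 5.0, 1.0])   # a static point 5 m ahead at t1
p_t3 = T_13 @ p_t1
print(sorted(L_common), float(p_t3[2]))  # [102, 104] 2.5
```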
(2) The process in which ECU21 performs three-dimensional scene reconstruction based on the SFM principle on the image data acquired by the right camera, obtaining the second group of three-dimensional scene reconstruction images, is the same as above and is not repeated here.
(3) The process in which ECU21 performs three-dimensional scene reconstruction based on the simultaneous localization and mapping principle on the image data within the 180° front field of view obtained simultaneously by the left camera and the right camera at different moments, obtaining the third group of three-dimensional scene reconstruction images, comprises:
Define the world coordinate system Wtn, where tn denotes the n-th moment: the origin is the midpoint between the origins of the left and right camera coordinate systems at moment tn, the Z direction is parallel to the left camera optical axis, the Y direction is parallel to the Y direction of the left camera coordinate system, and the X direction satisfies the right-hand rule.
Feature extraction and matching are performed on the left and right camera images at moments tn-1 and tn respectively; suppose there are N and M correctly matched point pairs, where the set of matched feature points from the tn-1 left camera image is denoted Nn-1 and the set from the tn left camera image is denoted Mn.
Let L = Nn-1 ∩ Mn, i.e. L denotes the same-name points on the images acquired by the left camera at moments tn-1 and tn. Based on the stereoscopic vision matching principle, the three-dimensional coordinates of these same-name points in the world coordinate systems Wtn-1 and Wtn can be obtained; from the same-name point correspondences at different moments, the relation between the world coordinate systems Wtn-1 and Wtn follows from the coordinate transformation principle. The three-dimensional point coordinates obtained at different moments can thus all be transformed into, and described in, one common world coordinate system, reconstructing the three-dimensional scene of the image sequence across moments.
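The "stereoscopic vision matching principle" step, i.e. recovering a three-dimensional coordinate for a matched left/right point pair, can be illustrated for an ideal rectified pinhole pair. The focal length and baseline below are assumed values, not parameters of the described camera.

```python
def triangulate(xl_px: float, xr_px: float, y_px: float,
                f_px: float, baseline_m: float):
    """Rectified stereo triangulation: disparity d = xl - xr, depth Z = f*B/d.
    Pixel coordinates are taken relative to the principal point."""
    d = xl_px - xr_px
    Z = f_px * baseline_m / d
    X = xl_px * Z / f_px
    Y = y_px * Z / f_px
    return X, Y, Z

X, Y, Z = triangulate(50.0, 30.0, 10.0, f_px=800.0, baseline_m=0.4)
print(Z)  # 16.0 (metres)
```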
(4) The process in which ECU21 splices the first, second and third groups of three-dimensional scene reconstruction images in three-dimensional space to obtain the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle comprises:
Suppose the left and right cameras each acquire n frames from moment t1 to moment tn. Let the left camera coordinate system at tn be the wl system (origin at the left camera optical centre, Z along the left camera optical axis, Y vertically downward with respect to the camera body, X satisfying the right-hand rule); let the right camera coordinate system at tn be the wr system (defined analogously for the right camera); and let the world coordinate system at tn be the w system (origin at the midpoint between the origins of the left and right camera coordinate systems at tn, Z parallel to the left camera optical axis, Y parallel to the left camera Y direction, X satisfying the right-hand rule). With B the baseline length, the three coordinate systems wl, wr and w are related by:

[Xwl Ywl Zwl]^T = [Xw Yw Zw]^T + [B/2 0 0]^T (4)

[Xwr Ywr Zwr]^T = [Xw Yw Zw]^T + [-B/2 0 0]^T (5)

According to formulas (4) and (5), the three-dimensional scene reconstruction results in the wl and wr coordinate systems can be uniformly transformed into the w coordinate system, realizing the three-dimensional reconstruction of the scene within the 180° range.
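Formulas (4) and (5) amount to shifting reconstructions by half the baseline along X; inverted, they map wl- and wr-frame points into the common w frame. The baseline and point coordinates below are illustrative values.

```python
def wl_to_w(p_wl, B):
    """Invert formula (4): [Xw, Yw, Zw] = [Xwl, Ywl, Zwl] - [B/2, 0, 0]."""
    x, y, z = p_wl
    return (x - B / 2.0, y, z)

def wr_to_w(p_wr, B):
    """Invert formula (5): [Xw, Yw, Zw] = [Xwr, Ywr, Zwr] + [B/2, 0, 0]."""
    x, y, z = p_wr
    return (x + B / 2.0, y, z)

B = 0.5  # baseline length in metres (assumed)
# The same physical point reconstructed in wl and in wr must agree in w:
print(wl_to_w((1.25, 0.5, 6.0), B))  # (1.0, 0.5, 6.0)
print(wr_to_w((0.75, 0.5, 6.0), B))  # (1.0, 0.5, 6.0)
```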
In traditional schemes, when an autonomous vehicle drives through a turn, obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and recognition of the traversable area are typically realized on the basis of a high-precision map, from which the optimal passable path of the vehicle is chosen. Traditional schemes therefore depend heavily on the high-precision map; yet as soon as the actual scene differs slightly from the map, the vehicle's localization accuracy degrades or even fails, so that the vehicle can no longer perform the automatic turn.
To solve this problem, after obtaining the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle, the invention further chooses the optimal passable path for the turn on the basis of this three-dimensional scene image.
Referring to Fig. 6, a flow chart of a method, disclosed in one embodiment of the invention, for choosing the optimal passable path for a vehicle turn based on the three-dimensional scene reconstruction image, comprising the steps:
Step S61: perform obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and recognition of the traversable area on the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle, obtaining a recognition result;
Specifically, the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle is segmented into regions based on depth and grey level, yielding T segmented regions, T being a positive integer.
For each segmented region, the following operations are performed:
Determine the total number of pixels N in the current segmented region and the number of pixels M satisfying the plane-fitting equation model; judge whether the ratio of M to N is less than a threshold parameter. If the ratio is not less than the threshold parameter, the current segmented region is determined to be a road area, and the outer boundary of the road area is taken as the boundary on the two sides of the road to be turned into; if the ratio is less than the threshold parameter, the current segmented region is determined to be an obstacle area. After the area types of all T segmented regions have been determined, the traversable area L is obtained from all road areas and all obstacle areas according to formulas (2) and (3), with the distance from the traversable area L to any obstacle area not less than a safety distance threshold. Formulas (2) and (3) are as follows:

L ∩ A = L (2)

L ∩ B = ∅ (3)

where A is the union of all road areas in the T segmented regions, B is the union of all obstacle areas in the T segmented regions, and ∅ denotes the empty set.
For illustration, suppose a segmented region among the T regions contains N pixels with three-dimensional coordinates p1, p2, p3, ..., pN, and the plane-fitting equation model is ax + by + cz + d = 0. If M points satisfy the plane-fitting equation and M/N ≥ thre, where thre is a threshold parameter generally set to 0.8, the current segmented region is determined to be a road area and its outer boundary is taken as the boundary on the two sides of the road to be turned into; if M/N < thre, the current segmented region is determined to be an obstacle area, and the obstacle class can be further determined by deep learning combined with prior information about moving targets such as vehicles and pedestrians.
The traversable area is identified on the basis of the recognized road areas and obstacle areas; since obstacle areas and road areas may interleave, a reasonable traversable area must be determined by a path planning method.
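The plane-fitting classification of a segmented region can be sketched as follows. As a simplification the plane is fitted in the explicit form z = a·x + b·y + d by least squares rather than the general ax + by + cz + d = 0; thre = 0.8 follows the text, while the inlier tolerance is an assumed value.

```python
import numpy as np

def classify_region(points, thre=0.8, tol=0.05):
    """Least-squares plane fit z = a*x + b*y + d; the region counts as
    'road' when the inlier ratio M/N reaches the threshold thre."""
    P = np.asarray(points, dtype=float)
    A = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
    coef, *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    residual = np.abs(A @ coef - P[:, 2])
    ratio = float(np.mean(residual < tol))   # M / N
    return "road" if ratio >= thre else "obstacle"

# A flat 1 m x 1 m patch versus a patch with a 1 m height step across it
flat = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
step = [(x * 0.1, y * 0.1, 0.0 if x < 5 else 1.0)
        for x in range(10) for y in range(10)]
print(classify_region(flat), classify_region(step))  # road obstacle
```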
The path planning criterion is as follows:
After the area types of all T segmented regions have been determined, the traversable area L is obtained from all road areas and all obstacle areas according to formulas (2) and (3), with the distance from the traversable area L to any obstacle area not less than a safety distance threshold (generally 0.5 m). Formulas (2) and (3) are as follows:

L ∩ A = L (2)

L ∩ B = ∅ (3)

where A is the union of all road areas in the T segmented regions, with three-dimensional coordinate points a1, a2, a3, ..., an, and B is the union of all obstacle areas in the T segmented regions, with three-dimensional coordinate points b1, b2, b3, ..., bm.
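Formulas (2) and (3) are set constraints: a traversable area must lie entirely within road areas and share nothing with obstacle areas. The sketch below checks them on a discretised toy grid; A and B become sets of cells, and the 0.5 m safety buffer is omitted for brevity.

```python
road = {(0, 0), (0, 1), (1, 0), (1, 1)}   # A: cells labelled as road area
obstacles = {(1, 1)}                       # B: cells labelled as obstacle area

def is_traversable(L, A=road, B=obstacles):
    """Formula (2): L ∩ A == L, and formula (3): L ∩ B == ∅."""
    return (L & A == L) and not (L & B)

print(is_traversable({(0, 0), (0, 1)}))  # True
print(is_traversable({(0, 0), (1, 1)}))  # False, touches an obstacle cell
```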
Step S62: choose from the recognition result the optimal passable path satisfying the current conditions;
The optimal passable path is determined as follows:
A connected-domain threshold d is set based on the vehicle size (width W and length L); every region satisfying ||ai − bj|| > d is passable. Suppose there are N candidate passable paths, where the trace points on each path are taken as the centre points of the passable boundary at the same longitudinal distance; let the target point ahead of the vehicle be point a and the current vehicle position be point b, and let the vehicle trace points on the i-th planned path be Ni1, Ni2, Ni3, ..., Nil. Among all candidate paths, the path satisfying ||Nik − bj|| > safedis for every trace point and Distance = min(N1, N2, N3, ..., NN) is the optimal passable path, where k indexes the k-th vehicle trace point, j indexes the j-th boundary point on an obstacle, and safedis denotes the safety distance, generally 0.5 m.
Step S63: control the turning actuator of the vehicle to execute the turn according to the optimal passable path.
In summary, when ECU21 determines from the navigation information returned by GPS module 22 that the distance between the vehicle's current position and the turning position ahead has reached the preset distance, ECU21 controls the left camera 11 and right camera 12 of the binocular vision camera hardware system 23 to rotate synchronously back and forth within the preset rotation range and acquire image data within the 180° front field of view. Based on the structure-from-motion principle and the simultaneous localization and mapping principle, three-dimensional scene reconstruction is performed on the image data acquired by the left camera and by the right camera, yielding the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle. Based on this image, obstacle recognition, discrimination of the boundaries on both sides of the road section to be turned into, and traversable-area recognition are carried out, and the optimal passable path is determined from the recognition result, so that a stable and reliable automatic turning function is realized. The invention achieves image acquisition over the 180° viewing-angle range in front of the vehicle with only one set of binocular vision camera hardware 23 with a one-dimensional rotation function, thereby enlarging the field of view of a single binocular vision camera, enhancing the vehicle's perception of the environment ahead, reducing the hardware cost of three-dimensional scene reconstruction, and at the same time reducing the dependence on high-precision maps found in traditional schemes.
It should be understood that completion of the automatic turn can be determined from the display content of the vehicle display, the steering wheel angle, the wheel speed signals and the turn signal. In practical applications, ECU21 can judge whether the turn is complete by checking whether the steering wheel angle has returned to the default angle and whether the rotation speeds of the inner and outer front wheels are identical. When the steering wheel angle has returned to the default angle and the inner and outer front wheel speeds are identical, ECU21 sends a stop instruction to the stepper motor controller, through which the stepper motor drives the binocular vision camera hardware system 23 back to the position parallel to the vehicle centre line.
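The turn-completion check described above can be sketched as a simple predicate; the tolerance values are assumptions, not specified in the embodiment.

```python
def turn_complete(steering_deg, inner_front_rps, outer_front_rps,
                  default_deg=0.0, angle_tol=1.0, speed_tol=0.05):
    """Turn is deemed complete when the steering wheel angle has returned to
    its default and the inner and outer front wheel speeds match again."""
    return (abs(steering_deg - default_deg) <= angle_tol
            and abs(inner_front_rps - outer_front_rps) <= speed_tol)

print(turn_complete(0.2, 10.0, 10.02))  # True, steering recovered, speeds equal
print(turn_complete(15.0, 9.0, 10.5))   # False, still turning
```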
Finally, it should be noted that relational terms such as first and second are used herein merely to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that comprises it.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A binocular vision camera hardware system, characterized by comprising: a binocular vision camera, a speed-reduction transmission device, a stepper motor and a stepper motor controller, wherein the binocular vision camera comprises: a left camera, a right camera and a bracket fixing the left camera and the right camera;
the output end of the stepper motor controller is connected with the control end of the stepper motor, the motor output shaft of the stepper motor is connected with the central rotating shaft of the speed-reduction transmission device, and the bracket fixing the left camera and the right camera is mounted on the central rotating shaft of the speed-reduction transmission device;
the stepper motor, according to the control signal output by the stepper motor controller, drives the speed-reduction transmission device to rotate and drives the bracket mounted on the central rotating shaft of the speed-reduction transmission device to rotate, so that the left camera and the right camera fixed on the bracket rotate synchronously back and forth within a preset rotation range.
2. The binocular vision camera hardware system according to claim 1, characterized in that the rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera is equal to a preset angle.
3. A three-dimensional scene reconstruction system, characterized by comprising: an electronic control unit ECU, a GPS module and the binocular vision camera hardware system of claim 1, wherein the left camera and the right camera of the binocular vision camera are placed horizontally and symmetrically about the vehicle centre line;
the input end of the ECU is connected respectively with the output end of the GPS module, the image output end of the left camera and the image output end of the right camera, and the output end of the ECU is connected with the control end of the stepper motor controller in the binocular vision camera; the ECU is configured to: when determining, from the navigation information returned by the GPS module, that the distance between the vehicle and the turning position ahead has reached a preset distance, send a trigger instruction to the stepper motor controller in the binocular vision camera, whereby the stepper motor in the binocular vision camera drives the left camera and the right camera, at a preset motor angular velocity, to rotate synchronously back and forth within a preset rotation range about a position parallel to the vehicle body and along the vehicle centre line; obtain the image data acquired by the left camera and by the right camera within the 180° field of view in front of the vehicle; perform three-dimensional scene reconstruction based on the structure-from-motion principle on the image data acquired by the left camera within the 180° front field of view, obtaining a first group of three-dimensional scene reconstruction images; perform three-dimensional scene reconstruction based on the structure-from-motion principle on the image data acquired by the right camera within the 180° front field of view, obtaining a second group of three-dimensional scene reconstruction images; perform three-dimensional scene reconstruction based on the simultaneous localization and mapping principle on the image data within the 180° front field of view obtained simultaneously by the left camera and the right camera at different moments, obtaining a third group of three-dimensional scene reconstruction images; splice the first, second and third groups of three-dimensional scene reconstruction images in three-dimensional space to obtain the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)) and a is the horizontal field-of-view angle of the binocular vision camera.
4. The three-dimensional scene reconstruction system according to claim 3, characterized in that the preset motor angular velocity satisfies formula (1), the expression of formula (1) being:
w = N * c (1)
where w is the motor angular velocity in °/s, N is the frame rate of the binocular vision camera in fps, and c is the preset rotation angle of the camera optical axis between two adjacent frames of the binocular vision camera, in °.
5. A three-dimensional scene reconstruction method, characterized by comprising:
obtaining the distance between the vehicle at the current moment and the turning position ahead;
when it is determined that the distance has reached a preset distance, sending a trigger instruction to the stepper motor controller in the binocular vision camera, whereby the stepper motor in the binocular vision camera drives the left camera and the right camera, at a preset motor angular velocity, to rotate synchronously back and forth within a preset rotation range about a position parallel to the vehicle body and along the vehicle centre line, wherein the preset rotation range is (-(90°-a/2), (90°-a/2)) and a is the horizontal field-of-view angle of the binocular vision camera;
obtaining the image data acquired by the left camera and by the right camera within the 180° field of view in front of the vehicle;
performing three-dimensional scene reconstruction based on the structure-from-motion principle on the image data acquired by the left camera within the 180° front field of view, obtaining a first group of three-dimensional scene reconstruction images;
performing three-dimensional scene reconstruction based on the structure-from-motion principle on the image data acquired by the right camera within the 180° front field of view, obtaining a second group of three-dimensional scene reconstruction images;
performing three-dimensional scene reconstruction based on the simultaneous localization and mapping principle on the image data within the 180° front field of view obtained simultaneously by the left camera and the right camera at different moments, obtaining a third group of three-dimensional scene reconstruction images;
performing coordinate-system conversion on the first, second and third groups of three-dimensional scene reconstruction images to obtain the three-dimensional scene reconstruction image within the 180° viewing-angle range in front of the vehicle.
6. The three-dimensional scene reconstruction method according to claim 5, wherein after the three-dimensional scene reconstruction image within the 180° front field of view of the vehicle is obtained, the method further comprises:
performing obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and traversable-area recognition on the three-dimensional scene reconstruction image within the 180° front field of view of the vehicle, to obtain a recognition result;
selecting, from the recognition result, an optimal path that satisfies the current conditions;
controlling the steering actuator of the vehicle to perform the turn according to the optimal path.
7. The three-dimensional scene reconstruction method according to claim 6, wherein performing obstacle recognition, recognition of the boundaries on both sides of the road to be turned into, and traversable-area recognition on the three-dimensional scene reconstruction image within the 180° front field of view of the vehicle, to obtain a recognition result, comprises:
performing region segmentation, based on depth and gray scale, on the three-dimensional scene reconstruction image within the 180° front field of view of the vehicle, to obtain T segmented regions, T being a positive integer;
performing the following operations for each segmented region:
determining the total number of pixels N in the current segmented region and the number of pixels M that satisfy a plane-fitting equation model;
judging whether the ratio of the pixel number M to the total pixel number N is less than a threshold parameter;
if the ratio is not less than the threshold parameter, determining that the current segmented region is a road region, and taking the outer boundary of the road region as the boundaries on both sides of the road to be turned into;
if the ratio is less than the threshold parameter, determining that the current segmented region is an obstacle region;
after the region types of all T segmented regions have been determined, obtaining the traversable area L from all road regions and all obstacle regions among the T segmented regions according to formula (2) and formula (3), the distance from the traversable area L to any obstacle region being no less than a safety distance threshold, formula (2) and formula (3) being as follows:
L ∩ A = L (2);
L ∩ B = ∅ (3);
where A is the union of all road regions among the T segmented regions, B is the union of all obstacle regions among the T segmented regions, and ∅ denotes the empty set.
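The road/obstacle test in claim 7 (inlier ratio M/N of a fitted plane) and the set conditions of formulas (2) and (3) can be sketched as follows. The inlier tolerance, the threshold value, and the grid-cell representation are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def classify_region(points, threshold=0.8, tol=0.05):
    """Classify an (N, 3) region as 'road' or 'obstacle' by the fraction M/N
    of points fitting a least-squares plane z = a*x + b*y + c within tol."""
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(xy1, points[:, 2], rcond=None)
    residuals = np.abs(xy1 @ coeffs - points[:, 2])
    M = np.count_nonzero(residuals < tol)  # pixels satisfying the plane model
    N = len(points)
    return "road" if M / N >= threshold else "obstacle"

# A flat patch fits the plane well; a vertical patch does not.
flat = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
wall = np.array([[0.0, y, z] for y in range(5) for z in range(5)])
print(classify_region(flat))  # road
print(classify_region(wall))  # obstacle

# Formulas (2) and (3) as set operations on (assumed) grid cells:
road_cells = {(0, 0), (0, 1), (1, 0), (1, 1)}   # A: union of road regions
obstacle_cells = {(2, 2)}                        # B: union of obstacle regions
L = {(0, 0), (0, 1)}                             # candidate traversable area
assert L & road_cells == L           # (2): L is entirely inside road regions
assert L & obstacle_cells == set()   # (3): L touches no obstacle region
```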
8. The three-dimensional scene reconstruction method according to claim 5, further comprising:
when it is determined that the steering wheel angle has returned to a default angle and the rotation speeds of the inner and outer front wheels are the same, sending a stop instruction to the stepper motor controller, so that the stepper motor controller controls the stepper motor to drive the binocular vision camera back to the position parallel to the vehicle center line.
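The coordinate conversion step in claim 5 amounts to applying a rigid transform (rotation plus translation) that maps each reconstructed point cloud into a common vehicle-centered frame before merging. The extrinsic values below (baseline, identity rotation) are assumptions for the sketch only; the patent does not give camera extrinsics.

```python
import numpy as np

def to_vehicle_frame(points, R, t):
    """Apply the rigid transform p' = R @ p + t to an (N, 3) point cloud."""
    return points @ R.T + t

# Illustrative extrinsics: left and right cameras offset half a baseline
# to either side of the vehicle center line (values are assumptions).
baseline = 0.12  # assumed distance in meters between the two cameras
R_identity = np.eye(3)
t_left = np.array([-baseline / 2, 0.0, 0.0])
t_right = np.array([+baseline / 2, 0.0, 0.0])

cloud_left = np.array([[1.0, 0.0, 5.0]])   # e.g. from SfM on the left camera
cloud_right = np.array([[1.0, 0.0, 5.0]])  # e.g. from SfM on the right camera

# Merge both reconstructions in the common vehicle frame.
merged = np.vstack([
    to_vehicle_frame(cloud_left, R_identity, t_left),
    to_vehicle_frame(cloud_right, R_identity, t_right),
])
print(merged.shape)  # (2, 3)
```

In practice each of the three reconstructions (left SfM, right SfM, binocular SLAM) would carry its own rotation matrix and translation vector, estimated from calibration, rather than the identity used here.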
CN201710576935.8A 2017-07-14 2017-07-14 Binocular vision camera hardware system, three-dimensional scene reconstruction system and method Active CN109254579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710576935.8A CN109254579B (en) 2017-07-14 2017-07-14 Binocular vision camera hardware system, three-dimensional scene reconstruction system and method


Publications (2)

Publication Number Publication Date
CN109254579A true CN109254579A (en) 2019-01-22
CN109254579B CN109254579B (en) 2022-02-25

Family

ID=65051208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710576935.8A Active CN109254579B (en) 2017-07-14 2017-07-14 Binocular vision camera hardware system, three-dimensional scene reconstruction system and method

Country Status (1)

Country Link
CN (1) CN109254579B (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976460A (en) * 2010-10-18 2011-02-16 胡振程 Method for generating virtual view images of a vehicular multi-lens camera surround-view system
US20110316980A1 (en) * 2008-12-22 2011-12-29 Nederlandse Organisatie voor toegepastnatuurweten schappelijk Onderzoek TNO Method of estimating a motion of a multiple camera system, a multiple camera system and a computer program product
CN102592117A (en) * 2011-12-30 2012-07-18 杭州士兰微电子股份有限公司 Three-dimensional object identification method and system
CN102609934A (en) * 2011-12-22 2012-07-25 中国科学院自动化研究所 Multi-target segmentation and tracking method based on depth images
CN103048995A (en) * 2011-10-13 2013-04-17 中国科学院合肥物质科学研究院 Wide-angle binocular vision recognition and positioning device for service robots
CN103197494A (en) * 2013-03-18 2013-07-10 哈尔滨工业大学 Binocular camera device for recovering three-dimensional scene information
CN103247075A (en) * 2013-05-13 2013-08-14 北京工业大学 Indoor scene three-dimensional reconstruction method based on a variational mechanism
CN103390268A (en) * 2012-05-11 2013-11-13 株式会社理光 Object area segmentation method and device
CN103577790A (en) * 2012-07-26 2014-02-12 株式会社理光 Road turn type detection method and device
US20160282874A1 (en) * 2013-11-08 2016-09-29 Hitachi, Ltd. Autonomous Driving Vehicle and Autonomous Driving System
CN106066645A (en) * 2015-04-21 2016-11-02 赫克斯冈技术中心 Method and control system for measuring and mapping terrain while operating a bulldozer
US20170023937A1 (en) * 2015-07-24 2017-01-26 The Trustees Of The University Of Pennsylvania Systems, devices, and methods for on-board sensing and control of micro aerial vehicles
CN106441151A (en) * 2016-09-30 2017-02-22 中国科学院光电技术研究所 Three-dimensional object Euclidean-space reconstruction and measurement system based on the fusion of vision and active optics
CN106485233A (en) * 2016-10-21 2017-03-08 深圳地平线机器人科技有限公司 Drivable area detection method and device, and electronic device
CN106741265A (en) * 2017-01-04 2017-05-31 芜湖德力自动化装备科技有限公司 AGV platform
CN106873580A (en) * 2015-11-05 2017-06-20 福特全球技术公司 Autonomous driving at intersections based on perception data
CN106940186A (en) * 2017-02-16 2017-07-11 华中科技大学 Autonomous robot localization and navigation method and system
CN107067436A (en) * 2015-11-17 2017-08-18 株式会社东芝 Pose estimation device and vacuum cleaner system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAOYIN ZHOU: "The combination of SfM and monocular SLAM", IEEE *
刘玉 (LIU Yu) et al.: "3D reconstruction method for space targets based on image sequences", Manned Spaceflight (《载人航天》) *
彭宝利 (PENG Baoli): "Cement Production Patrol Inspector" (《水泥生产巡检工》), 29 February 2012 *
徐维鹏 (XU Weipeng) et al.: "A survey of virtual-real occlusion handling in augmented reality", Journal of Computer-Aided Design & Computer Graphics (《计算机辅助设计与图形学学报》) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148216A * 2019-05-24 2019-08-20 中德(珠海)人工智能研究院有限公司 Double-dome cameras and three-dimensional modeling method of double-dome cameras
CN110148216B (en) * 2019-05-24 2023-03-24 中德(珠海)人工智能研究院有限公司 Three-dimensional modeling method of double-dome camera
CN111174765A (en) * 2020-02-24 2020-05-19 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance
CN111174765B (en) * 2020-02-24 2021-08-13 北京航天飞行控制中心 Planet vehicle target detection control method and device based on visual guidance
CN113085745A (en) * 2021-04-27 2021-07-09 重庆金康赛力斯新能源汽车设计院有限公司 Display method and system for head-up display
CN114199235A * 2021-11-29 2022-03-18 珠海一微半导体股份有限公司 Positioning system and positioning method based on a sector depth camera
CN114199235B (en) * 2021-11-29 2023-11-03 珠海一微半导体股份有限公司 Positioning system and positioning method based on sector depth camera
CN114119758A (en) * 2022-01-27 2022-03-01 荣耀终端有限公司 Method for acquiring vehicle pose, electronic device and computer-readable storage medium
CN114119758B (en) * 2022-01-27 2022-07-05 荣耀终端有限公司 Method for acquiring vehicle pose, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
CN109254579B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN109254579A Binocular vision camera hardware system, three-dimensional scene reconstruction system and method
JP7045628B2 Vehicle equipment, vehicles, and computer programs for controlling vehicle behavior
CN107600067B Autonomous parking system and method based on multi-vision inertial navigation fusion
CN103954275B Vision navigation method based on lane line detection and GIS map information
Dickmanns et al. An integrated spatio-temporal approach to automatic visual guidance of autonomous vehicles
CN108983781A Environment detection method in an unmanned vehicle target acquisition system
US20200175720A1 Vehicle, vehicle positioning system, and vehicle positioning method
EP2209091B1 System and method for object motion detection based on multiple 3D warping and vehicle equipped with such system
CN107167139A Vision positioning and navigation method and system for an intelligent inspection mobile robot
CN101941438B Intelligent detection control device and method for safe following distance
WO2015024407A1 Power-robot-based binocular vision navigation system and method
CN105103210B Method and apparatus for guiding a vehicle in the surroundings of an object
CN105946853A Long-distance automatic parking system and method based on multi-sensor fusion
CN110497901A Automatic parking space search method and system based on robot VSLAM technology
WO2019173547A1 Odometry system and method for tracking traffic lights
CN109144057A Guided vehicle based on real-time environment modeling and autonomous path planning
CN110297491A Semantic navigation method and system based on multiple structured-light binocular IR cameras
CN109752008A Multi-mode cooperative positioning system and method for intelligent vehicles, and intelligent vehicle
CN106289285A Robot joint scene scouting map and construction method
CN102365187A Information display apparatus
CN105159291B Fleet intelligent obstacle avoidance device and method based on a cyber-physical network
CN110163963B Mapping device and mapping method based on SLAM
CN110126824A Commercial vehicle AEBS integrating a binocular camera and millimeter-wave radar
CN110263607A Road-level global environment map generation method for unmanned driving
CN108107897B Real-time sensor control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant