GB2513703B - Method and apparatus for three-dimensional imaging of at least a partial region of a vehicle environment


Info

Publication number
GB2513703B
Authority
GB
United Kingdom
Prior art keywords
virtual
images
camera
vehicle
partial region
Prior art date
Legal status
Active
Application number
GB1403384.9A
Other versions
GB201403384D0 (en)
GB2513703A (en)
Inventor
Domingo Esparza Garcia Jose
Helmle Michael
Current Assignee
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date
Filing date
Publication date
Application filed by Robert Bosch GmbH
Publication of GB201403384D0
Publication of GB2513703A
Application granted
Publication of GB2513703B


Classifications

    • G06T 15/205 — 3D image rendering; geometric effects; perspective computation; image-based rendering
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/593 — Image analysis; depth or shape recovery from multiple images, from stereo images
    • H04N 7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • B60R 2300/105 — Vehicle viewing arrangements using cameras and displays, characterised by the use of multiple cameras
    • B60R 2300/303 — Vehicle viewing arrangements, characterised by image processing using joined images, e.g. multiple camera images
    • B60R 2300/60 — Vehicle viewing arrangements, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • G06T 2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/10012 — Image acquisition modality: stereo images
    • G06T 2207/30252 — Subject of image: vehicle exterior; vicinity of vehicle
    • G06T 2207/30261 — Subject of image: obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Description

Title
Method and apparatus for three-dimensional imaging of at least a partial region of a vehicle environment
The present invention relates to a method and an apparatus for three-dimensional imaging of at least a partial region of a vehicle environment with the aid of a camera system comprising at least two cameras spaced apart from one another, which are arranged on a vehicle and whose respective camera perspective is defined by the position of an optical axis. The method comprises the following steps: capturing the partial region using the two spaced-apart cameras, and storing the images obtained by capturing the partial region.
Prior art

A method for generating an image of the environment of a motor vehicle is known from DE 10 2009 005 505 A1. A camera mounted on a motor vehicle captures images of the environment. These camera images are converted as if they had been captured from a virtual camera position arranged above the motor vehicle. The virtual camera images are then corrected by adding further information obtained with a structure-from-motion method. For this purpose, the vehicle is moved and a plurality of images are captured with the camera during the movement. This yields temporally and spatially spaced images, from which spatial structures of the environment can be calculated. The representation of the vehicle environment from the virtual camera position is a two-dimensional plan view in which the image information is projected onto the road plane.

DE 103 26 943 A1 discloses, inter alia, a method for determining the movement of an autonomous vehicle and/or for detecting objects in front of it. A camera mounted on the vehicle captures consecutive images while the vehicle is moving. Epipolar geometry information based on orientation information of the autonomous vehicle is calculated by an epipolar computing unit. A movement analysis unit analyses the movement of the autonomous vehicle based on the results of the epipolar computing unit. A 3D information analysis unit calculates 3D information of an object appearing in front of the vehicle on the basis of the calculated epipolar geometry information. This 3D information can be, for example, the distance between the autonomous vehicle and the object, or information on the 3D shape of the object. With this approach, 3D detection of the vehicle environment is possible only while the vehicle is moving.
Disclosure of the invention
The method according to the invention is characterised by the following steps: defining a virtual camera perspective for each camera by a virtual optical axis which differs in its position from the optical axis of this camera, calculating a virtual image of the partial region for each virtual camera perspective from the stored images, performing a correspondence analysis to establish a relationship, in particular a spatial relationship, between the generated virtual images, and calculating a three-dimensional image of the partial region based on the correspondence analysis and the generated virtual images.
By means of the method according to the invention, at least a partial region of a vehicle environment can be three-dimensionally imaged even when the vehicle is stationary.
The method according to the invention can preferably be performed with cameras that are installed as standard on the vehicle, so that no retrofitting of cameras is necessary. Preferably, cameras in the outer region of the vehicle are used, which serve as standard for generating a view around the vehicle. Present-day vehicles are usually equipped with 3 to 5 cameras in the outer region.
If a plurality of partial regions are to be three-dimensionally imaged with the method according to the invention, more than two, in particular more than five, cameras can also be used. For example, it is also conceivable to use further cameras in addition to cameras already arranged, in particular as standard, on the vehicle.
The method can be performed with identical cameras, such as, for example, a plurality of cameras installed as standard in the outer region of a vehicle. However, it is also possible to use for the method different cameras, such as, for example, a camera in the passenger compartment combined with cameras in the outer region of a vehicle.
Furthermore, it is conceivable to combine the method according to the invention with the method known from DE 103 26 943 A1 or similar methods. For example, a camera can additionally capture consecutive images when the vehicle is moving, from which a three-dimensional image of the vehicle environment is calculated. Further cameras could be fastened to the vehicle for this purpose.
The method according to the invention provides the driver of the vehicle with the possibility of virtually moving freely through the three-dimensionally imaged environment and of virtually assuming different viewing directions. The driver can thus, for example, better detect obstacles in the vehicle environment.
With the aid of the method according to the invention, the driver of a vehicle is assisted with the movement of the vehicle. Furthermore, with the aid of the method according to the invention, collisions of the vehicle with objects can be avoided in good time.
The method according to the invention does not require any information at all from odometry sensors that may be present on the vehicle. If odometry sensors are provided as standard on the vehicle, their functioning can be checked with the aid of the 3D environment obtained by the method. For this purpose, objects detected in the 3D environment are tracked while the vehicle is moving, and the information thereby obtained, for example varying distances from an object, is compared with the odometry measurements.
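Such a plausibility check is not specified in detail here; the following minimal Python sketch illustrates one way it could look, assuming tracked 3D positions of static object points and per-step odometry distances (all names and the tolerance value are illustrative assumptions):

```python
import numpy as np

def check_odometry_plausibility(tracked_points, odo_distances, tol=0.05):
    """Compare the ego-motion implied by tracked static 3D object points
    with the travelled distance reported by the odometry sensors.

    tracked_points: list of (N, 3) arrays, vehicle-frame positions of the
        same static object points at consecutive time steps.
    odo_distances:  (T-1,) array of per-step distances from odometry.
    Returns a boolean array, True where both sources agree within tol metres.
    """
    implied = []
    for prev, curr in zip(tracked_points[:-1], tracked_points[1:]):
        # A static object appears to shift opposite to the ego-motion, so
        # the mean point displacement approximates the travelled distance.
        implied.append(np.linalg.norm((curr - prev).mean(axis=0)))
    return np.abs(np.array(implied) - np.asarray(odo_distances)) < tol
```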
It is also conceivable, based on the calculated three-dimensional vehicle environment, to virtually track an object which is moving through a plurality of partial regions of the vehicle environment.
The subclaims show preferred developments of the invention.
In a particularly advantageous embodiment of the invention, the virtual optical axes of the cameras are arranged parallel to one another in the virtual camera perspectives. The cameras arranged as standard in the outer region of a vehicle are positioned in such a way that they enable a 360° view around the vehicle. In order to ensure this view, the cameras are oriented differently from one another. For example, a camera in the front region of a vehicle has an optical axis pointing in the direction of travel, while a camera in the side region of the vehicle is oriented along an optical axis transverse to the direction of travel. Owing to these different orientations, the images of an identical partial region captured by two cameras differ considerably from one another. Arranging the virtual axes in parallel reduces these differences, so that the calculation of a three-dimensional vehicle environment is markedly simplified.
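As a minimal sketch of how such parallel virtual axes could be constructed (the axis directions, the common target direction and the function name are illustrative assumptions, not taken from the patent), each camera's optical axis can be mapped onto one common direction by the smallest connecting rotation:

```python
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation matrix mapping unit vector a onto unit vector b
    (Rodrigues formula; the antiparallel case a = -b is excluded here)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Illustrative vehicle-frame optical axes of a front and a right-side camera,
# both rotated onto one common direction so the virtual axes become parallel:
front_axis = np.array([1.0, 0.0, 0.0])
side_axis = np.array([0.0, 1.0, 0.0])
target = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
R_front = rotation_aligning(front_axis, target)
R_side = rotation_aligning(side_axis, target)
```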
Preferably, each camera is transferred from the camera perspective to the virtual camera perspective with the aid of a virtual rotation about the optical centre of the respective camera. Such a virtual rotation is possible while retaining certain fixed quantities, such as the specific calibration and the position of the optical centre of the respective camera. The specific calibration and the position of the optical centre are thus identical for the real and the virtual camera perspective. In the virtual rotation, not only the optical axis but the entire camera, i.e. also the image plane, is virtually rotated into the virtual camera perspective.
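Because the optical centre and the calibration are retained, such a virtual rotation amounts to a pure image warp under the usual pinhole model: a rotation R about the optical centre induces the homography H = K R K⁻¹, with K the intrinsic matrix. A sketch (the intrinsics are placeholder values; R is taken to express real-camera rays in virtual-camera coordinates):

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],    # placeholder intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def rotate_view(image, K, R):
    """Render the image as if the camera had been rotated by R about its
    optical centre: a pure rotation induces the homography H = K R K^-1."""
    H = K @ R @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```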
Correcting distortions in the real captured images of the partial region with the aid of methods of approximation, in particular by a virtual camera model, simplifies the further processing of the image data and the creation of a 3D image. It is common practice for the cameras in the outer region of a vehicle to use wide-angle lenses, also known as fisheye lenses. With such lenses, the captured images exhibit considerable distortions; for example, straight lines appear as curves. Using known methods of approximation, for example by generating camera models, the distortions can be corrected simply. The parameters which define a camera model can be estimated via the calibration of the real camera whose distortions are to be corrected.
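With OpenCV's equidistant fisheye model as one concrete choice of such a camera model (the patent does not prescribe a specific library; K and D stand for the estimated calibration and are placeholders), the correction could look like this:

```python
import cv2
import numpy as np

def undistort_fisheye(image, K, D):
    """Correct fisheye distortion so that straight scene lines become
    straight image lines again; K (3x3) and D (4 coefficients) come from
    the calibration of the real camera."""
    h, w = image.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, cv2.INTER_LINEAR)
```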
It is also possible for the calculated virtual images to be epipolar-corrected. In the epipolar correction, the two image planes onto which an object point has been projected are transformed in such a way that the epipolar lines of both image planes become collinear with one another. This means that the images captured from the two different camera perspectives are projected onto a common plane, the image sizes are adjusted to one another by scaling, and the pixel rows of the images are brought into correspondence by rotating or displacing the images with respect to one another.
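One possible realisation of this epipolar correction uses OpenCV's stereo rectification; R and T denote the relative pose between the two virtual views, the zero distortion vectors reflect that the images have already been undistorted, and the helper name and the assumption of a common image size are illustrative:

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, K1, K2, R, T, size):
    """Project both virtual views onto a common plane and align their pixel
    rows, so that corresponding object points share an image row."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K1, np.zeros(4), K2, np.zeros(4), size, R, T)
    m1a, m1b = cv2.initUndistortRectifyMap(K1, np.zeros(4), R1, P1, size, cv2.CV_16SC2)
    m2a, m2b = cv2.initUndistortRectifyMap(K2, np.zeros(4), R2, P2, size, cv2.CV_16SC2)
    left = cv2.remap(img1, m1a, m1b, cv2.INTER_LINEAR)
    right = cv2.remap(img2, m2a, m2b, cv2.INTER_LINEAR)
    return left, right, Q   # Q reprojects disparities to 3D later on
```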
If, in a subsequent correspondence analysis, the object points in a second image corresponding to individual object points in a first image are sought, this is a one-dimensional task: through the epipolar correction of both images, corresponding object points lie on a horizontal imaginary line.
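Since rectified rows are aligned, this one-dimensional search can be carried out by an off-the-shelf row-wise matcher. A sketch with OpenCV's semi-global block matching, applied to the 8-bit grayscale rectified pair from the previous sketch (the parameter values are illustrative):

```python
import cv2

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype(float) / 16.0
```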
Advantageously, the virtual images are vertically integrated after the epipolar correction. In the detection of objects in the vehicle environment, vertically running edges of objects are more important, i.e. they contain more information, than horizontally running edges. In order to reduce the amount of data generated by the virtual images, the virtual images can be vertically integrated. The relevant information, i.e. the vertical edges, remains available for the subsequent method steps.
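The integration itself is not spelled out here; one possible reading, sketched in NumPy, collapses each image column into a single value, so that vertical edges survive as gradients of a 1-D profile while the data volume shrinks from H×W to W (the function name is an assumption):

```python
import numpy as np

def vertical_integration(image):
    """Sum each column of a rectified grayscale image. Vertical edges
    remain visible as strong gradients of the resulting 1-D profile."""
    profile = image.astype(float).sum(axis=0)
    edge_strength = np.abs(np.diff(profile))   # vertical structure lives here
    return profile, edge_strength
```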
In a particularly preferred embodiment, the correspondence analysis is performed with the aid of algorithms which have an extended search range for the feature assignment. The cameras in the outer region of a vehicle must be incorporated in the vehicle design for aesthetic reasons. As a result, these cameras are not protected from external disturbing influences such as temperature changes, vibrations and the like, so a perfect epipolar correction of the virtual images is not possible and the subsequent correspondence analysis is made more difficult. Algorithms are known, such as the optical flow, which can be used in the correspondence analysis to detect corresponding object points within a large search range extending in both the horizontal and the vertical direction. Corresponding object points can thus be detected even in images which are not perfectly epipolar-corrected, and a reliable 3D generation can thereby take place.
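A sketch of such an extended, two-dimensional search using OpenCV's dense Farnebäck optical flow (the parameter values are illustrative; the inputs are the 8-bit grayscale rectified images from the sketches above):

```python
import cv2

flow = cv2.calcOpticalFlowFarneback(
    left, right, None,
    pyr_scale=0.5, levels=4, winsize=21,   # the pyramid widens the search range
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
dx = flow[..., 0]   # horizontal component, approximately the disparity
dy = flow[..., 1]   # vertical component, the residual rectification error
```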
The invention further relates to a camera system which can preferably be used in a vehicle. This camera system comprises at least two cameras spaced apart from one another, the respective position of which is defined by the position of an optical axis. The cameras can capture in particular images of the environment of the vehicle. Furthermore, the camera system comprises a storage device which is configured to store images of at least a partial region of a vehicle environment. Finally, according to the invention, the camera system is configured to carry out the following steps: defining a virtual camera perspective for each camera by a virtual optical axis which differs in its position from the optical axis, calculating a virtual image of the partial region for each virtual camera perspective from the stored images, performing a correspondence analysis to establish a relationship, in particular a spatial relationship, between the generated virtual images, calculating a three-dimensional image of the partial region based on the correspondence analysis and the generated virtual images.
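Tying the steps together, a minimal end-to-end sketch that reuses the helpers from the sketches above; `cams` is a hypothetical container for the calibration data (intrinsics K1/K2, fisheye coefficients D1/D2, virtual rotations R_virt1/R_virt2, relative pose R_rel/T_rel of the virtual views, and the common image size), and `matcher` is the block matcher defined earlier:

```python
import cv2

def image_partial_region(img1, img2, cams, matcher):
    """Hypothetical processing chain for one partial region: undistort,
    rotate into the virtual perspectives, rectify, match, reconstruct."""
    img1 = undistort_fisheye(img1, cams.K1, cams.D1)
    img2 = undistort_fisheye(img2, cams.K2, cams.D2)
    img1 = rotate_view(img1, cams.K1, cams.R_virt1)   # virtual perspective 1
    img2 = rotate_view(img2, cams.K2, cams.R_virt2)   # virtual perspective 2
    left, right, Q = rectify_pair(img1, img2, cams.K1, cams.K2,
                                  cams.R_rel, cams.T_rel, cams.size)
    disparity = matcher.compute(left, right).astype(float) / 16.0
    return cv2.reprojectImageTo3D(disparity, Q)       # 3D image of the region
```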
For the camera system according to the invention, an interface is preferably provided which is connected to the vehicle, in particular to an optical display unit of the vehicle. Via this interface, images from the storage device can be output in order to display them, preferably to the driver of the vehicle. For example, an image of the rear area of the vehicle can thereby be shown to the driver.
In a preferred development of the camera system according to the invention, the images which have been created by the camera system can be put together to form an at least partial panoramic representation of the vehicle environment. This makes it easier for the driver to keep an overview of the environment of his/her vehicle, in particular when obstacles are situated in regions of his/her vehicle which are difficult to see.
Advantageous developments of the invention are specified in the subclaims and described in the description.
Drawings
Exemplary embodiments of the invention are explained in more detail with the aid of the drawings and the following description. In the drawings:
Figure 1 shows a vehicle having a plurality of cameras in plan view,
Figure 2 shows a flowchart of the method according to the invention,
Figure 3 shows the vehicle having a plurality of cameras, for which a virtual optical axis has been defined, in plan view,
Figure 4a shows the result of a correspondence analysis taking a house as an example, and
Figure 4b shows the result of a correspondence analysis in the case of not completely epipolar-corrected virtual images, taking the house as an example.
Embodiment of the invention
Figure 1 shows a vehicle 1 from above, on which a front camera 2, two side cameras 3 and a rear camera 4 are arranged. Each camera 2, 3, 4 views a vehicle environment 6 from a different camera perspective which is defined by the position of an optical axis 5.
The positions of the cameras 2, 3, 4 on the vehicle 1 are chosen as standard so as to enable a 360° view around the vehicle 1. The fields of view of the individual cameras 2, 3, 4 overlap in a first partial region 7, in a second partial region 8, in a third partial region 9 and in a fourth partial region 10. According to Figure 1, the partial regions 7, 8, 9, 10 lie on the left and right in front of and behind the vehicle 1 in the direction of travel 11. These partial regions 7, 8, 9, 10 are each captured by two cameras 2, 3, 4 from different camera perspectives. Two respectively adjacent cameras 2, 3, 4, the fields of view of which overlap in the respective partial region 7, 8, 9, 10, each form a camera system 12, by means of which the respective partial region 7, 8, 9, 10 is imaged.
In the case of the first partial region 7, the front camera 2 and the side camera 3 on the right-hand side of the vehicle 1 in the direction of travel 11 form the camera system 12 with which the first partial region 7 is captured. For the second partial region 8, the side camera 3 on the right-hand side of the vehicle 1 in the direction of travel 11 and the rear camera 4 form the camera system 12. The situation is analogous for the third partial region 9 and the fourth partial region 10, as summarised in the sketch below.
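This pairing of overlapping cameras into per-region camera systems can be captured in a small configuration table. In the sketch below the camera labels are hypothetical, and the assignment of the third and fourth regions follows the stated analogy:

```python
# Two adjacent cameras whose fields of view overlap form one camera system 12.
CAMERA_SYSTEMS = {
    7: ("front", "side_right"),    # first partial region
    8: ("side_right", "rear"),     # second partial region
    9: ("rear", "side_left"),      # third partial region (by analogy)
    10: ("side_left", "front"),    # fourth partial region (by analogy)
}
```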
In the following, the method according to the invention will be explained in more detail with reference to Figures 1 to 4b using the partial region 7 by way of example. It will be understood that the method according to the invention can be applied simultaneously to the four partial regions 7, 8, 9, 10. It is, however, also conceivable to provide more than four cameras on the vehicle 1, so that more than the four partial regions 7, 8, 9, 10 described can be captured. It is likewise conceivable to increase the number of captured partial regions to such an extent that the entire vehicle environment 6 can be captured.
According to Figure 2, in a first step 13 of the method according to the invention image capture takes place. For this purpose, the first partial region 7 is captured by two cameras 2, 3 spaced apart from one another. The images obtained therefrom are stored on a storage medium (not illustrated) in the form of image data.
For the environment detection, the cameras 2, 3 of a vehicle 1 are usually equipped with wide-angle lenses, also known as fisheye lenses. As a result, the images of the partial region 7 obtained from the first step 13 are greatly distorted.
According to the invention, in a second step 14 of the method a correction of the distortions therefore takes place. Such a correction can be effected by methods of approximation which are known from the prior art. For example, camera models are generated, the parameters of which can be estimated using the calibrations of the cameras 2, 3. By applying the second step 14, corrected images of the partial region 7 are generated.
In a third step 15 of the method, virtual camera perspectives are defined. Virtual images based on the images obtained in the preceding step 14 are calculated and epipolar-corrected. The method step 15 will be explained with the aid of Figures 3 to 4b.
Since the front camera 2 and the side camera 3 which have captured the partial region 7 in the method step 13 are arranged spaced apart from one another on the vehicle 1, the captured images of the partial region 7 differ markedly from one another. This makes it more difficult, in the subsequent generation of a 3D environment, to recognise individual corresponding object points in both images of the cameras 2, 3. In order to simplify this, virtual camera perspectives are generated for each camera 2, 3. The camera perspectives are defined via the position of a virtual optical axis 16. Figure 3 shows the optical axes 5 already known from Figure 1 and the virtual optical axes 16 of the front camera 2 and the side camera 3.
The virtual optical axes 16 result from a virtual rotation of the optical axes 5 about the optical centre of the respective camera 2, 3. The virtual rotation takes place in such a way that the virtual optical axes 16 are arranged parallel to one another.
After definition of the virtual camera perspectives, a virtual image of the partial region 7 is calculated for each virtual camera perspective from the image data stored in step 13 and corrected in step 14.
The step 15 of the method according to the invention also comprises an epipolar correction of the virtual images. In this, the virtual images which have been calculated from the two different virtual camera perspectives are projected onto a common plane and identically aligned with respect to their size and orientation. After this epipolar correction, the individual pixel rows of the two virtual images are identically aligned, so that mutually corresponding object points lie on a horizontal line.
As a result of the step 15, epipolar-corrected virtual images exist, which show the partial region 7 from two different virtual camera perspectives. In a fourth step 17 of the method according to the invention, these virtual images are related to one another. This is effected with the aid of a correspondence analysis.
The principle of a correspondence analysis is known. In each case, two image points which show the same point of the partial region 7 from the two virtual camera perspectives, in the context of the present invention also referred to as corresponding object points, are related to one another in the virtual images. Figure 4a shows the result of a successfully performed correspondence analysis taking a house 18 as an example, which has been captured from two different perspectives. In the correspondence analysis, for example, the individual object points of an upper door frame 20 of the house 18 are related to one another by the horizontal line 19.
Since the cameras 2, 3 are arranged in the outer region of the vehicle 1, they are subjected to disturbing influences such as, for example, temperature fluctuations or vibrations. As a result, the images captured by means of the cameras 2, 3 may exhibit additional distortions which make a perfect epipolar correction impossible and thereby make the subsequent correspondence analysis markedly more difficult. Figure 4b shows, taking the house 18 as an example, two images which have not been completely epipolar-corrected.
In order to facilitate the correspondence analysis, use is made according to the invention of algorithms which have an extended search range for corresponding object points. With such algorithms, for example the optical flow algorithm, mutually corresponding object points need not lie on a horizontal line but may also be found with a vertical deviation from one another. This can be seen in Figure 4b, by way of example, from the object point of the left-hand lower house corner 21. The mutually corresponding object points of the left-hand lower corner 21 of the house 18 do not lie on a horizontal line but are vertically offset from one another. Nevertheless, the optical flow makes it possible to relate them to one another via the line 22.
As a result of the fourth step 17 of the method according to the invention, the relationships between the two virtual images are known. In a fifth step 23, three-dimensional images of the partial region 7 are now generated based on the results of the correspondence analysis and the generated virtual images. This takes place in a known manner.
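As one known way of carrying out this final step, the matched point pairs can be triangulated with the rectified projection matrices P1 and P2 as returned by cv2.stereoRectify during the epipolar correction (the function name is an assumption):

```python
import cv2

def correspondences_to_3d(P1, P2, pts1, pts2):
    """Triangulate corresponding image points (2xN float arrays) from the
    two virtual camera perspectives into Euclidean 3D coordinates."""
    X = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    return (X[:3] / X[3]).T                          # Nx3 Euclidean points
```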
The individual steps of the method which are described and the individual operations included in the respective method steps can be carried out in the order described. It is, however, also conceivable to change the order of the individual steps and/or of the individual operations.
Optionally, after the third method step 15, that is to say after the epipolar correction of the virtual images, a further method step 24 can be performed. In this method step, the virtual images obtained as a result of the third step 15 are vertically integrated. Particularly in the case of object detection, the vertical edges of the objects to be detected contain more information than the horizontal edges. By means of the vertical integration of the virtual images, the most important information, namely that of the vertically running edges, is retained and at the same time the amount of data arising is reduced.

Claims (10)

1. Method for three-dimensional imaging of a partial region (7) of a vehicle environment (6) with the aid of a camera system (12), comprising two cameras (2, 3) spaced apart from one another, which are arranged on a vehicle (1) and the respective camera perspective of which is defined by the position of an optical axis (5), comprising the following steps: capturing the partial region (7) using the two cameras (2, 3) spaced apart from one another and storing images which have been obtained by the capturing of the partial region (7), characterised by the following steps: defining a virtual camera perspective for each camera (2, 3) by a virtual optical axis (16) which differs in its position from the optical axis (5), calculating a virtual image of the partial region (7) for each virtual camera perspective from the stored images, performing a correspondence analysis to establish a relationship, in particular a spatial relationship, between the generated virtual images, calculating a three-dimensional image of the partial region (7) based on the correspondence analysis and the generated virtual images.
2. Method according to Claim 1, characterised in that the virtual optical axes (16) of the cameras (2, 3) are arranged parallel to one another in the virtual camera perspectives.
3. Method according to one of the preceding claims, characterised in that each camera (2, 3) is transferred from the camera perspective to the virtual camera perspective with the aid of a virtual rotation about the optical centre of the respective camera (2, 3).
4. Method according to one of the preceding claims, characterised in that distortions in the real captured images of the partial region (7) are corrected with the aid of methods of approximation, in particular by a virtual camera model.
5. Method according to one of the preceding claims, characterised in that the calculated virtual images are epipolar-corrected.
6. Method according to Claim 5, characterised in that the virtual images are vertically integrated after the epipolar correction.
7. Method according to one of the preceding claims, characterised in that the correspondence analysis is performed with the aid of algorithms which have an extended search range for the feature assignment.
8. Camera system, preferably for a vehicle, comprising two cameras (2, 3) spaced apart from one another, the respective position of which is defined by the position of an optical axis (5), and a storage device which is configured to store images of a partial region (7) of a vehicle environment (6), the images being able to be captured using the spaced-apart cameras (2, 3), characterised in that the camera system is configured to carry out the following steps: defining a virtual camera perspective for each camera (2, 3) by a virtual optical axis (16) which differs in its position from the optical axis (5), calculating a virtual image of the partial region (7) for each virtual camera perspective from the stored images, performing a correspondence analysis to establish a relationship, in particular a spatial relationship, between the generated virtual images, calculating a three-dimensional image of the partial region (7) based on the correspondence analysis and the generated virtual images.
9. Camera system according to Claim 8, comprising an interface which is connected to the vehicle, in particular to an optical display unit of the vehicle, and which is configured to output the images from the storage device.
10. Camera system according to Claim 9, characterised in that the images can be put together to form an at least partial panoramic representation of the vehicle environment (6).
GB1403384.9A 2013-02-28 2014-02-26 Method and apparatus for three-dimensional imaging of at least a partial region of a vehicle environment Active GB2513703B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102013203404.0A DE102013203404A1 (en) 2013-02-28 2013-02-28 Method and device for three-dimensional imaging of at least one subarea of a vehicle environment

Publications (3)

Publication Number Publication Date
GB201403384D0 (en) 2014-04-09
GB2513703A GB2513703A (en) 2014-11-05
GB2513703B (en) 2019-11-06

Family

ID=50482839

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1403384.9A Active GB2513703B (en) 2013-02-28 2014-02-26 Method and apparatus for three-dimensional imaging of at least a partial region of a vehicle environment

Country Status (3)

Country Link
DE (1) DE102013203404A1 (en)
FR (1) FR3002673B1 (en)
GB (1) GB2513703B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014213536A1 (en) * 2014-07-11 2016-01-14 Bayerische Motoren Werke Aktiengesellschaft Merging partial images into an image of an environment of a means of transportation
CN106127115B (en) * 2016-06-16 2020-01-31 哈尔滨工程大学 hybrid visual target positioning method based on panoramic vision and conventional vision
DE102016225066A1 (en) * 2016-12-15 2018-06-21 Conti Temic Microelectronic Gmbh All-round visibility system for one vehicle
CN107009962B (en) * 2017-02-23 2019-05-14 杭州电子科技大学 A kind of panorama observation method based on gesture recognition
DE102018100211A1 (en) * 2018-01-08 2019-07-11 Connaught Electronics Ltd. A method for generating a representation of an environment by moving a virtual camera towards an interior mirror of a vehicle; as well as camera setup

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2192552A1 (en) * 2008-11-28 2010-06-02 Fujitsu Limited Image processing apparatus, image processing method, and recording medium
EP2192553A1 (en) * 2008-11-28 2010-06-02 Agfa HealthCare N.V. Method and apparatus for determining a position in an image, in particular a medical image
US20110234801A1 (en) * 2010-03-25 2011-09-29 Fujitsu Ten Limited Image generation apparatus
WO2013016409A1 (en) * 2011-07-26 2013-01-31 Magna Electronics Inc. Vision system for vehicle
WO2013081287A1 (en) * 2011-11-30 2013-06-06 주식회사 이미지넥스트 Method and apparatus for creating 3d image of vehicle surroundings
WO2013086249A2 (en) * 2011-12-09 2013-06-13 Magna Electronics, Inc. Vehicle vision system with customized display
EP2620917A1 (en) * 2012-01-30 2013-07-31 Harman Becker Automotive Systems GmbH Viewing system and method for displaying an environment of a vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100446636B1 (en) 2002-11-21 2004-09-04 삼성전자주식회사 Apparatus and method for measuring ego motion of autonomous vehicles and 3D shape of object in front of autonomous vehicles
DE102008060684B4 (en) 2008-03-28 2019-05-23 Volkswagen Ag Method and device for automatic parking of a motor vehicle


Also Published As

Publication number Publication date
FR3002673B1 (en) 2018-01-19
DE102013203404A1 (en) 2014-08-28
GB201403384D0 (en) 2014-04-09
GB2513703A (en) 2014-11-05
FR3002673A1 (en) 2014-08-29
