WO2018222122A1 - Methods for perspective correction, computer program products and systems

Info

Publication number: WO2018222122A1
Application number: PCT/SE2018/050553
Authority: WO (WIPO, PCT)
Prior art keywords: camera, scene, driver, image, head
Other languages: French (fr)
Inventors: Alexander WORMBS, Michael BANO, Karl ÅSTRÖM
Original assignee: Uniti Sweden Ab
Application filed by Uniti Sweden Ab
Publication of WO2018222122A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/0101 Head-up displays characterised by optical features
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G02B2027/011 Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion
    • G02B2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G02B2027/014 Head-up displays characterised by optical features comprising information/image processing systems

Abstract

The disclosure relates to a method for providing perspective correction information to be used in a system displaying information on a head-up display, the method comprising: capturing a reference image (3) of a scene (2) using a camera (B) corresponding to a camera intended to depict a view in front of a vehicle; capturing a first image (4) of the scene (2) using a camera (A) positioned in a first position corresponding to a first view point of a driver; calculating, based on the reference image (3) and on the first image (4), a first homography matrix (M1) representing the perspective difference between the cameras; capturing a second image of the scene using a camera positioned in a second position corresponding to a second view point of a driver, the second position being different from the first position; calculating, based on the reference image (3) and on the second image, a second homography matrix (M2) representing the perspective difference between the cameras; and saving the first and second homography matrices (M1, M2) in a memory. The disclosure also relates to a method for perspective correction in a system displaying information on a head-up display, to computer program products and to systems.

Description

METHODS FOR PERSPECTIVE CORRECTION, COMPUTER PROGRAM PRODUCTS AND SYSTEMS
Field of invention
The invention relates to a method for providing perspective correction information to be used in a system displaying information on a head-up display. The invention is especially related to a method for providing perspective correction to be used in a system displaying information on a head-up display covering a large part of the windshield in an automotive vehicle, such as a car.
The invention also relates to a method for perspective correction in a system displaying information on a head-up display.
The invention also relates to computer program products.
The invention also relates to a system for providing information on a head-up display.
The invention also relates to a head-up display system.
Technical Background
When a Head-Up Display system (HUD) overlays information that matches the outside world through the windshield, the information may preferably be displayed at a position that depends on the viewpoint of the driver. In order to achieve appropriate information overlay, the information overlay may change as the viewpoint of the user changes. When the vehicle moves, this operation may be done in real-time.
Head-up displays in the past have been limited by how much of the windshield can be utilized. One of the problems with using a full-scale display is that information that needs to match the outside scene has to be placed at different positions depending on the driver's head position.
Since the driver's perspective changes all the time, this has to be continuously corrected for.
US 8,350,686 discloses a vehicle information display system equipped with an awareness information detecting unit that detects information on an awareness object of which a driver should be aware near a host vehicle, an eye position detecting unit that detects the position of the eyes of the driver, and a display unit that displays on the vehicle glass. The display unit displays the awareness information at the intersection where an axis interconnecting the detected position of the eyes of the driver and the awareness object intersects the vehicle glass, or in a neighbourhood of that intersection.
Summary of invention
It is an object of the invention to provide a novel and improved method for perspective correction in a system displaying information on a head-up display.
According to a first aspect, a method for providing perspective correction information to be used in a system displaying information on a head-up display is provided. The method comprises: capturing a reference image of a scene using a camera corresponding to a camera intended to depict a view in front of a vehicle, capturing a first image of the scene using a camera positioned in a first position corresponding to a first view point of a driver, calculating, based on the reference image and on the first image, a first homography matrix representing the perspective difference between the view points, capturing a second image of the scene using a camera positioned in a second position corresponding to a second view point of a driver, the second position being different from the first position, calculating, based on the reference image and on the second image, a second homography matrix representing the perspective difference between the view points, and saving the first and second homography matrices in a memory.
By comparing a first image captured by a camera A from a first position to a reference image, a first homography matrix may be provided. By comparing a second image captured by camera A from a second position to the reference image, a second homography matrix may be provided. The first and second homography matrices may pertain to the transformation of perspectives at the first and second position, respectively.
As the homography matrices may be stored in a memory, a plurality of homography matrices may be stored relating to different perspective transformations at different positions. For example, the positions may relate to the view of a user. The method may provide a homography matrix for a first position of the viewpoint of a user and a second homography matrix for a second position of the viewpoint of the user. By providing a homography matrix dependent on the position of the user's head and thereby the user's viewpoint, the perspective relation between camera B and the viewpoint of the user may be determined.
As a plurality of homography matrices are created based on different positions of camera A, the creation of the matrices may be performed in a controlled environment.
In an embodiment, the scene may comprise a set of a plurality of reference points.
In another embodiment, the reference points may be spatially distributed in the scene in at least two dimensions.
By having a plurality of reference points in the image scene, a homography matrix may be provided that pertains to the perspective relationship between cameras A and B.
In an embodiment, the set of reference points may comprise at least four points, preferably four points.
By having at least four reference points, an image plane may be used to calculate the homography matrix for the transformation for a given plane, for example an object.
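To make the four-point case concrete, the following is a minimal sketch (not part of the application) of computing such a homography from four corresponding reference points with OpenCV in Python; the coordinate values are illustrative and the library choice is an assumption.

```python
import numpy as np
import cv2

# Four reference points P1..P4 of the scene as seen in the reference image
# (camera B) and in the first image (camera A). Values are illustrative.
pts_ref = np.float32([[100, 80], [540, 80], [540, 400], [100, 400]])
pts_a = np.float32([[130, 95], [560, 70], [575, 420], [115, 390]])

# With exactly four correspondences the homography is fully determined;
# getPerspectiveTransform solves the resulting linear system. M1 maps points
# in the reference image (camera B) to the first image (camera A), i.e. from
# the front camera's perspective towards the driver's perspective.
M1 = cv2.getPerspectiveTransform(pts_ref, pts_a)
print(M1)  # 3x3 matrix
```

With more than four points, cv2.findHomography(pts_ref, pts_a) would give a least-squares estimate instead, which matches the later observation that additional points may improve the accuracy of the calibration.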
In another embodiment, the method may comprise varying the distance between the scene and the camera corresponding to a camera intended to depict a view in front of a vehicle.
A plurality of homography matrices may be provided by varying the distance between the camera and the scene.
In another embodiment, for each distance between the scene and the camera corresponding to a camera intended to depict a view in front of a vehicle, a plurality of images may be captured and a plurality of homography matrices may be calculated and stored for a plurality of positions corresponding to a plurality of view points of the driver.
A plurality of homography matrices may be provided that corresponds to different viewpoints of a user as well as different distances between the camera and the scene.
In an embodiment, the first position and the second position may differ in an x-direction and/or a y-direction, the x- and y-directions being in a plane essentially transverse to the distance from the camera corresponding to the view points of the driver and the scene.
A matrix with three degrees of freedom may be provided. The three degrees may be represented by the x-coordinate of the first position, the y-coordinate of the first position and the distance between the camera and the scene.
In an embodiment, the homography matrices may be stored using a hash map.
A hashmap may facilitate extraction of a given homography matrix.
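As a sketch of how such a hash map could be organised (the application prescribes neither key layout nor resolution; the grid step and names below are assumptions), the head position (x, y) and the scene distance z may be quantised into a tuple key:

```python
import numpy as np

GRID = 50.0  # assumed quantisation step (e.g. millimetres) for the map keys

# The stored set of homographies: a hash map from a quantised (x, y, z) key
# to a 3x3 homography matrix.
homographies: dict[tuple[int, int, int], np.ndarray] = {}

def make_key(x: float, y: float, z: float) -> tuple[int, int, int]:
    """Quantise a head position (x, y) and a scene distance z to a map key."""
    return (round(x / GRID), round(y / GRID), round(z / GRID))

def store(x: float, y: float, z: float, M: np.ndarray) -> None:
    homographies[make_key(x, y, z)] = M

def retrieve(x: float, y: float, z: float) -> np.ndarray:
    """The lookup f(x, y, z) of the description: fetch the matching matrix."""
    return homographies[make_key(x, y, z)]
```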
According to a second aspect, a method for perspective correction in a system displaying information on a head-up display is provided. The method comprises: providing a memory on which a set of homography matrices is stored, wherein each homography matrix is associated with a respective view point of a driver and a respective distance to an object, capturing an image of a scene in front of a vehicle using a first camera, determining a piece of information to be displayed on the head-up display based on identification of an object of interest in the image from the first camera, determining a distance to the object of interest, determining a view point of the driver, and calculating a perspective correction based on a homography matrix retrieved from the memory based on the determined distance to the object of interest and the determined view point of the driver.
Based on the distance to the object as well as the viewpoint of a driver, a homography matrix may be retrieved from the memory, the homography matrix relating to the perspective transformation between the first camera and the viewpoint. As the object is identified, relevant information may be displayed pertaining to the object.
In an embodiment, the homography matrices may be calculated using the method according to the first aspect. The above-mentioned features of the method, when applicable, apply to this second aspect as well. In order to avoid undue repetition, reference is made to the above.
By separating the processes of creating the homography matrices and retrieving said homography matrices, a more efficient perspective transformation may be achieved. The homography matrices may be created in a controlled environment, while the process of using the matrices for perspective transformation may be done when the vehicle is in use. The separation may ease the resource management of the computer as well as improve the precision of the calibration.
In an embodiment, the method may further comprise displaying a piece of information on the head-up display using the determined perspective correction. The information relating to the identified object may thereby be displayed on the HUD in a correct way, taking into account the different perspectives of the first camera and the user.
In an embodiment, the view point of the driver may be determined using a second camera directed towards the driver.
A camera may facilitate the determination of the coordinates of the user's viewpoint. A camera may also track a user's viewpoint over time.
According to a third aspect, a computer program product is provided. The computer program product comprises a computer-readable medium, preferably a non-transitory computer-readable medium, with computer-readable instructions such that when executed on a processing unit the computer program product will cause the processing unit to perform a method according to the first aspect.
According to a fourth aspect, a computer program product is provided. The computer program product comprises a computer-readable medium, preferably a non-transitory computer-readable medium, with computer-readable instructions such that when executed on a processing unit the computer program product will cause the processing unit to perform a method according to the second aspect.
According to a fifth aspect, a system for providing information on a head-up display is provided. The system comprises a memory on which a set of homography matrices is stored, wherein each homography matrix is associated with a respective view point of a driver and a respective distance to an object, and on which a computer program product according to the fourth aspect is stored, and a processing unit configured to execute the computer program product according to the fourth aspect.
According to a sixth aspect, a head-up display system is provided. The system comprises a system according to the fifth aspect, a first camera configured to capture images of a scene in front of a vehicle, an apparatus, preferably a second camera, configured to determine a view point of the driver, and a head-up display configured to provide information on a windshield of a vehicle.
A further scope of applicability of the present invention will become apparent from the detailed description given below. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the scope of the invention will become apparent to those skilled in the art from this detailed description.
Hence, it is to be understood that this invention is not limited to the particular component parts of the device described or steps of the methods described, as such device and method may vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. It must be noted that, as used in the specification and the appended claims, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements unless the context clearly dictates otherwise. Thus, for example, reference to "a unit" or "the unit" may include several devices, and the like. Furthermore, the words "comprising", "including", "containing" and similar wordings do not exclude other elements or steps.
Brief description of the drawings
The above and other aspects of the present invention will now be described in more detail, with reference to the appended drawings showing embodiments of the invention. The figures should not be considered as limiting the invention to the specific embodiments; instead, they are used for explaining and understanding the invention.
As illustrated in the figures, the sizes of layers and regions are exaggerated for illustrative purposes and, thus, are provided to illustrate the general structures of embodiments of the present invention. Like reference numerals refer to like elements throughout.
Fig. 1 illustrates schematically an offline phase where a set of homography matrices are calculated, according to an embodiment of the inventive concept.
Fig. 2 illustrates schematically an online phase where the previously calculated homography matrices are used to provide perspective correction in a system displaying information on a head-up display, according to an embodiment of the inventive concept.

Detailed description
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which currently preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for thoroughness and completeness, and to fully convey the scope of the invention to the skilled person. Preferred embodiments appear in the dependent claims and in the description.
The concept is based on a novel approach to solve the problem of providing perspective correction. A set of homography matrices may be calculated corresponding to a set of points that represent a specific position of the driver's head, in a controlled environment. Each homography matrix contains information that can be used to transform the perspective from the camera in front of the car to the driver's perspective. When the camera in front recognizes an object that needs to be highlighted on the head-up display (HUD), the camera may e.g. draw graphics around that object. Then, by tracking the head of the driver, the matching homography is continuously applied on the graphics so that it matches the outside world from the driver's perspective.
There are well-developed approaches for automatically calculating homographies, such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), but these are generally too slow for real-time purposes and would not be optimal for use in a critical environment.
Instead, the invention makes use of homographies being calculated in a controlled environment, saved, and applied later with the help of head-tracking.
The word "information" is in the present application meant to be construed in a broad sense. "Information" may contain instructions to perform an action. The word "information" may also pertain data that facilitates an action such as a matrix or coordinates. Furthermore, "information" may also relate to "display information" as when the conecept of HUD-information is presented.
The word "scene" is in the context of the present application meant to be understood as a part of a scenery. "Scene" may pertain to a point or plane in the space in front of a camera. For example, a scene could be a two dimensional image plane transversal to the pointing direction of the camera, at a given distance from said camera. In other words, an "image of a scene" and "a scene" may be used interchangeably.
For the purpose of brevity, the word "capturing" comprises both filming and taking a picture.
The inventive concept will now be described in detail.
The method may be divided into two parts, an "offline" phase and an "online" phase. The "offline" phase will be discussed in detail with reference to figure 1. The "online" phase will be discussed in detail with reference to figure 2.
As illustrated in figure 1 , during the offline phase, a camera A may be used to represent the view of the driver and a camera B may be used to represent the camera viewing the scenery in front of a vehicle. Camera A and camera B may be arranged to capture an image 1 in the scenery.
During the offline phase, camera B may capture a reference image 3 of a scene 2. The scene 2 may comprise four reference points P1, P2, P3, P4. The four reference points may constitute the corners of a two-dimensional plane in the reference image 3 captured by camera B. The scene 2 may have the shape of a rectangle. The reference image 3 may alternatively comprise any other number of reference points, such as three.
Camera B may measure a distance z between camera B and the scene 2 in the scenery. The distance z could be measured as the distance between camera B and a reference point in the scene 2. The scene 2 may be transversal to the viewing direction of camera B, causing the distance between the camera B and the different reference points to be essentially the same. If the scene 2 is not transversal to the viewing direction of camera B, an average distance z between camera B and the different reference points may be calculated.
The reference image 3 may thereby be represented by the distance value z.
It should also be noted that camera B may comprise a plurality of components, such as a camera capturing the image and a radar measuring the distance z between the camera B and the scene 2.
Camera B may be a camera mounted on the front of a vehicle. Camera B may for example be mounted on the front-end protector of the vehicle. Camera B may also be integrated into any other part of the exterior of the vehicle. An additional camera A may be positioned inside the vehicle. Camera A may represent the viewpoint of a user. Camera A may for example represent the head position of the user. A slight adjustment might be necessary to compensate for a difference between a user's head position and the same user's viewpoint. Camera A may capture a first image 4 of the scene 2. Camera A may further register its position in an x and y coordinate system while capturing the first image 4 of the scene 2.
Based on the relation between the scene 2 in the reference image 3 and the scene 2 in the first image 4, a first homography matrix M1 may be created.
There are well known techniques to create homography matrices, which are not explained in detail here.
The resulting homography matrix M1 comprises values describing the perspective change between the reference image 3 and the first image 4, when observing the scene 2.
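For reference, in the standard computer-vision formulation (the algebra is not spelled out in the application) the homography acts on homogeneous pixel coordinates and is defined only up to scale, leaving eight degrees of freedom, which is why four point correspondences suffice to determine it:

\[
\lambda \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} = M_1 \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}, \qquad
M_1 = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix},
\]

where (u, v) is a point of the scene 2 in the reference image 3, (u', v') is the corresponding point in the first image 4, and \lambda is an arbitrary non-zero scale factor.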
A function for retrieving the first homography matrix M1 may be formulated as f(x1, y1, z1).
It is preferable to have a camera A for capturing the first image 4, i.e. from the viewpoint of a user, and a camera B for capturing the reference image 3 of the scene 2. It should be noted, however, that the images could be captured by a single camera. For example, one could first capture the reference image 3 with a camera and then move the camera to capture the first image 4.
A second image of the scene 2 may be captured using camera A. Based on the second image and the reference image 3, a second homography matrix M2 may be provided. The function for retrieving the second homography matrix M2 may be expressed as f(x2, y2, z2). If the distance between camera B and the scene 2 is equal in the first and second image, the following holds true: z1 = z2.
The method described above may be performed to create a plurality of homography matrices corresponding to a plurality of user viewpoints as well as distances between the camera B and the scene 2. New reference images may be captured by varying the distance z.
The homography matrices M1 and M2 may be stored in a global array M. The global array M may comprise a plurality of homography matrices M1-n. The global array M may therefore comprise information about the perspective relations between the viewpoint of the user and the view of camera B at a plurality of viewpoints and distances. The global array M may be stored on a memory. The homography matrices M1-n may also be stored individually on a memory.
It should be noted that the homography matrices may be created in a different order. For example, camera B could capture images of the scene 2 at different distances before camera A captures images of the scene 2 at different positions. The skilled person could think of many other ways to perform the function, for example capturing images at a fixed x-coordinate of the viewpoint at a given distance, varying only the y-coordinate of the viewpoint, and vice versa.
Camera A and camera B may be any type of camera, such as an optical camera, an infrared camera, or a camera capturing light of any other wavelength. For determining positions of viewpoint and reference points, other types of measurement instruments may be used such as radar, LiDAR etc.
The method may be performed continuously by tracking the reference points in the reference image.
Camera A and camera B may identify the scene 2 in various ways. For example, the cameras may identify an object by colour segmentation and/or contour approximation. For example, if a contour can be approximated using four straight lines, a rectangle may be found and the four corners can be extracted as points. The four corners P1, P2, P3, P4 may be identified as reference points. The four points from two different views are used to calculate a homography matrix that represents the perspective difference between the cameras.
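A minimal sketch of this identification step, assuming a high-contrast rectangular marker, with Otsu thresholding standing in for the colour segmentation mentioned above (the thresholds, the helper name and the OpenCV choice are illustrative assumptions):

```python
import numpy as np
import cv2

def find_rectangle_corners(image_bgr: np.ndarray) -> np.ndarray | None:
    """Return the four corners P1..P4 of a rectangular marker, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Check the largest contours first; if a contour can be approximated by
    # four straight lines, treat it as the rectangle and return its corners.
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            return approx.reshape(4, 2).astype(np.float32)
    return None
```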
Four points may be preferable, as they give information about the perspective difference in all translations between the cameras and allow the amount of data to be kept at a minimum.
More than four points may be used to improve accuracy of the calibration.
The homographies may be retrieved from the global array M using a hashmap 5.
In a preferred embodiment, camera A moves around in a 2D plane that resembles the area where the driver might move his/her head, while homographies are created and saved for every new coordinate. Each measured camera coordinate corresponds to a homography, which is saved in memory, for example in a hash map. This procedure may be performed for different levels of depth where an object may occur, so the depth of the scene 2 is registered and saved as well. The global array M maps a coordinate (x, y, z) to a homography matrix M1-n, where x and y represent the coordinates of camera A and z represents the depth of the scene 2.
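A sketch of this offline sweep, reusing find_rectangle_corners and store from the earlier sketches; the capture helpers stand in for the positioning rig, camera B and the depth measurement, and are hypothetical:

```python
import cv2

# Hypothetical hardware helpers: capture_reference(z) returns a camera-B frame
# with the scene 2 at distance z; capture_from_head_position(x, y) returns a
# camera-A frame taken at head coordinate (x, y).
def capture_reference(z): ...
def capture_from_head_position(x, y): ...

DEPTHS = [5.0, 10.0, 20.0, 40.0]  # levels of depth z, illustrative values
HEAD_POSITIONS = [(x, y) for x in range(-200, 250, 50)
                  for y in range(-100, 150, 50)]  # 2D head-movement grid

for z in DEPTHS:
    pts_ref = find_rectangle_corners(capture_reference(z))
    for x, y in HEAD_POSITIONS:
        pts_a = find_rectangle_corners(capture_from_head_position(x, y))
        # Homography for this head position and depth, saved under (x, y, z).
        M = cv2.getPerspectiveTransform(pts_ref, pts_a)
        store(x, y, z, M)
```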
As illustrated in Fig. 2, the inventive concept comprises an online phase. The online phase occurs when a driver operates the car. Information may be presented on the windshield 6 using a HUD system. The position of the head of the driver may be determined by a camera 11. The camera 11 may be the same type of camera as the previously described cameras A and B.
While the car is being driven, camera 10 positioned on the front part of the vehicle may identify an object 9. It may be noted that camera B and camera 10 correspond to each other, but camera B is provided in a calibration set-up and camera 10 is provided on the actual vehicle. The identified object 9 may for example be a human. The object 9 may be of any type, such as another car, a sign, an obstacle etc. To warn the driver, the identified object 9 may need to be highlighted. The distance between camera 10 and the object 9 may be measured using for example stereoscopic vision or an infra-red projector.
The head of the driver may be tracked in an x and y coordinate system.
Substantially at the same point in time, the distance z may be measured between camera 10 and the object 9.
The system looks up the coordinates (x, y, z) in the memory, such as the hash map, and retrieves a homography matrix M. The homography is applied to the graphics that highlight the object through the windshield, which are then shown perspective-corrected for the driver.
In an embodiment, coordinates x1, y1, z1 may be provided using the method described above. The three coordinates may be used to retrieve a homography matrix M1 from an array of matrices M. The homography matrix M1 may be retrieved from the memory by using, for example, a hashmap 5. The homography matrix M1 may relate to the perspective transformation between the camera B and the viewpoint of the driver.
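Put together, the online application could be sketched as follows, reusing retrieve from the earlier hash-map sketch (the function names are illustrative, not taken from the application):

```python
import numpy as np
import cv2

def render_hud_overlay(graphics: np.ndarray, head_x: float, head_y: float,
                       object_z: float) -> np.ndarray:
    """Warp overlay graphics from the front camera's perspective to the driver's.

    graphics: image layer with the highlight drawn around the object 9, in the
    coordinates of camera 10; (head_x, head_y): tracked head position;
    object_z: measured distance z to the object.
    """
    M1 = retrieve(head_x, head_y, object_z)
    h, w = graphics.shape[:2]
    # Apply the retrieved perspective transformation to the whole layer before
    # it is projected onto the windshield 6 by the HUD.
    return cv2.warpPerspective(graphics, M1, (w, h))
```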
The homography may be applied to the perspective captured by camera 10. The information may then be applied to the graphics shown on the windshield 6. The information is then overlaid on the graphics in the view of the driver 8 in a perspective-corrected manner. In the present application the user has been described as a driver and the vehicle has been described as a car. These are merely non-limiting examples; other examples are operators and other types of vehicles such as boats, airplanes and trucks.
The offline phase described above may be performed in a controlled environment. The controlled environment may be a test lab or any other off-the-road site.
The person skilled in the art realizes that the present invention by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims.
For example, the scene 2 may comprise a plurality of reference points that constitute a circle.
Additionally, variations to the disclosed embodiments can be understood and effected by the skilled person in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.

Claims

1. A method for providing perspective correction information to be used in a system displaying information on a head-up display, the method comprising:
capturing a reference image (3) of a scene (2, P1, P2, P3, P4) using a camera (B) corresponding to a camera intended to depict a view in front of a vehicle,
capturing a first image (4) of the scene (2, P1, P2, P3, P4) using a camera (A) positioned in a first position corresponding to a first view point of a driver,
calculating, based on the reference image (3) and on the first image (4), a first homography matrix (M1) representing the perspective difference between the view points,
capturing a second image of the scene (P1, P2, P3, P4) using a camera (A) positioned in a second position corresponding to a second view point of a driver, the second position being different from the first position, calculating, based on the reference image (3) and on the second image, a second homography matrix (M2) representing the perspective difference between the view points, and
saving the first and second homography matrices in a memory.
2. Method according to claim 1, wherein the scene comprises a set of a plurality of reference points.
3. Method according to claim 2, wherein the reference points are spatially distributed in the scene (2) in at least two dimensions.
4. Method according to claim 2 or 3, wherein the set of reference points comprises at least four points, preferably four points.
5. Method according to any one of claims 1-4, wherein the method comprises varying the distance (z) between the scene (2) and the camera (B) corresponding to a camera intended to depict a view in front of a vehicle.
6. Method according to claim 5, wherein for each distance z between the scene (2) and the camera (B) corresponding to a camera intended to depict a view in front of a vehicle, a plurality of images are captured and homography matrices are calculated and stored for a plurality of positions corresponding to a plurality of view points of the driver.
7. Method according to any one of claims 1-6, wherein the first position and the second position differ in an x-direction and/or a y-direction, the x- and y-directions being in a plane essentially transverse to the distance from the camera (A) corresponding to the view points of the driver and the scene.
8. Method according to any one of claims 1-7, wherein the homography matrices M1-n are stored using a hashmap (5).
9. Method for perspective correction in a system displaying information on a head-up display, the method comprising:
providing a memory on which a set of homography matrices produced with the method as defined in any one of claims 1-8 is stored, wherein each homography matrix (Mn) is associated with a respective view point (x, y) of a driver and a respective distance (z) to an object,
capturing an image (1) of a scene (2) in front of a vehicle using a first camera (10),
determining a piece of information to be displayed on the head-up display based on identification of an object of interest in the image from the first camera,
determining a distance (z) to the object of interest (9),
determining a view point of the driver,
calculating a perspective correction based on one or more homography matrices retrieved from the memory based on the determined distance to the object of interest and the determined view point of the driver.
10. Method according to claim 9, wherein the method further comprises displaying a piece of information on the head-up display using the determined perspective correction.
11. Method according to any one of claims 9-10, wherein the view point of the driver is determined using a second camera (11) directed towards the driver.
12. Computer program product comprising a computer-readable medium, preferably a non-transitory computer-readable medium, with computer-readable instructions such that when executed on a processing unit the computer program product will cause the processing unit to perform a method according to any one of claims 1-8.
13. Computer program product comprising a computer-readable medium, preferably a non-transitory computer-readable medium, with computer-readable instructions such that when executed on a processing unit the computer program product will cause the processing unit to perform a method according to any one of claims 9-11.
14. A system for providing information on a head-up display, the system comprising
a memory on which
a set of homography matrices is stored, wherein each homography matrix is associated with a respective view point of a driver and a respective distance to an object, and
a computer program product according to claim 13 is stored, the system further comprising:
a processing unit configured to execute the computer program product according to claim 13.
15. A head-up display system comprising
a system according to claim 14,
a first camera (10) configured to capture images of a scene in front of a vehicle,
an apparatus, preferably a second camera (11), configured to determine a view point of the driver,
a head up display configured to provide information on a windshield (6) of a vehicle.
PCT/SE2018/050553 2017-05-31 2018-05-31 Methods for perspective correction, computer program products and systems WO2018222122A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1750685 2017-05-31
SE1750685-8 2017-05-31

Publications (1)

Publication Number Publication Date
WO2018222122A1 2018-12-06

Family

ID=64455450

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2018/050553 WO2018222122A1 (en) 2017-05-31 2018-05-31 Methods for perspective correction, computer program products and systems

Country Status (1)

Country Link
WO (1) WO2018222122A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110216201A1 (en) * 2008-10-01 2011-09-08 Hi-Key Limited method and a system for calibrating an image capture device
US20100157430A1 (en) * 2008-12-22 2010-06-24 Kabushiki Kaisha Toshiba Automotive display system and display method
US20160167514A1 (en) * 2014-12-10 2016-06-16 Yoshiaki Nishizaki Information provision device, information provision method, and recording medium storing information provision program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHANGRAK YOON ET AL.: "Augmented Reality Information Registration for Head-Up Display", 2015 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC, 28 October 2015 (2015-10-28), pages 1135 - 1137, XP032830104 *
CHANGRAK YOON ET AL.: "Development of augmented forward collision warning system for head-up display", 17TH INTERNATIONAL IEEE CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC, 8 October 2014 (2014-10-08), pages 2277 - 2279, XP032685507 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993798A (en) * 2019-04-09 2019-07-09 上海肇观电子科技有限公司 Method, equipment and the storage medium of multi-cam detection motion profile
CN109993798B (en) * 2019-04-09 2021-05-28 上海肇观电子科技有限公司 Method and equipment for detecting motion trail by multiple cameras and storage medium
CN110738696A (en) * 2019-08-27 2020-01-31 中国科学院大学 Driving blind area perspective video generation method and driving blind area view perspective system
CN110738696B (en) * 2019-08-27 2022-09-09 中国科学院大学 Driving blind area perspective video generation method and driving blind area view perspective system
CN112595257A (en) * 2020-11-19 2021-04-02 江苏泽景汽车电子股份有限公司 Windshield glass surface type detection method for HUD display
CN112595257B (en) * 2020-11-19 2022-08-05 江苏泽景汽车电子股份有限公司 Windshield glass surface type detection method for HUD display
CN112485262A (en) * 2020-12-22 2021-03-12 常州信息职业技术学院 Method and device for detecting apparent crack width and expansion evolution of concrete
CN112485262B (en) * 2020-12-22 2023-08-11 常州信息职业技术学院 Method and device for detecting apparent crack width and expansion evolution of concrete
CN115345923A (en) * 2022-10-19 2022-11-15 佛山科学技术学院 Virtual scene three-dimensional reconstruction method for brain function rehabilitation training


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18810032; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 18810032; Country of ref document: EP; Kind code of ref document: A1)