WO2022088103A1 - 一种图像标定方法及装置 - Google Patents

一种图像标定方法及装置 Download PDF

Info

Publication number
WO2022088103A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinate system
virtual image
vehicle body
coordinates
Prior art date
Application number
PCT/CN2020/125535
Other languages
English (en)
French (fr)
Inventor
张宇腾
周鹏程
于海
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2020/125535 priority Critical patent/WO2022088103A1/zh
Priority to CN202080004865.9A priority patent/CN112655024B/zh
Publication of WO2022088103A1 publication Critical patent/WO2022088103A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image calibration method and device.
  • a head up display (HUD) can project important driving information, such as vehicle speed and navigation, onto the windshield in front of the driver, forming an image in front of the glass, so that the driver can see this driving information without looking down or turning his head, thereby improving driving safety.
  • an augmented reality head up display (AR-HUD) combines the HUD virtual image with real road information, thereby enhancing the driver's perception of the actual driving environment.
  • the imaging principle of the AR-HUD is that light is emitted by a light source, refracted, and finally projected onto the windshield to form a HUD virtual image.
  • to achieve its effect, the augmented reality head-up display needs the human eye, the HUD virtual image and the road point to lie on one line (three points, one line; see Figure 1a). Therefore, the position of the HUD virtual image needs to be precisely controlled according to the position of the road surface and the human eye. In order to precisely control the position of the HUD virtual image, it is necessary to accurately obtain the actual position information of the HUD virtual image.
  • Fig. 1b shows the method of measuring the position of the HUD virtual image in the prior art.
  • zoom measurement is adopted: the HUD virtual image formed in the camera is observed while the focal length of the camera lens is changed, and the focal position with the clearest imaging is taken as the position of the HUD virtual image.
  • the image of the HUD virtual image on the focal plane 3 is the clearest.
  • the focal plane 3 is where the HUD virtual image is located, and the distance from the focal plane 3 to the camera is the virtual image distance.
  • however, since the camera has a certain depth of field range, the captured HUD virtual image is clear anywhere within that depth of field. Therefore, determining the position of the HUD virtual image according to the focal plane with the clearest image results in a large error in the determined position of the HUD virtual image.
  • the present application provides an image calibration method and device, which are used to improve the accuracy of the determined imaging parameters of the HUD virtual image as much as possible.
  • the present application provides an image calibration method, the method includes acquiring a first image of a target and a third image of a HUD virtual image displayed by a HUD, and acquiring a second image of the target and a fourth image of the HUD virtual image;
  • the first image is obtained by shooting the target with the shooting device at a first position, and the third image is obtained by shooting the HUD virtual image with the shooting device at the first position;
  • the second image is obtained by shooting the target with the shooting device at a second position, and the fourth image is obtained by shooting the HUD virtual image with the shooting device at the second position;
  • a first extrinsic parameter matrix of the shooting device at the first position can be determined according to the first image and the coordinates of the target in the vehicle body coordinate system, and a second extrinsic parameter matrix of the shooting device at the second position can be determined according to the second image and the coordinates of the target in the vehicle body coordinate system; the coordinates of the HUD virtual image in the vehicle body coordinate system can then be determined according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix and the fourth image; further, the imaging parameters of the HUD virtual image can be determined according to the coordinates of the HUD virtual image in the vehicle body coordinate system.
  • in this way, the coordinates of the HUD virtual image in the vehicle body coordinate system can be determined, and the imaging parameters of the HUD virtual image can be further determined according to those coordinates.
  • the image calibration method can simply, quickly and accurately determine the imaging parameters of the HUD virtual image.
  • the first position is different from the second position.
  • the vehicle body coordinate system may take the front wheel of the vehicle as the origin, and the forward direction or the backward direction of the vehicle as the X axis.
  • the first extrinsic parameter matrix and the second extrinsic parameter matrix may be determined in the following manner: determine the first pixel coordinates of each target point on the first image, and determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and a third coordinate conversion relationship; the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  • determine the second pixel coordinates of each target point on the second image, and determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and a fourth coordinate conversion relationship; the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  • the first pixel coordinates of each target point are the coordinates of each target point on the first image in the image coordinate system.
  • the second pixel coordinates of each target point are the coordinates of each target point in the image coordinate system on the second image.
  • the HUD virtual image includes n reference points, where n is an integer greater than 1; the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image can be determined respectively.
  • the third pixel coordinates of the n reference points are the coordinates of the n reference points on the third image in the image coordinate system.
  • the fourth pixel coordinates of the n reference points are the coordinates of the n reference points on the fourth image in the image coordinate system.
  • the image coordinate system may take the upper left corner or the lower left corner of the image as the origin.
  • the imaging parameters of the HUD virtual image include, but are not limited to, any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate or rotational deformation.
  • in this way, the coordinates of each of the n reference points in the vehicle body coordinate system can be determined accurately and quickly, and the imaging parameters of the HUD virtual image can be determined based on the coordinates of the n reference points in the vehicle body coordinate system, thereby helping to improve the accuracy and efficiency of HUD virtual image calibration.
  • the ways of determining the imaging parameters of the HUD virtual image are introduced as follows.
  • Imaging parameter 1: virtual image distance.
  • the average value of the x-coordinates, in the vehicle body coordinate system, of at least two of the n reference points on the HUD virtual image may be determined, where the x-axis points in the forward or backward direction of the vehicle; the x-coordinate of the center of the eye box in the vehicle body coordinate system is determined; and the absolute value of the difference between the average value and the x-coordinate of the center of the eye box in the vehicle body coordinate system is determined as the virtual image distance.
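  • as an illustration, a minimal sketch of this computation (hypothetical names; it assumes the reference points and the eye-box center are already expressed in the vehicle body frame):

```python
import numpy as np

def virtual_image_distance(ref_points_body, eyebox_center_body):
    """Virtual image distance (VID) as described above.

    ref_points_body: (n, 3) array of reference-point coordinates in the
        vehicle body frame, x along the vehicle's forward/backward axis.
    eyebox_center_body: (3,) coordinates of the eye-box center, same frame.
    """
    mean_x = np.mean(ref_points_body[:, 0])     # average x of reference points
    return abs(mean_x - eyebox_center_body[0])  # |average x - eye-box center x|
```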
  • Imaging parameter 2: field of view.
  • the field of view includes a horizontal field of view and a vertical field of view.
  • the length of the HUD virtual image in the horizontal direction can be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; the horizontal field of view is then determined according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.
  • the length of the HUD virtual image in the vertical direction may be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; the vertical field of view is then determined according to the length of the HUD virtual image in the vertical direction and the virtual image distance.
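  • the text does not spell out the exact formula relating length and distance to an angle; a common formulation, given here as an assumption, is:

```python
import numpy as np

def field_of_view_deg(image_length, vid):
    # FOV subtended by a segment of length image_length seen from distance vid
    # (illustrative; the text only says the FOV is determined from the two).
    return np.degrees(2.0 * np.arctan(image_length / (2.0 * vid)))
```

  • for the horizontal field of view, image_length would be the body-frame distance between two reference points on the same horizontal line; the vertical field of view is analogous.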
  • Imaging parameter 3: center position. The coordinates, in the vehicle body coordinate system, of the center reference point among the n reference points on the HUD virtual image may be determined as the center position of the HUD virtual image.
  • Imaging parameter 4: distortion rate.
  • a distortion rate of a first reference point may be determined, where the first reference point is at least one of the n reference points on the HUD virtual image; the distortion rate of the HUD virtual image is then determined according to the distortion rate of the first reference point.
  • specifically, the actual distance between the first reference point and the central reference point can be determined; the predicted distance of the first reference point is then determined according to the central reference point and at least 4 reference points around the central reference point; therefore, the distortion rate of the first reference point can be determined according to the actual distance and the predicted distance.
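  • a minimal sketch, assuming the conventional optical-distortion definition (the text only states that the rate follows from the actual and predicted distances):

```python
def distortion_rate(actual_dist, predicted_dist):
    # actual_dist: measured body-frame distance |P_first - P_center|;
    # predicted_dist: distance predicted from the central reference point and
    # at least four surrounding reference points (e.g., an ideal-grid fit).
    return abs(actual_dist - predicted_dist) / predicted_dist
```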
  • Imaging parameter 5: rotational deformation.
  • the z-coordinate of a second reference point in the vehicle body coordinate system, the y-coordinate of the second reference point in the vehicle body coordinate system, the z-coordinate of a third reference point in the vehicle body coordinate system and the y-coordinate of the third reference point in the vehicle body coordinate system are determined, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; the rotational deformation is then determined according to these four coordinates.
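  • one illustrative way to read this (an assumption; the text does not give the formula) is as the roll angle of a line that should be horizontal:

```python
import numpy as np

def rotational_deformation_deg(p2_body, p3_body):
    # p2_body, p3_body: (x, y, z) body-frame coordinates of two reference
    # points lying in the same horizontal direction on the HUD virtual image.
    dz = p2_body[2] - p3_body[2]  # height difference
    dy = p2_body[1] - p3_body[1]  # lateral separation
    return np.degrees(np.arctan2(dz, dy))
```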
  • the present application provides an image calibration device, which can be used to implement the above first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, which are respectively used to implement the steps in the above method.
  • the functions can be implemented by hardware, or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the image calibration device may include a transceiver module and a processing module. The transceiver module is used to acquire a first image of the target and a third image of the HUD virtual image displayed by the head-up display HUD, where the first image is obtained by photographing the target by the photographing device at the first position and the third image is obtained by photographing the HUD virtual image by the photographing device at the first position; and to acquire a second image of the target and a fourth image of the HUD virtual image, where the second image is obtained by photographing the target by the photographing device at the second position and the fourth image is obtained by photographing the HUD virtual image by the photographing device at the second position. The processing module is used to: determine the first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in the vehicle body coordinate system; determine the second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determine the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix and the fourth image; and determine the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.
  • the HUD virtual image includes n reference points, where n is an integer greater than 1; the processing module is specifically configured to: respectively determine the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image; determine the first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; determine the second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and determine, according to the first coordinate conversion relationship and the second coordinate conversion relationship, the coordinates of the n reference points on the HUD virtual image in the vehicle body coordinate system.
  • the imaging parameters of the HUD virtual image include, but are not limited to, any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate or rotational deformation.
  • the imaging parameter includes a virtual image distance
  • the processing module is specifically configured to: determine the average value of the x-coordinates, in the vehicle body coordinate system, of at least two of the n reference points on the HUD virtual image, where the x-axis points in the forward or backward direction of the vehicle; determine the x-coordinate of the center of the eye box in the vehicle body coordinate system; and determine the absolute value of the difference between the average value and the x-coordinate of the center of the eye box in the vehicle body coordinate system as the virtual image distance.
  • the imaging parameter further includes a horizontal field of view
  • the processing module is specifically configured to: determine, according to the coordinates in the vehicle body coordinate system of at least two reference points located in the same horizontal direction among the n reference points, The length of the HUD virtual image in the horizontal direction; according to the length of the HUD virtual image in the horizontal direction and the virtual image distance, the horizontal field of view is determined.
  • the imaging parameter further includes a vertical field of view
  • the processing module is specifically configured to: determine, according to the coordinates in the vehicle body coordinate system of at least two reference points located in the same vertical direction among the n reference points, The length of the HUD virtual image in the vertical direction; according to the length of the HUD virtual image in the vertical direction and the virtual image distance, the vertical field of view is determined.
  • the imaging parameters include a center position; the processing module is specifically configured to: determine the coordinates, in the vehicle body coordinate system, of the center reference point among the n reference points on the HUD virtual image as the center position of the HUD virtual image.
  • the imaging parameter includes a distortion rate
  • the processing module is specifically configured to: determine the distortion rate of a first reference point, where the first reference point is at least one of the n reference points on the HUD virtual image; and determine the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.
  • n is an integer greater than 5; the processing module is specifically configured to: determine the actual distance between the first reference point and the central reference point; determine the predicted distance of the first reference point according to the central reference point and at least four reference points around the central reference point; and determine the distortion rate of the first reference point according to the actual distance and the predicted distance.
  • the imaging parameters include rotational deformation
  • the processing module is specifically configured to: determine the z-coordinate of the second reference point in the vehicle body coordinate system, the y-coordinate of the second reference point in the vehicle body coordinate system, the z-coordinate of the third reference point in the vehicle body coordinate system and the y-coordinate of the third reference point in the vehicle body coordinate system, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and determine the rotational deformation according to the z-coordinate of the second reference point in the vehicle body coordinate system, the y-coordinate of the second reference point in the vehicle body coordinate system, the z-coordinate of the third reference point in the vehicle body coordinate system and the y-coordinate of the third reference point in the vehicle body coordinate system.
  • the processing module is specifically configured to: determine the first pixel coordinates of each target point on the first image; determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and the third coordinate conversion relationship, where the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system; determine the second pixel coordinates of each target point on the second image; and determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and the fourth coordinate conversion relationship, where the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  • the present application provides an image calibration device.
  • the image calibration device is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, which are respectively used to implement the steps in the above method.
  • the functions can be implemented by hardware, or by executing corresponding software by hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the image calibration apparatus may include: a transceiver and a processor.
  • the processor may be configured to support the image calibration apparatus in performing the corresponding functions shown above, and the transceiver supports communication between the image calibration apparatus and other devices.
  • the transceiver may be an independent receiver, an independent transmitter, a transceiver with integrated transceiver functions, or an interface circuit.
  • the image calibration device may further include a memory, which may be coupled to the processor, and stores necessary program instructions and data for the image calibration device.
  • the transceiver is used to: obtain a first image of the target and a third image of the HUD virtual image displayed by the head-up display HUD, where the first image is obtained by photographing the target by the photographing device at the first position and the third image is obtained by photographing the HUD virtual image by the photographing device at the first position; and obtain a second image of the target and a fourth image of the HUD virtual image, where the second image is obtained by photographing the target by the photographing device at the second position and the fourth image is obtained by photographing the HUD virtual image by the photographing device at the second position. The processor is used to: determine the first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in the vehicle body coordinate system; determine the second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determine the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix and the fourth image; and determine the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.
  • the HUD virtual image includes n reference points, where n is an integer greater than 1; the processor is specifically configured to: respectively determine the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image; determine the first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; determine the second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and determine, according to the first coordinate conversion relationship and the second coordinate conversion relationship, the coordinates of the n reference points on the HUD virtual image in the vehicle body coordinate system.
  • the imaging parameters of the HUD virtual image include, but are not limited to, any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate or rotational deformation.
  • the imaging parameter includes a virtual image distance
  • the processor is specifically configured to: determine the average value of the x-coordinates, in the vehicle body coordinate system, of at least two of the n reference points on the HUD virtual image, where the x-axis points in the forward or backward direction of the vehicle; determine the x-coordinate of the center of the eye box in the vehicle body coordinate system; and determine the absolute value of the difference between the average value and the x-coordinate of the center of the eye box in the vehicle body coordinate system as the virtual image distance.
  • the imaging parameter further includes a horizontal field of view;
  • the processor is specifically configured to: determine, according to the coordinates in the vehicle body coordinate system of at least two reference points located in the same horizontal direction among the n reference points, The length of the HUD virtual image in the horizontal direction; according to the length of the HUD virtual image in the horizontal direction and the virtual image distance, the horizontal field of view is determined.
  • the imaging parameter further includes a vertical field of view;
  • the processor is specifically configured to: determine, according to coordinates in the vehicle body coordinate system of at least two reference points located in the same vertical direction among the n reference points, The length of the HUD virtual image in the vertical direction; according to the length of the HUD virtual image in the vertical direction and the virtual image distance, the vertical field of view is determined.
  • the imaging parameters include a center position; the processor is specifically configured to: determine the coordinates, in the vehicle body coordinate system, of the center reference point among the n reference points on the HUD virtual image as the center position of the HUD virtual image.
  • the imaging parameter includes a distortion rate
  • the processor is specifically configured to: determine a distortion rate of a first reference point, where the first reference point is at least one of the n reference points on the HUD virtual image; and determine the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.
  • n is an integer greater than 5; the processor is specifically configured to: determine the actual distance between the first reference point and the central reference point; determine the predicted distance of the first reference point according to the central reference point and at least four reference points around the central reference point; and determine the distortion rate of the first reference point according to the actual distance and the predicted distance.
  • the imaging parameters include rotational deformation
  • the processor is specifically configured to: determine the z-coordinate of the second reference point in the vehicle body coordinate system, the y-coordinate of the second reference point in the vehicle body coordinate system, the z-coordinate of the third reference point in the vehicle body coordinate system and the y-coordinate of the third reference point in the vehicle body coordinate system, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and determine the rotational deformation according to the z-coordinate of the second reference point in the vehicle body coordinate system, the y-coordinate of the second reference point in the vehicle body coordinate system, the z-coordinate of the third reference point in the vehicle body coordinate system and the y-coordinate of the third reference point in the vehicle body coordinate system.
  • the processor is specifically configured to: determine the first pixel coordinates of each target point on the first image; determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and the third coordinate conversion relationship, where the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system; determine the second pixel coordinates of each target point on the second image; and determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system and the fourth coordinate conversion relationship, where the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  • the present application provides an image calibration system, which includes a vehicle, a photographing device and an image calibration device.
  • the image calibration device can be used to execute the method in the above first aspect or any one of its possible implementations, and the photographing device can be used to photograph the above first image, second image, third image and fourth image.
  • the present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed by the image calibration device, the image calibration device is made to perform the method in the above first aspect or any possible implementation of the first aspect.
  • the present application provides a computer program product, which includes a computer program or instructions; when the computer program or instructions are executed by an image calibration device, the method in the above first aspect or any possible implementation of the first aspect is realized.
  • FIG. 1a is a schematic diagram, provided by this application, of the human eye, the HUD virtual image and the road surface aligned on one line (three points, one line);
  • FIG. 1b is a schematic diagram of a method of measuring the position of a HUD virtual image in the prior art;
  • FIG. 2 is a schematic diagram of the relationship between a pixel coordinate system and an image coordinate system provided by the application;
  • FIG. 3a is a schematic diagram of the architecture of a system provided by the application;
  • FIG. 3b is a schematic diagram of the architecture of another system provided by the application;
  • FIG. 3c is a schematic diagram of the architecture of another system provided by the application;
  • FIG. 3d is a schematic diagram of an application scenario provided by this application.
  • FIG. 4 is a schematic flowchart of a method of an image calibration method provided by the present application.
  • FIG. 5 is a schematic diagram of a target provided by the application.
  • FIG. 6 is a schematic diagram of a HUD virtual image provided by this application;
  • FIG. 7 is a schematic diagram of imaging parameters of a HUD virtual image provided by the application.
  • FIG. 8 is a schematic diagram of the principle of ghosting generation provided by this application;
  • FIG. 9 is a schematic structural diagram of an image calibration device provided by the application.
  • FIG. 10 is a schematic structural diagram of an image calibration device provided by the present application.
  • the current measurement of the position of the HUD virtual image adopts the zoom measurement method: the focal position at which the HUD virtual image is clearest is determined as the position where the HUD virtual image is located, and the distance from that position to the camera is the virtual image distance.
  • however, since the camera has a certain depth of field range, the captured HUD virtual image is clear anywhere within that depth of field. Therefore, determining the position of the HUD virtual image according to the focal plane with the clearest image results in a large error in the determined position of the HUD virtual image.
  • the present application provides an image calibration method, and the image calibration method in the present application can accurately and quickly calibrate the HUD virtual image.
  • the image calibration method provided by the present application will be described in detail below with reference to the accompanying drawings.
  • the world coordinate system is introduced to describe the position of an object in the real world; it is the absolute coordinate system of the objective three-dimensional world. Because the camera is placed in three-dimensional space, the world coordinate system is needed as a reference coordinate system to describe the position of the camera, and it is also used to describe the position of any other object placed in this three-dimensional space.
  • (Xw, Yw, Zw) represents the coordinate value of an object in the world coordinate system.
  • the camera coordinate system, also known as the optical center coordinate system, is a coordinate system established on the camera. It is defined to describe objects from the camera's point of view and serves as an intermediate link between the world coordinate system and the image coordinate system (or pixel coordinate system).
  • its unit is meters. The optical center of the camera lens is the coordinate origin, the X-axis and Y-axis are parallel to the x-axis and y-axis of the image coordinate system respectively, the optical axis of the camera is the Z-axis, and coordinate values are represented by (Xc, Yc, Zc).
  • the image coordinate system is a two-dimensional rectangular coordinate system on the image plane.
  • the origin of the image coordinate system is the intersection of the optical axis of the lens and the image plane (also called the principal point), and the x and y axes of the image coordinate system are parallel to the X and Y axes of the camera coordinate system, respectively. Coordinate values are represented by (x, y).
  • An image coordinate system expresses the position of a pixel in an image in physical units, such as millimeters.
  • the pixel coordinate system is a two-dimensional rectangular coordinate system commonly used in image processing, which reflects the arrangement of pixels in the camera's charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) chip.
  • its unit is pixels.
  • the upper left or lower left corner of the image plane is used as the origin, and the u-axis and v-axis are parallel to the x-axis and y-axis of the image coordinate system, respectively. The abscissa u represents the column where the pixel is located, and the ordinate v represents the row where the pixel is located.
  • the images collected by the camera are firstly in the form of standard electrical signals, and then converted into digital images through analog-to-digital conversion.
  • the storage form of each image is a P × Q array, and the value of each element in the P-row, Q-column image represents the grayscale of an image point.
  • Each such element is called a pixel, and the pixel coordinate system is the image coordinate system in pixels.
  • FIG. 2 is a schematic diagram of the relationship between a pixel coordinate system and an image coordinate system provided by the present application.
  • the pixel coordinate system is related to the image coordinate system by a translation.
  • (u0, v0) are the coordinates, in the pixel coordinate system, of the origin (principal point) of the image coordinate system;
  • dx and dy are the physical dimensions of each pixel along the x-axis and y-axis, respectively.
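  • written out (a standard reconstruction of the relation described above), the conversion is:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$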
  • the internal parameter matrix N can be understood as follows: each value in the matrix is related only to the internal parameters of the camera and does not change with the position of the object.
  • the transformation from three-dimensional coordinates to two-dimensional coordinates is the perspective projection process (the central projection method projects the object onto the projection surface, obtaining a single-plane projection close to the visual effect, i.e., an imaging method in which, as with the human eye, near objects appear large and far objects appear small).
  • f represents the focal length when the camera captures an image, that is, the distance between the image plane and the origin of the camera coordinate system
  • Zc represents the distance between the photographed object and the shooting device along the optical axis, which is a known quantity.
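  • the projection formula itself is not reproduced in this text; a standard reconstruction of the perspective relation implied by these definitions is:

$$x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c}$$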
  • in general, the world coordinate system and the camera coordinate system do not coincide.
  • to project a point onto the image, its coordinates must first be converted to the camera coordinate system. Any two three-dimensional coordinate systems can be related by a rotation and a translation, so the conversion of a rigid body from the world coordinate system to the camera coordinate system can likewise be obtained by rotation and translation.
  • let the coordinate of point P in the world coordinate system be Xw, the vertical distance from P to the optical center be s, the coordinate of P on the image plane be x, the relative rotation between the world coordinate system and the camera coordinate system be the matrix R (a rotation matrix with three rows and three columns), and the relative displacement be a vector T (three rows and one column).
  • the homogeneous coordinate matrix composed of a rotation matrix and a translation vector is expressed as follows:
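  • the formula image is not reproduced in this text; the standard form, consistent with the symbol definitions below, is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$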
  • (Xw, Yw, Zw, 1) are the homogeneous coordinates of the world coordinate system;
  • (Xc, Yc, Zc, 1) are the homogeneous coordinates of the camera coordinate system. It should be understood that since the transformation matrix between the world coordinate system and the camera coordinate system is unrelated to the camera's internal parameters, it is also called the extrinsic parameter matrix.
  • camera calibration: in image measurement and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of a space object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters.
  • the purpose of camera calibration is to obtain the internal parameters of the camera (such as the internal parameter matrix) and the external parameters (such as the extrinsic parameter matrix); the process of solving these parameters is called camera calibration.
  • the eye box usually refers to the area where the driver's eyes can see the entire displayed image.
  • the typical eye box size is 130 mm × 50 mm. Because drivers differ in height, the eye box needs a movement range of about ±50 mm in the vertical direction.
  • within the eye box, the human eye can see a clear HUD virtual image. Referring to Figure 1a above, if the human eye is aligned with the center of the eye box, a complete and clear HUD virtual image can be seen. As the eye moves left and right or up and down, at some point in each direction the image deteriorates until it becomes unacceptable, i.e., beyond the eye box. Image distortion, wrong color rendering, or even no display may occur in areas beyond the eye box.
  • FIG. 3a is a schematic diagram of the architecture of an applicable system of the present application.
  • the system may include a target, a vehicle, a photographing device and a fixing component, where the vehicle includes an AR-HUD.
  • the specific structure of the AR-HUD can be seen in Figure 3b or Figure 3c.
  • the HUD virtual image generated by AR-HUD can be projected in the driver's front view.
  • the main principle of the AR-HUD is to use multiple curved mirrors or plane mirrors to magnify the HUD virtual image generated by the picture generation unit (PGU) and reflect it to a certain position outside the car, that is, into the driver's forward field of view (the eye box range), presenting the driver with an image that appears a certain distance (e.g. 2 to 20 m) away on the road.
  • the actual position of the HUD virtual image is determined by the HUD's optical system. In theory, the navigation lane lines projected by the AR HUD and related warning information should fit the actual road as closely as possible, preferably without errors.
  • according to the actual driving needs of the vehicle, the AR-HUD has an imaging distance of more than 7.5 meters, so that the HUD virtual image can be superimposed on objects or the real road scene to form an augmented reality effect; the driver thus obtains prompt information while observing the real environment, with no visual blind spot. It should be understood that if the AR-HUD only displays vehicle speed and prompt information, the exact position of the HUD virtual image does not matter much; but if navigation, advanced driving assistant system (ADAS) information, etc. are involved, the exact position of the HUD virtual image needs to be obtained.
  • the photographing device is arranged in the eye box area, wherein the eye box range is usually about 10 cm.
  • the photographing device may be, for example, a camera, and the system may include one photographing device or may include two photographing devices, wherein the two photographing devices are located at different positions and are both arranged in the eye box area.
  • the fixing assembly is used to fix the camera in a certain position.
  • the fixing device can be, for example, a robotic arm, or a slide rail.
  • for example, one camera can be fixed at different positions by a robotic arm or a slide rail, see Figure 3b; alternatively, the system can include two fixing components, each fixing one photographing device at one position, so that two cameras are fixed at two positions, see Figure 3c.
  • the target may include at least 6 target points whose coordinates in the vehicle body coordinate system are known; the HUD virtual image can be calibrated by means of the target.
  • the distance between the target and the vehicle can be determined according to the specific scene.
  • the shape of the target may be circular, or square, or other regular or irregular shapes, and the circular targets in Figures 3a to 3c are for illustration only.
  • the systems of Fig. 3a, Fig. 3b and Fig. 3c can be applied to the scenario of HUD virtual image calibration on the vehicle AR-HUD production line; please refer to Fig. 3d.
  • FIG. 3d is a scene to which this application can be applied.
  • the scenario can include test equipment and a vehicle.
  • the test equipment can be connected to the vehicle through the on-board diagnostic system (OBD) port on the vehicle.
  • the test equipment is plugged into the OBD port of the vehicle, enabling communication between the test equipment and the vehicle.
  • the OBD is usually installed in the vehicle and can be used to record the performance information of the vehicle in real time, wherein the interface through which the OBD communicates with the test equipment can be called an OBD port.
  • Test equipment is a professional instrument or system dedicated to vehicle inspection, that is, the test equipment can be used to obtain information about the vehicle. For example, it can be used to detect the performance of the vehicle, and the performance information of the vehicle (such as the imaging parameters of the HUD virtual image) can be obtained.
  • the test equipment can test the vehicle through developed test software; equivalently, equipment with the test software installed can be understood as test equipment. Examples include a personal computer (PC) installed with testing software, a tablet computer, or a special device such as a diagnostics tester (DT); a DT can also be called a tester, a vehicle diagnostic instrument or a host computer. Further, the test equipment can present various test information in the form of a graphical interface.
  • FIG. 4 exemplarily shows an image calibration method provided by the present application.
  • the test equipment in this method may be the test equipment in the above-mentioned FIG. 3d, and the AR-HUD may be the AR-HUD in any of the above-mentioned embodiments of FIG. 3a to FIG. 3d.
  • the method includes the following steps:
  • Step 401 the testing device obtains the coordinates (xT, yT, zT) of the target point on the target in the vehicle body coordinate system.
  • This step 401 is an optional step.
  • the coordinates of the target point on the target in the vehicle body coordinate system are (xT, yT, zT).
  • the target includes at least 6 target points, and at least 3 target points are not collinear. That is, the target is a target that has been calibrated.
  • T can take the values 1 to 6; please refer to FIG. 5, which is a schematic diagram of a target provided in the present application.
  • the target includes 6 target points.
  • the coordinates of the 6 target points in the vehicle body coordinate system are: (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), (x5, y5, z5), (x6, y6, z6).
  • Step 402 the test device sends a lighting instruction to the AR-HUD.
  • the AR-HUD receives the lighting instruction from the test equipment, and lights up according to the lighting instruction, so that the AR-HUD displays the HUD virtual image.
  • This step 402 is an optional step.
  • the HUD virtual image can be displayed at a certain position in front of the driver.
  • the HUD virtual image may be referred to as a calibration image or a test image.
  • C0 is the center point of the HUD virtual image;
  • C1, C2, C3 and C4 are four points above, below, to the left of and to the right of C0, all at equal distances from C0.
  • the distances from C0 may also be unequal; the specific distance may be determined according to the size of the HUD virtual image, for example 1/4 of the length and width of the HUD virtual image.
  • E1, E2, E5 and E6 are the four vertices of the edge of the HUD virtual image, and E3 and E4 are the center points of the vertical sides.
  • C0, C1, C2, C3 and C4 may represent the central region;
  • E1, E2, E3, E4, E5 and E6 may represent the edge region.
  • the test equipment can send instructions to the HUD through the OBD port, and the HUD lights up according to the instructions to display the HUD virtual image.
  • the size of the HUD virtual image is related to the type of AR-HUD, so the number of reference points on the HUD virtual image of different AR-HUDs will also vary.
  • the number of reference points on the HUD virtual image can be set according to the requirements given at the factory. For example, if the spacing between reference points is required to be less than 0.5 degrees with an even distribution, the minimum number of reference points required on the HUD virtual image can be determined.
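  • as an illustrative calculation (not from the patent): for a HUD virtual image spanning a field of view of 10° × 5° with reference points at most 0.5° apart, an evenly distributed grid would need at least 21 × 11 = 231 reference points.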
  • Step 401 may be performed first and then step 402, or step 402 may be performed first and then step 401.
  • Step 403 the testing device acquires the first image of the target and the third image of the virtual HUD image displayed by the HUD.
  • the first image is obtained by photographing the target by the photographing device in the first position
  • the third image is obtained by photographing the HUD virtual image by the photographing device at the first position.
  • the photographing device may transmit the photographed first image and the third image to the test equipment (eg, through a network).
  • Step 404 the testing device acquires the second image of the target and the fourth image of the HUD virtual image.
  • the second image is obtained by photographing the target by the photographing device in the second position
  • the fourth image is obtained by photographing the HUD virtual image by the photographing device at the second position.
  • in one possible case, the above-mentioned system includes two shooting devices, that is, the target and the HUD virtual image are shot by two shooting devices, where one shooting device can be set at the first position and the other shooting device can be set at the second position; the shooting device at the first position shoots the target to obtain the first image and shoots the HUD virtual image to obtain the third image, and the shooting device at the second position shoots the target to obtain the second image and shoots the HUD virtual image to obtain the fourth image.
  • the photographing device at the first position and the photographing device at the second position may shoot at the same time, that is, the first image and the second image may be shot at the same time, and the third image and the fourth image may also be shot at the same time.
  • alternatively, the two photographing devices may not shoot at the same time; this is not limited in this application.
  • in another possible case, the first image and the third image may be obtained by the photographing device at the first position, and the second image and the fourth image may be obtained by the same photographing device after it has been moved to the second position.
  • the photographing device may be moved to the first position or the second position by, for example, a robotic arm or a slide rail, and may be moved left and right, or may be moved back and forth, or may be moved up and down.
  • the photographing device can transmit the second image and the fourth image obtained by photographing to the test equipment (eg, through a network).
  • both the first position and the second position are within the eye box area.
  • the driver's eyes can see a clear HUD virtual image within the eye box area; beyond the eye box area, the driver cannot see the relevant image, or sees a severely distorted image.
  • the distance between the first location and the second location is less than 10 cm.
  • Step 405 the testing device determines the first pixel coordinates of each target point included in the first image
  • the first pixel coordinates of each target point are the coordinates of each target point on the first image in the image coordinate system.
  • in a possible implementation, a checkerboard is used to represent each target point on the first image; an image processing algorithm identifies the straight-line and black-and-white features of the target region and detects parallel lines, and the target points are determined by their intersection points. Since the positional relationship between the target points is known (that is, the distance between target points is known), the correspondence between each target point and an intersection point in the first image can be inferred, and the first pixel coordinates of each target point included in the first image can then be determined.
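  • a minimal sketch of such a detection step using OpenCV (an illustrative library choice; the pattern size and names are assumptions):

```python
import cv2

def detect_target_points(image_path, pattern_size=(3, 2)):
    """Detect checkerboard intersection points on a target image and refine
    them to sub-pixel accuracy; pattern_size is the inner-corner grid
    (3 x 2 = 6 points here, matching a 6-point target)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if not found:
        raise RuntimeError("target not detected")
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)  # pixel coordinates (u, v), one row per point
```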
  • Step 406 the testing device determines the first extrinsic parameter matrix of the photographing device at the first position based on the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system, the first pixel coordinates of each target point on the first image and the third coordinate conversion relationship.
  • the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  • the pixel coordinates (u, v), the camera's internal parameter matrix and the coordinates (Xc, Yc, Zc) in the camera coordinate system satisfy a first relationship, namely formula 1; the coordinates (Xc, Yc, Zc) in the camera coordinate system, the camera's extrinsic parameter matrix and the coordinates (xT, yT, zT) in the vehicle body coordinate system satisfy a second relationship, namely formula 2.
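  • the formula images are not reproduced in this text; based on the coordinate-system relations introduced earlier, formula 1 and formula 2 have the standard forms:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \quad \text{(formula 1)}$$

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_T \\ y_T \\ z_T \\ 1 \end{bmatrix} \quad \text{(formula 2)}$$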
  • (dx, dy) represents the pixel size in the pixel array of the photographing device;
  • (u0, v0) represents the center coordinates of the pixel array of the photographing device;
  • f is the focal length when the photographing device captures an image;
  • Zc represents the distance between the photographed object and the photographing device along the optical axis, which is a known quantity.
  • there is a third coordinate conversion relationship between the first pixel coordinates of each target point and the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system; that is, substituting the first pixel coordinates of each target point and the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system into the above formula 1 and formula 2 yields formula 3 and formula 4.
  • by solving formula 3 and formula 4, the first extrinsic parameter matrix can be determined.
  • for example, the first pixel coordinates of the 6 target points on the first image correspond one-to-one to the coordinates of the 6 target points in the vehicle body coordinate system, namely (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), (x5, y5, z5), (x6, y6, z6).
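  • solving formulas 3 and 4 for the extrinsic parameter matrix is, in practice, a perspective-n-point (PnP) problem; a minimal sketch using OpenCV (an illustrative choice, with hypothetical names):

```python
import cv2
import numpy as np

def extrinsic_matrix(body_points, pixel_points, camera_matrix, dist_coeffs=None):
    """Solve for the extrinsic matrix [R|T] of the camera from >= 6 target
    points with known vehicle-body coordinates (cf. formulas 1-4).

    body_points: (n, 3) target-point coordinates in the vehicle body frame.
    pixel_points: (n, 2) corresponding pixel coordinates on the image.
    camera_matrix: 3x3 intrinsic matrix N from camera calibration."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(body_points, dtype=np.float64),
        np.asarray(pixel_points, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solution failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return np.hstack([R, tvec])  # 3x4 extrinsic matrix [R|T]
```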
  • Step 407 the testing device determines the second pixel coordinates of each target point included on the second image
  • the second pixel coordinates of each target point are the coordinates of each target point in the image coordinate system on the second image.
  • for step 407, reference may be made to the introduction of the above step 405; details are not repeated here.
  • Step 408 the testing device determines the second extrinsic parameter matrix of the photographing device at the second position based on the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system and the second pixel coordinates of each target point on the second image.
  • there is a fourth coordinate conversion relationship between the second pixel coordinates of each target point and the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system; that is, substituting the second pixel coordinates of each target point and the coordinates (xT, yT, zT) of each target point in the vehicle body coordinate system into the above formula 1 and formula 2 yields formula 5 and formula 6.
  • by solving formula 5 and formula 6, the second extrinsic parameter matrix can be determined.
  • for example, the second pixel coordinates of the 6 target points on the second image correspond one-to-one to the vehicle body coordinates (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), (x5, y5, z5), (x6, y6, z6).
  • in this way, the first extrinsic parameter matrix of the photographing device at the first position and the second extrinsic parameter matrix of the photographing device at the second position can be obtained. That is to say, through the above steps 401 to 408 the camera position calibration is completed; calibrating the camera position against target points with known coordinates helps to improve the accuracy of the camera position calibration.
  • it should be noted that the extrinsic parameter matrix of the photographing device is related to the position of the photographing device; that is, the extrinsic parameter matrices of the first image and the third image taken by the photographing device at the first position are the same, both being the first extrinsic parameter matrix, and the extrinsic parameter matrices of the second image and the fourth image taken by the photographing device at the second position are the same, both being the second extrinsic parameter matrix.
Step 409: the test equipment determines the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image.

Here, the third pixel coordinates of the n reference points are the coordinates of the n reference points on the third image in the image coordinate system, and the fourth pixel coordinates of the n reference points are the coordinates of the n reference points on the fourth image in the image coordinate system.

It should be noted that the image coordinate systems used to determine the first, second, third, and fourth pixel coordinates are the same. For example, the images captured by the same photographing device all take the lower-left corner of the image (e.g., of the first, second, third, and fourth images) as the origin, or all take the lower-right corner as the origin.

Here, the third pixel coordinates are denoted (u_H, v_H) and the fourth pixel coordinates are denoted (u'_H, v'_H); for the manner of determining them, reference may be made to the introduction of the above step 405, and details are not repeated here.

The above steps 405 to 409 are all optional steps.
Step 410: the test equipment determines the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image.

In a possible implementation, the first coordinate conversion relationship may be determined according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system. Further, optionally, the test equipment may determine the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system according to the above Formula 1 and Formula 2, the third pixel coordinates of the n reference points, and the fourth pixel coordinates of the n reference points.

Exemplarily, substituting the third pixel coordinates (u_H, v_H) of the n reference points and the first extrinsic parameter matrix [R1|t1] into the above Formula 1 and Formula 2 yields Formula 7 and Formula 8, that is, the first coordinate conversion relationship between the third pixel coordinates of the n reference points and the coordinates (x_H, y_H, z_H) of the n reference points in the vehicle body coordinate system:

Formulas 7 and 8: Z_c · [u_H, v_H, 1]^T = N · (R1 · [x_H, y_H, z_H]^T + t1)

Based on the same process, the second coordinate conversion relationship can be determined according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system. Exemplarily, substituting the fourth pixel coordinates (u'_H, v'_H) of the n reference points and the second extrinsic parameter matrix [R2|t2] into Formula 1 and Formula 2 yields Formula 9 and Formula 10:

Formulas 9 and 10: Z_c' · [u'_H, v'_H, 1]^T = N · (R2 · [x_H, y_H, z_H]^T + t2)

Further, the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system may be determined according to the first coordinate conversion relationship and the second coordinate conversion relationship determined above. That is to say, based on the above Formula 7, Formula 8, Formula 9, and Formula 10 (four equations in the three unknown vehicle-body coordinates of each reference point), the vehicle-body coordinates (x_H, y_H, z_H) of each of the n reference points of the HUD virtual image can be determined, namely (x_1, y_1, z_1), (x_2, y_2, z_2), ..., (x_n, y_n, z_n).

It should be noted that the HUD virtual image is a surface in three-dimensional space. If the HUD virtual image is perpendicular to the X axis, the x values of the n reference points of the HUD virtual image are the same; if it is not perpendicular to the X axis, the x values of the n reference points may differ.
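To make the two-view solve concrete, here is a minimal least-squares sketch that stacks the four equations per reference point as described above. The patent only says that the coordinates "can be determined", so the least-squares formulation is an assumption; OpenCV's triangulatePoints would be a practical alternative.

```python
import numpy as np

def triangulate(K, R1, t1, R2, t2, uv1, uv2):
    """Solve Formulas 7-10 for one reference point's body coordinates."""
    P1 = K @ np.column_stack([R1, t1])       # first-position projection
    P2 = K @ np.column_stack([R2, t2])       # second-position projection
    A, b = [], []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        # u = P0.X / P2.X and v = P1.X / P2.X, rearranged to be linear in X
        A.append(u * P[2, :3] - P[0, :3]); b.append(P[0, 3] - u * P[2, 3])
        A.append(v * P[2, :3] - P[1, :3]); b.append(P[1, 3] - v * P[2, 3])
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return X                                  # (x_H, y_H, z_H)
```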
Step 411: the test equipment determines the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.

When judging the eligibility of an AR-HUD, a series of imaging parameters (i.e., detection indicators) of the HUD virtual image need to be provided, for example one or more of the virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, rotational deformation, and so on.

By capturing four images at the first position and the second position, the coordinates of the HUD virtual image in the vehicle body coordinate system can be determined, from which the imaging parameters of the HUD virtual image can be further determined. Compared with the zoom-based approach of the prior art, this image calibration method is simple, fast, and accurate.

The process of determining the imaging parameters of the HUD virtual image is exemplarily shown below. For ease of description, the reference points of the HUD virtual image in the above FIG. 6 are taken as an example.
Imaging parameter 1: the virtual image distance.

The virtual image distance refers to the distance from the HUD virtual image to the human eye; see FIG. 7.

In a possible implementation, the average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image may be determined, together with the x coordinate of the eyebox center in the vehicle body coordinate system; the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system is determined as the virtual image distance. It should be noted that the coordinates (x_e, y_e, z_e) of the eyebox center position in the vehicle body coordinate system were already determined during the AR-HUD design.

In other words, the virtual image distance can be determined by the following Formula 11:

Formula 11: VID = |(x_1 + x_2 + ... + x_n)/n − x_e|

where x_e is the x coordinate of the eyebox center position in the vehicle body coordinate system, and x_1, x_2, ..., x_n are the x coordinates, in the vehicle body coordinate system, of the n reference points of the HUD virtual image.
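A minimal sketch of Formula 11, assuming the reference points have already been expressed in the vehicle body coordinate system (the function and argument names are illustrative):

```python
def virtual_image_distance(ref_pts, eyebox_center):
    """Formula 11: mean reference-point x minus eyebox-center x."""
    xs = [p[0] for p in ref_pts]              # x lies along the driving direction
    return abs(sum(xs) / len(xs) - eyebox_center[0])
```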
Imaging parameter 2: the field of view.

The field of view includes a horizontal field of view (H_FOV) and a vertical field of view (V_FOV). The horizontal field of view refers to the maximum visible range of the human eye in the horizontal direction, and the vertical field of view refers to the maximum visible range of the human eye in the vertical direction.

In a possible implementation, the length of the HUD virtual image in the horizontal direction may be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; the horizontal field of view is then determined according to the length of the HUD virtual image in the horizontal direction and the virtual image distance. The horizontal field of view can be determined by the following Formula 12:

Formula 12: H_FOV = 2 × Arctan[(length of the HUD virtual image in the horizontal direction / 2) / VID]

With reference to FIG. 7, for example, H_FOV = 2 × Arctan[E1E2/(2 × VID)] = 2 × Arctan[E3E4/(2 × VID)] = 2 × Arctan[E5E6/(2 × VID)], or, averaging the three rows, H_FOV = 2 × Arctan[(E1E2 + E3E4 + E5E6)/(6 × VID)].

In a possible implementation, the length of the HUD virtual image in the vertical direction may be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; the vertical field of view is then determined according to the length of the HUD virtual image in the vertical direction and the virtual image distance. The vertical field of view can be determined by the following Formula 13:

Formula 13: V_FOV = 2 × Arctan[(length of the HUD virtual image in the vertical direction / 2) / VID]

With reference to FIG. 7, for example, V_FOV = 2 × Arctan[E1E5/(2 × VID)] = 2 × Arctan[E1E3/VID], since E3 is the midpoint of the vertical edge E1E5.
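A minimal sketch of Formulas 12 and 13: the same helper serves both directions, given two opposite edge points of the HUD virtual image in vehicle-body coordinates (names are illustrative):

```python
import math

def fov_deg(edge_pt_a, edge_pt_b, vid):
    """Formulas 12/13: full angle subtended by two opposite edge points."""
    half_len = math.dist(edge_pt_a, edge_pt_b) / 2
    return 2 * math.degrees(math.atan(half_len / vid))

# e.g. h_fov = fov_deg(E1, E2, vid); v_fov = fov_deg(E1, E5, vid)
```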
Imaging parameter 3: the center position of the HUD virtual image.

In a possible implementation, the coordinates, in the vehicle body coordinate system, of the center reference point among the n reference points of the HUD virtual image may be determined as the center position of the HUD virtual image. Referring to FIG. 7, the vehicle-body coordinates of the reference point C0 are the center coordinates of the HUD virtual image.
Imaging parameter 4: the distortion rate of the first reference point.

Here, the first reference point may be any one or more of the n reference points.

Usually, the reference points in the central region of the HUD virtual image are less prone to distortion. With reference to FIG. 7, C0, C1, C2, C3, and C4 are usually considered not to be distorted. Based on these reference points that are not prone to distortion, the predicted distance between the center reference point and the first reference point can be determined; the actual distance between the center reference point and the first reference point is then determined, and the distortion rate of the first reference point is determined from the actual distance and the predicted distance.

In other words, the actual distance between the first reference point and the center reference point is determined, and the predicted distance of the first reference point is determined according to the center reference point and at least 4 reference points around the center reference point; the distortion rate of the first reference point can then be determined by the following Formula 14. With reference to FIG. 7, taking the first reference point E1 as an example, the predicted distance C0E1 between the center reference point C0 and the first reference point E1 is determined based on the reference points C0, C1, C2, C3, and C4 that are not prone to distortion, and then:

Formula 14: distortion rate of E1 = [(actual distance C0E1 / predicted distance C0E1) − 1] × 100%

The direction of the distortion can be identified by a positive or negative sign: a positive sign indicates that the distortion enlarges the original image, and a negative sign indicates that the distortion shrinks the original image.

Further, optionally, the distortion rate of the HUD virtual image may be determined according to the distortion rate of the first reference point. Exemplarily, the distortion rate of the HUD virtual image may be determined as the weighted average of the distortion rates of the n reference points; or the distortion rate of the most distorted of the n reference points may be determined as the distortion rate of the HUD virtual image; or the distortion rate of the central-region reference points (e.g., the average over any one or more of the reference points C0, C1, C2, C3, C4 in FIG. 6) and the distortion rate of the edge vertices (e.g., the average over any one or more of E1, E2, E3, E4, E5, E6 in FIG. 6) may be reported separately, each as a distortion rate of the HUD virtual image. It should be understood that the distortion in the central region of the HUD virtual image is relatively small, and the distortion in the edge region is relatively large.
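A minimal sketch of Formula 14. The exact prediction step is not reproduced in this text, so the sketch assumes, as in the example of FIG. 6, that the cross points C1 to C4 sit a quarter of the image width/height from C0 while the corners sit at half the width/height, making the undistorted corner offset twice each cross spacing; that grid-extrapolation rule is an assumption, not the patent's stated formula:

```python
import math

def distortion_rate(c0, c_h, c_v, corner):
    """Formula 14 with an assumed grid-extrapolation prediction step."""
    # c_h / c_v: horizontal and vertical cross points around C0 (e.g. C3, C1)
    predicted = math.hypot(2 * math.dist(c0, c_h), 2 * math.dist(c0, c_v))
    actual = math.dist(c0, corner)
    return (actual / predicted - 1) * 100.0   # sign: + enlarged, - shrunk
```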
Imaging parameter 5: the rotational deformation.

Because of AR-HUD assembly errors and the like, the HUD virtual image may exhibit rotational deformation, denoted by Φ. The HUD virtual image mainly rotates around the X axis, so the rotational deformation Φ can be determined by Formula 15:

Formula 15: Φ = Arctan(|z_E1 − z_E2| / |y_E1 − y_E2|)

Here, the second reference point and the third reference point are two of the n reference points in the same horizontal direction (correspondingly, a fourth reference point and a fifth reference point would be two of the n reference points in the same vertical direction).

With reference to FIG. 7, E1 and E2 are two reference points in the same horizontal direction, as are E3 and E4, and E5 and E6; the rotational deformations measured from E1E2, E3E4, and E5E6 are the same. For ease of description, the rotational deformation of the HUD virtual image is described taking E1 and E2 as an example: z_E1 is the z coordinate of the reference point E1 in the vehicle body coordinate system, z_E2 is the z coordinate of the reference point E2 in the vehicle body coordinate system, y_E1 is the y coordinate of the reference point E1 in the vehicle body coordinate system, and y_E2 is the y coordinate of the reference point E2 in the vehicle body coordinate system.
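A minimal sketch of Formula 15 for a same-row pair such as (E1, E2), assuming each point is a (x, y, z) triple in the vehicle body coordinate system:

```python
import math

def rotational_deformation_deg(p_a, p_b):
    """Formula 15; assumes the pair is horizontal, so y_a != y_b."""
    return math.degrees(math.atan(abs(p_a[2] - p_b[2]) /
                                  abs(p_a[1] - p_b[1])))
```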
Imaging parameter 6: ghosting.

Because both the inner and outer surfaces of the windshield receive the light emitted by the AR-HUD and reflect it back to the driver's eyes at particular angles, the light reflected by the two surfaces produces a ghost image; see FIG. 8. This ghosting arises essentially only in the vertical direction (i.e., the Z-axis direction). With reference to FIG. 7, the ghost of the line E3E4 is relatively noticeable, while for E1E2 and E5E6, which lie at the edges of the HUD image, part of the light may be reflected outside the eyebox. It should be understood that the difference between the ghosts of reference points E3 and E4 is small, so the ghosts of these two points can be considered the same.

Taking the reference point E3 of the HUD virtual image as an example, the ghost Ψ can be determined by Formula 17:

Formula 17: Ψ = Arctan(|z_E3,main − z_E3,sub| / VID)

where z_E3,main is the Z-axis position of the primary image of reference point E3, z_E3,sub is the Z-axis position of the secondary image of reference point E3, and VID is the virtual image distance. It should be noted that the larger the virtual image distance VID, the smaller the angle subtended at the human eye by the two images of E3; when VID is large, the ghost is negligible.

It should be noted that the above ways of determining the parameters are only examples; the parameters may also be determined in other ways based on the determined coordinates of the n reference points, which is not limited in this application.

It should also be noted that, in the above embodiments, E_iE_j denotes the distance between the reference points E_i and E_j in the vehicle body coordinate system, which can be determined by E_iE_j = √[(x_Ei − x_Ej)² + (y_Ei − y_Ej)² + (z_Ei − z_Ej)²]; with reference to FIG. 7, i and j can each take any integer from 1 to 6 (for example, E1E2 means i = 1 and j = 2). Likewise, C_iC_j denotes the distance between the reference points C_i and C_j in the vehicle body coordinate system, determined in the same way; with reference to FIG. 7, i and j can each take any integer from 0 to 4 (for example, C1C2 means i = 1 and j = 2).
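A minimal sketch of the ghost angle as reconstructed in Formula 17 above (the arctan form follows from the definitions given, but since the original formula is not reproduced in this text it should be read as an assumption):

```python
import math

def ghost_deg(z_main, z_sub, vid):
    """Formula 17 as reconstructed above: eye-subtended ghost angle."""
    return math.degrees(math.atan(abs(z_main - z_sub) / vid))
```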
In this application, after the test equipment determines the imaging parameters of the HUD virtual image, it can transmit the parameters to the HUD through the OBD port to complete the calibration of the HUD virtual image.
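Putting the pieces together, a test-equipment driver might look like the following sketch. It assumes the solve_extrinsic and triangulate functions sketched above are in scope; the intrinsics, point counts, and data are all hypothetical placeholders (random arrays stand in for real detections, so the numerical output is meaningless and only the data flow is shown):

```python
import numpy as np

# Hypothetical intrinsics and measured data (all values illustrative).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
target_body = np.random.rand(6, 3)                # known (x_T, y_T, z_T)
pix1 = np.random.rand(6, 2) * 100                 # detections, first image
pix2 = np.random.rand(6, 2) * 100                 # detections, second image

R1, t1 = solve_extrinsic(K, pix1, target_body)    # steps 401-406
R2, t2 = solve_extrinsic(K, pix2, target_body)    # steps 407-408
ref_uv1 = np.random.rand(11, 2) * 100             # third image, step 409
ref_uv2 = np.random.rand(11, 2) * 100             # fourth image, step 409
ref_body = [triangulate(K, R1, t1, R2, t2, a, b)  # step 410
            for a, b in zip(ref_uv1, ref_uv2)]
# Step 411: feed ref_body into the imaging-parameter helpers above.
```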
It can be understood that, to implement the functions in the above embodiments, the image calibration apparatus or the test equipment includes corresponding hardware structures and/or software modules for executing each function. Those skilled in the art should readily appreciate that, with the modules and method steps of the examples described in the embodiments disclosed in this application, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
Based on the above and the same concept, FIG. 9 and FIG. 10 are schematic structural diagrams of possible image calibration apparatuses provided by this application. These image calibration apparatuses can be used to implement the functions of the test equipment in the above method embodiments, and thus can also achieve the beneficial effects of the above method embodiments. In this application, the image calibration apparatus may be the test equipment shown in FIG. 3d, or may be a module (such as a chip) applied to the test equipment.

As shown in FIG. 9, the image calibration apparatus 900 includes a processing module 901 and a transceiver module 902, and is used to implement the functions of the test equipment in the method embodiment shown in FIG. 4 above.

When the image calibration apparatus 900 is used to implement the functions of the test equipment of the method embodiment shown in FIG. 4: the transceiver module 902 is used to acquire the first image of the target and the third image of the HUD virtual image displayed by the head-up display HUD, where the first image is obtained by the photographing device at the first position photographing the target and the third image is obtained by the photographing device at the first position photographing the HUD virtual image, and to acquire the second image of the target and the fourth image of the HUD virtual image, where the second image is obtained by the photographing device at the second position photographing the target and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image; the processing module 901 is used to determine the first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in the vehicle body coordinate system, to determine the second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system, to determine the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image, and to determine the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.

More detailed descriptions of the above processing module 901 and transceiver module 902 can be obtained directly from the related descriptions in the method embodiment shown in FIG. 4, and are not repeated here.
It should be understood that the processing module 901 in this embodiment of the present application may be implemented by a processor or processor-related circuit components, and the transceiver module 902 may be implemented by a transceiver or transceiver-related circuit components.
Based on the above and the same concept, as shown in FIG. 10, the present application further provides an image calibration apparatus 1000. The image calibration apparatus 1000 may include a processor 1001 and a transceiver 1002, which are coupled to each other. It can be understood that the transceiver 1002 may be an interface circuit or an input/output interface. Optionally, the image calibration apparatus 1000 may further include a memory 1003 for storing instructions executed by the processor 1001, input data required by the processor 1001 to run the instructions, or data generated after the processor 1001 runs the instructions.

When the image calibration apparatus 1000 is used to implement the method shown in FIG. 4, the processor 1001 is used to perform the functions of the above processing module 901, and the transceiver 1002 is used to perform the functions of the above transceiver module 902.
It can be understood that the processor in the embodiments of the present application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. A general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may be located in the image calibration apparatus. Of course, the processor and the storage medium may also exist in the image calibration apparatus as discrete components.
All or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are executed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, network equipment, user equipment, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid state drive (SSD).
  • the word "exemplary” is used to mean serving as an example, illustration, or illustration. Any embodiment or design described in this application as “exemplary” should not be construed as preferred or advantageous over other embodiments or designs. Alternatively, it can be understood that the use of the word example is intended to present concepts in a specific manner, and not to limit the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Instrument Panels (AREA)
  • Image Analysis (AREA)

Abstract

An image calibration method and apparatus, to solve the prior-art problem that the virtual image distance of the determined HUD virtual image has a large error. The method includes: acquiring a first image and a second image of a target and a third image and a fourth image of a HUD virtual image, where the first image and the third image are captured by a camera at a first position, and the second image and the fourth image are captured by the camera at a second position; determining a first extrinsic parameter matrix of the camera according to the first image and the coordinates of the target in a vehicle body coordinate system; determining a second extrinsic parameter matrix of the camera according to the second image and the coordinates of the target in the vehicle body coordinate system; and determining the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image, so as to determine the imaging parameters of the HUD virtual image. With the four images captured at the first position and the second position, the coordinates of the HUD virtual image in the vehicle body coordinate system can be determined, and the imaging parameters of the HUD virtual image can thus be determined accurately.

Description

SUMMARY

This application provides an image calibration method and apparatus, to improve as far as possible the accuracy of the determined imaging parameters of a HUD virtual image.

According to a first aspect, this application provides an image calibration method. The method includes: acquiring a first image of a target and a third image of a HUD virtual image displayed by a HUD, and acquiring a second image of the target and a fourth image of the HUD virtual image, where the first image is obtained by a photographing device at a first position photographing the target, the third image is obtained by the photographing device at the first position photographing the HUD virtual image, the second image is obtained by a photographing device at a second position photographing the target, and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image; determining a first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in a vehicle body coordinate system; determining a second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determining the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image; and further, determining the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.

Based on this solution, by capturing four images at the first position and the second position, the coordinates of the HUD virtual image in the vehicle body coordinate system can be determined, and the imaging parameters of the HUD virtual image can be further determined from those coordinates. Compared with the zoom-based approach of the prior art, this image calibration method can determine the imaging parameters of the HUD virtual image simply, quickly, and accurately.

Further, optionally, the first position is different from the second position.

In a possible implementation, the vehicle body coordinate system may take a front wheel of the vehicle as its origin and the forward or backward direction of the vehicle as its X axis.

In a possible implementation, the first extrinsic parameter matrix and the second extrinsic parameter matrix may be determined as follows: determining the first pixel coordinates of each target point on the first image, and determining the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a third coordinate conversion relationship, where the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system; and determining the second pixel coordinates of each target point on the second image, and determining the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a fourth coordinate conversion relationship, where the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.

Further, optionally, the first pixel coordinates of each target point are the coordinates of each target point on the first image in an image coordinate system, and the second pixel coordinates of each target point are the coordinates of each target point on the second image in the image coordinate system.

In a possible implementation, the HUD virtual image includes n reference points, where n is an integer greater than 1. The third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image may be determined separately; a first coordinate conversion relationship is determined according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix; a second coordinate conversion relationship is determined according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system, and the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system are determined according to the first coordinate conversion relationship and the second coordinate conversion relationship.

Further, optionally, the third pixel coordinates of the n reference points are the coordinates of the n reference points on the third image in the image coordinate system, and the fourth pixel coordinates of the n reference points are the coordinates of the n reference points on the fourth image in the image coordinate system.

In a possible implementation, the image coordinate system may take the upper-left corner or the lower-left corner of the image as its origin.

In a possible implementation, the imaging parameters of the HUD virtual image include but are not limited to any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, or rotational deformation.

In the above manner, the coordinates of each of the n reference points in the vehicle body coordinate system can be determined accurately and quickly, and based on these coordinates the imaging parameters of the HUD virtual image can be determined simply, quickly, and accurately, which helps improve the precision and efficiency of calibrating the HUD virtual image.
The determination of the imaging parameters of the HUD virtual image is introduced below.

Imaging parameter 1: the virtual image distance.

In a possible implementation, the average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image may be determined, where the x coordinate lies along the forward or backward direction of the vehicle; the x coordinate of the center of the eyebox in the vehicle body coordinate system is determined; and the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system is determined as the virtual image distance.

Imaging parameter 2: the field of view.

In a possible implementation, the field of view includes a horizontal field of view and a vertical field of view.

Further, optionally, the length of the HUD virtual image in the horizontal direction may be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction, and the horizontal field of view is determined according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.

Exemplarily, H_FOV = 2 × Arctan[(length of the HUD virtual image in the horizontal direction / 2) / VID].

In a possible implementation, the length of the HUD virtual image in the vertical direction may be determined according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction, and the vertical field of view is determined according to the length of the HUD virtual image in the vertical direction and the virtual image distance.

Exemplarily, V_FOV = 2 × Arctan[(length of the HUD virtual image in the vertical direction / 2) / VID].

Imaging parameter 3: the center position.

In a possible implementation, the coordinates, in the vehicle body coordinate system, of the center reference point among the n reference points of the HUD virtual image may be determined as the center position of the HUD virtual image.

Imaging parameter 4: the distortion rate.

In a possible implementation, the distortion rate of a first reference point may be determined, where the first reference point is at least one of the n reference points of the HUD virtual image; the distortion rate of the HUD virtual image is determined according to the distortion rate of the first reference point.

Further, optionally, the actual distance between the first reference point and the center reference point may be determined; the predicted distance of the first reference point is determined according to the center reference point and at least 4 reference points around the center reference point; and the distortion rate of the first reference point is determined according to the actual distance and the predicted distance.

Exemplarily, distortion rate of the first reference point = [(actual distance / predicted distance) − 1] × 100%.

Imaging parameter 5: the rotational deformation.

In a possible implementation, the z coordinate of a second reference point in the vehicle body coordinate system, the y coordinate of the second reference point in the vehicle body coordinate system, the z coordinate of a third reference point in the vehicle body coordinate system, and the y coordinate of the third reference point in the vehicle body coordinate system are determined, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; the rotational deformation is determined according to these z and y coordinates.

Exemplarily, rotational deformation = Arctan(|z2 − z3| / |y2 − y3|), where (y2, z2) and (y3, z3) are the coordinates of the second and third reference points in the vehicle body coordinate system.
According to a second aspect, this application provides an image calibration apparatus that can be used to implement the first aspect or any method of the first aspect, and that includes corresponding functional modules respectively implementing the steps of the above method. The functions may be implemented by hardware, or by hardware executing corresponding software, where the hardware or software includes one or more modules corresponding to the above functions.

In a possible implementation, the image calibration apparatus may include a transceiver module and a processing module. The transceiver module is configured to: acquire a first image of a target and a third image of a HUD virtual image displayed by a head-up display HUD, where the first image is obtained by a photographing device at a first position photographing the target and the third image is obtained by the photographing device at the first position photographing the HUD virtual image; and acquire a second image of the target and a fourth image of the HUD virtual image, where the second image is obtained by a photographing device at a second position photographing the target and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image. The processing module is configured to: determine a first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in a vehicle body coordinate system; determine a second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determine the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image; and determine the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.

In a possible implementation, the HUD virtual image includes n reference points, where n is an integer greater than 1, and the processing module is specifically configured to: determine the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image respectively; determine a first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; determine a second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and determine the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system according to the first coordinate conversion relationship and the second coordinate conversion relationship.

In a possible implementation, the imaging parameters of the HUD virtual image include but are not limited to any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, or rotational deformation.

In a possible implementation, the imaging parameters include the virtual image distance, and the processing module is specifically configured to: determine the average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image, where the x coordinate lies along the forward or backward direction of the vehicle; determine the x coordinate of the center of the eyebox in the vehicle body coordinate system; and determine, as the virtual image distance, the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system.

In a possible implementation, the imaging parameters further include the horizontal field of view, and the processing module is specifically configured to: determine the length of the HUD virtual image in the horizontal direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; and determine the horizontal field of view according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.

In a possible implementation, the imaging parameters further include the vertical field of view, and the processing module is specifically configured to: determine the length of the HUD virtual image in the vertical direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; and determine the vertical field of view according to the length of the HUD virtual image in the vertical direction and the virtual image distance.

In a possible implementation, the imaging parameters include the center position, and the processing module is specifically configured to determine, as the center position of the HUD virtual image, the coordinates in the vehicle body coordinate system of the center reference point among the n reference points of the HUD virtual image.

In a possible implementation, the imaging parameters include the distortion rate, and the processing module is specifically configured to: determine the distortion rate of a first reference point, where the first reference point is at least one of the n reference points of the HUD virtual image; and determine the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.

In a possible implementation, n is an integer greater than 5, and the processing module is specifically configured to: determine the actual distance between the first reference point and the center reference point; determine the predicted distance of the first reference point according to the center reference point and at least 4 reference points around the center reference point; and determine the distortion rate of the first reference point according to the actual distance and the predicted distance.

In a possible implementation, the imaging parameters include the rotational deformation, and the processing module is specifically configured to: determine the z coordinate of a second reference point in the vehicle body coordinate system, the y coordinate of the second reference point in the vehicle body coordinate system, the z coordinate of a third reference point in the vehicle body coordinate system, and the y coordinate of the third reference point in the vehicle body coordinate system, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and determine the rotational deformation according to these z and y coordinates.

In a possible implementation, the processing module is specifically configured to: determine the first pixel coordinates of each target point on the first image; determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a third coordinate conversion relationship, where the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system; determine the second pixel coordinates of each target point on the second image; and determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a fourth coordinate conversion relationship, where the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
According to a third aspect, this application provides an image calibration apparatus used to implement the first aspect or any method of the first aspect, including corresponding functional components respectively implementing the steps of the above method. The functions may be implemented by hardware, or by hardware executing corresponding software, where the hardware or software includes one or more modules corresponding to the above functions.

In a possible implementation, the image calibration apparatus may include a transceiver and a processor. The processor may be configured to support the image calibration apparatus in performing the corresponding functions of the image calibration apparatus shown above, and the transceiver is used to support communication between the image calibration apparatus and other devices. The transceiver may be an independent receiver, an independent transmitter, a transceiver integrating transmitting and receiving functions, or an interface circuit. Optionally, the image calibration apparatus may further include a memory, which may be coupled to the processor and which stores the program instructions and data necessary for the image calibration apparatus.

The transceiver is configured to: acquire a first image of a target and a third image of a HUD virtual image displayed by a head-up display HUD, where the first image is obtained by a photographing device at a first position photographing the target and the third image is obtained by the photographing device at the first position photographing the HUD virtual image; and acquire a second image of the target and a fourth image of the HUD virtual image, where the second image is obtained by a photographing device at a second position photographing the target and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image. The processor is configured to: determine a first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in a vehicle body coordinate system; determine a second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determine the coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image; and determine the imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.

In a possible implementation, the HUD virtual image includes n reference points, where n is an integer greater than 1, and the processor is specifically configured to: determine the third pixel coordinates of the n reference points on the third image and the fourth pixel coordinates of the n reference points on the fourth image respectively; determine a first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, where the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; determine a second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, where the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and determine the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system according to the first coordinate conversion relationship and the second coordinate conversion relationship.

In a possible implementation, the imaging parameters of the HUD virtual image include but are not limited to any one or more of: virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, or rotational deformation.

In a possible implementation, the imaging parameters include the virtual image distance, and the processor is specifically configured to: determine the average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image, where the x coordinate lies along the forward or backward direction of the vehicle; determine the x coordinate of the center of the eyebox in the vehicle body coordinate system; and determine, as the virtual image distance, the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system.

In a possible implementation, the imaging parameters further include the horizontal field of view, and the processor is specifically configured to: determine the length of the HUD virtual image in the horizontal direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; and determine the horizontal field of view according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.

In a possible implementation, the imaging parameters further include the vertical field of view, and the processor is specifically configured to: determine the length of the HUD virtual image in the vertical direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; and determine the vertical field of view according to the length of the HUD virtual image in the vertical direction and the virtual image distance.

In a possible implementation, the imaging parameters include the center position, and the processor is specifically configured to determine, as the center position of the HUD virtual image, the coordinates in the vehicle body coordinate system of the center reference point among the n reference points of the HUD virtual image.

In a possible implementation, the imaging parameters include the distortion rate, and the processor is specifically configured to: determine the distortion rate of a first reference point, where the first reference point is at least one of the n reference points of the HUD virtual image; and determine the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.

In a possible implementation, n is an integer greater than 5, and the processor is specifically configured to: determine the actual distance between the first reference point and the center reference point; determine the predicted distance of the first reference point according to the center reference point and at least 4 reference points around the center reference point; and determine the distortion rate of the first reference point according to the actual distance and the predicted distance.

In a possible implementation, the imaging parameters include the rotational deformation, and the processor is specifically configured to: determine the z coordinate of a second reference point in the vehicle body coordinate system, the y coordinate of the second reference point in the vehicle body coordinate system, the z coordinate of a third reference point in the vehicle body coordinate system, and the y coordinate of the third reference point in the vehicle body coordinate system, where the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and determine the rotational deformation according to these z and y coordinates.

In a possible implementation, the processor is specifically configured to: determine the first pixel coordinates of each target point on the first image; determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a third coordinate conversion relationship, where the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system; determine the second pixel coordinates of each target point on the second image; and determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a fourth coordinate conversion relationship, where the fourth coordinate conversion relationship is the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
According to a fourth aspect, this application provides an image calibration system, which includes a vehicle, a photographing device, and an image calibration apparatus. The image calibration apparatus may be used to perform the first aspect or any method of the first aspect, and the photographing device may be used to capture the above first image, second image, third image, and fourth image.

According to a fifth aspect, this application provides a computer-readable storage medium storing a computer program or instructions which, when executed by an apparatus, cause the image calibration apparatus to perform the method in the first aspect or any possible implementation of the first aspect.

According to a sixth aspect, this application provides a computer program product including a computer program or instructions which, when executed by an image calibration apparatus, implement the method in the first aspect or any possible implementation of the first aspect.

For the technical effects that can be achieved by any one of the second to sixth aspects, refer to the description of the beneficial effects in the first aspect above; details are not repeated here.
BRIEF DESCRIPTION OF DRAWINGS

FIG. 1a is a schematic diagram, provided by this application, of the human eye, the HUD virtual image, and the road surface in three-point alignment;

FIG. 1b is a schematic diagram of a prior-art way of measuring the position of a HUD virtual image;

FIG. 2 is a schematic diagram of the relationship between a pixel coordinate system and an image coordinate system provided by this application;

FIG. 3a is a schematic architecture diagram of a system provided by this application;

FIG. 3b is a schematic architecture diagram of another system provided by this application;

FIG. 3c is a schematic architecture diagram of yet another system provided by this application;

FIG. 3d is a schematic diagram of an application scenario provided by this application;

FIG. 4 is a schematic flowchart of an image calibration method provided by this application;

FIG. 5 is a schematic diagram of a target provided by this application;

FIG. 6 is a schematic diagram of a HUD virtual image provided by this application;

FIG. 7 is a schematic diagram of imaging parameters of a HUD virtual image provided by this application;

FIG. 8 is a schematic diagram of the principle of ghosting provided by this application;

FIG. 9 is a schematic structural diagram of an image calibration apparatus provided by this application;

FIG. 10 is a schematic structural diagram of an image calibration apparatus provided by this application.
DETAILED DESCRIPTION OF EMBODIMENTS

The embodiments of this application are described in detail below with reference to the accompanying drawings.

As described in the background, the position of the HUD virtual image is currently measured by zoom measurement: the focus position at which the HUD virtual image appears sharpest is taken as the position of the HUD virtual image, and the distance from that position to the camera is taken as the virtual image distance. However, because the camera has a certain depth of field, within which the captured HUD virtual image is always sharp, determining the position of the HUD virtual image from the sharpest focal plane leads to a large error in the determined position.

In view of this, this application provides an image calibration method that can calibrate the HUD virtual image accurately and quickly. The image calibration method provided by this application is introduced in detail below with reference to the accompanying drawings.

Some terms used in this application are explained first. It should be noted that these explanations are intended to make the material easier for those skilled in the art to understand and do not limit the protection scope claimed by this application.
1. World coordinate system

The world coordinate system is introduced to describe the position of a target object in the real world; it is the absolute coordinate system of the objective three-dimensional world. Because the camera is placed in three-dimensional space, the world coordinate system is needed as the reference coordinate system to describe the position of the camera, and it is used to describe the position of any other object placed in this three-dimensional space. (X_w, Y_w, Z_w) denotes the coordinate values of an object in the world coordinate system.

2. Camera coordinate system

The camera coordinate system, also called the optical-center coordinate system, is a coordinate system established on the camera. It is defined to describe objects from the camera's point of view and serves as the intermediate link between the world coordinate system and the image (or pixel) coordinate system; its unit is the meter. Its origin is the optical center of the camera lens, its X and Y axes are parallel to the x and y axes of the image coordinate system, and the optical axis of the camera is its Z axis. (X_c, Y_c, Z_c) denotes coordinate values in the camera coordinate system.

3. Image coordinate system

The image coordinate system is introduced to describe the relationship of an object passing from the camera coordinate system into the image during imaging, making it convenient to further obtain coordinates in the pixel coordinate system. It is a two-dimensional rectangular coordinate system on the image plane. Its origin is the intersection of the lens optical axis and the image plane (also called the principal point), and its x and y axes are parallel to the X and Y axes of the camera coordinate system. (x, y) denotes its coordinate values. The image coordinate system expresses the position of a pixel in the image in physical units (for example, millimeters).

4. Pixel coordinate system

The pixel coordinate system is a two-dimensional rectangular coordinate system commonly used in image processing; it reflects the arrangement of pixels in the camera's charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) chip, and its unit is the pixel. Its origin is usually the upper-left or lower-left corner of the image plane, its u and v axes are parallel to the x and y axes of the image coordinate system, and (u, v) denotes its coordinate values, where the abscissa u is the column of the pixel and the ordinate v is the row. The image captured by the camera is first formed as a standard electrical signal and then converted into a digital image through analog-to-digital conversion. Each image is stored as a P×Q array, and the value of each element in the P-row, Q-column image represents the gray level of an image point. Each such element is called a pixel, and the pixel coordinate system is the image coordinate system expressed in units of pixels.
5. Relationships between the coordinate systems

5.1 Relationship between the pixel coordinate system and the image coordinate system (intrinsic matrix N).

Refer to FIG. 2, a schematic diagram of the relationship between the pixel coordinate system and the image coordinate system provided by this application. The pixel coordinate system and the image coordinate system are related by a translation:

u = x/dx + u0
v = y/dy + v0

where (u0, v0) are the coordinates of the origin (principal point) of the image coordinate system, and dx and dy are the physical dimensions of each pixel along the x and y axes respectively.

In matrix form:

[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] · [x, y, 1]^T

Here, the intrinsic matrix N can be understood as a matrix whose entries depend only on the camera's internal parameters and do not change with the position of the object.

5.2 Relationship between the image coordinate system and the camera coordinate system.

This is the conversion from three-dimensional to two-dimensional coordinates, i.e., the perspective projection process (central projection of the object onto the projection plane, giving a single-plane projection close to the visual effect seen by the human eye, in which nearby objects appear large and distant objects appear small).

In matrix form:

Z_c · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [X_c, Y_c, Z_c, 1]^T

where f is the focal length when the camera captures the image, i.e., the distance from the image plane to the origin of the camera coordinate system, and Z_c expresses the distance relationship between the photographed scene and the photographing device and is a known quantity.

5.3 Transformation between the camera coordinate system and the world coordinate system (extrinsic parameter matrix).

Usually the world coordinate system and the camera coordinate system do not coincide. In this case, to project a point P in the world coordinate system onto the image plane, the coordinates of the point must first be converted into the camera coordinate system. Any two three-dimensional coordinate systems can be converted into one another by a rotation and a translation, and the conversion of a rigid body from the world coordinate system to the camera coordinate system can likewise be obtained by a rotation and a translation.

Let the coordinates of point P in the world coordinate system be X_w, the perpendicular distance from P to the optical center be s, and its coordinates on the image plane be x; let the relative rotation between the world coordinate system and the camera coordinate system be the matrix R (a 3×3 rotation matrix), and the relative displacement be the vector T (3×1). The transformation, expressed as a homogeneous coordinate matrix combining the rotation matrix and the translation vector, is:

[X_c, Y_c, Z_c, 1]^T = [[R, T], [0, 1]] · [X_w, Y_w, Z_w, 1]^T

where (X_w, Y_w, Z_w, 1) are the homogeneous coordinates in the world coordinate system and (X_c, Y_c, Z_c, 1) are the homogeneous coordinates in the camera coordinate system. It should be understood that, because the transformation matrix between the world coordinate system and the camera coordinate system is independent of the camera, it is also called the extrinsic parameter matrix.
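To make the chain of Section 5 concrete, the following minimal numpy sketch composes the extrinsic transform (5.3), the perspective projection (5.2), and the pixel conversion (5.1); the function and argument names are illustrative only:

```python
import numpy as np

def world_to_pixel(f, dx, dy, u0, v0, R, T, Xw):
    """Compose 5.3 -> 5.2 -> 5.1: a world point becomes pixel (u, v)."""
    Xc = R @ Xw + T                               # 5.3 extrinsic transform
    x, y = f * Xc[0] / Xc[2], f * Xc[1] / Xc[2]   # 5.2 perspective projection
    return x / dx + u0, y / dy + v0               # 5.1 image-to-pixel shift
```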
6. Camera calibration

In image measurement processes and machine vision applications, to determine the relationship between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. The purpose of camera calibration is to solve for the camera's intrinsic parameters (for example, the intrinsic matrix) and extrinsic parameters (for example, the extrinsic parameter matrix), and the process of solving for these parameters is called camera calibration.

7. Eyebox

The eyebox usually refers to the range within which the driver's eyes can see the entire displayed image. A typical eyebox size is 130 mm × 50 mm. Because drivers differ in height, the eyebox needs to allow a vertical movement range of about ±50 mm. In this application, the human eye can see a clear HUD virtual image within the eyebox region. Referring to the above FIG. 1a, if the human eye is aligned with the center of the eyebox, a complete and clear HUD virtual image is obtained. As the eye moves left/right or up/down, at some point in each direction the image degrades until it becomes unacceptable, i.e., the eye has moved beyond the eyebox. In the region beyond the eyebox, problems such as image distortion, wrong colors, or even no image at all may appear.
Based on the above, FIG. 3a is a schematic architecture diagram of a system to which this application can be applied. As shown in FIG. 3a, the system may include a target, a vehicle, a photographing device, and a fixing assembly, where the vehicle includes an AR-HUD; for the specific structure of the AR-HUD, see FIG. 3b or FIG. 3c.

The HUD virtual image produced by the AR-HUD can be projected into the driver's forward field of view. The main principle of the AR-HUD is to use several curved or planar mirrors to magnify the HUD virtual image produced by the picture generation unit (PGU) and reflect it to a certain position outside the vehicle, that is, into the driver's forward field of view (the eyebox region), thereby presenting to the driver an image at a certain distance down the road (for example, 2 to 20 m). The actual position of the HUD virtual image is determined by the HUD's optical system. In theory, the navigation lane lines and related warning information projected by the AR HUD should fit the actual road as closely as possible, ideally without error. In practice, given current technology, real road conditions, and the navigation needs of a moving vehicle, the imaging distance of the AR HUD reaches 7.5 m or more, so that the HUD virtual image can be superimposed on objects or the real road surface to create an augmented reality effect: the driver obtains the prompt information while observing the real environment, and no visual blind zone remains. It should be understood that if the AR-HUD only displays information such as vehicle speed and prompts, the position of the HUD virtual image does not need much attention; but if navigation or advanced driving assistant system (ADAS) information is involved, the precise position of the HUD virtual image must be obtained.

The photographing device is arranged in the eyebox region, where the eyebox range is usually about 10 cm. The photographing device may be, for example, a camera. The system may include one photographing device or two photographing devices, where the two photographing devices are located at different positions and are both arranged within the eyebox region.

The fixing assembly is used to fix the photographing device at a certain position. The fixing assembly may be, for example, a robot arm or a slide rail. With a robot arm or a slide rail, one photographing device can be fixed at different positions, see FIG. 3b; alternatively, two fixing assemblies may be included, each fixing one photographing device at one position, so that two photographing devices are fixed at two positions, see FIG. 3c.

The target may include at least 6 target points whose coordinates in the vehicle body coordinate system are known, and the HUD virtual image can be calibrated with the target. The distance between the target and the vehicle can be determined according to the specific scenario. It should be understood that the shape of the target may be circular, square, or another regular or irregular shape; the circular targets in FIG. 3a to FIG. 3c are only illustrative.

In a possible implementation, the system architectures of FIG. 3a, FIG. 3b, and FIG. 3c can be applied to the scenario of HUD virtual image calibration on a vehicle-mounted AR-HUD production line; see FIG. 3d.

FIG. 3d shows a scenario to which this application can be applied. The scenario may include test equipment and a vehicle. When the HUD virtual image of the vehicle-mounted AR-HUD needs to be calibrated, the test equipment can be connected to the vehicle through the on-board diagnostic system (OBD) port on the vehicle; for example, the test equipment is plugged into the vehicle's OBD port, enabling communication between the test equipment and the vehicle. The OBD is usually installed in the vehicle and can record vehicle performance information in real time; the interface through which the OBD communicates with the test equipment may be called the OBD port.

Test equipment is a professional instrument or system dedicated to vehicle testing; that is, it can be used to obtain vehicle information, for example to test vehicle performance and obtain vehicle performance information (such as the imaging parameters of the HUD virtual image). The test equipment can test the vehicle through developed test software; in other words, any device with test software installed can be understood as test equipment, for example a personal computer (PC), a tablet, or special-purpose equipment with test software installed, where special-purpose equipment includes, for example, a diagnostics tester (DT), which may also be called a tester, a vehicle diagnostic instrument, or a host instrument. Further, the test equipment can also present various kinds of test information through a graphical interface.
Based on the above, FIG. 4 exemplarily shows an image calibration method provided by this application. The test equipment in this method may be the test equipment in FIG. 3d above, and the AR-HUD may be the AR-HUD in any of the embodiments of FIG. 3a to FIG. 3d. The method includes the following steps.

Step 401: the test equipment acquires the coordinates (x_T, y_T, z_T) of the target points on the target in the vehicle body coordinate system.

Step 401 is an optional step.

Here, the coordinates of a target point on the target in the vehicle body coordinate system are (x_T, y_T, z_T). The target includes at least 6 target points, at least 3 of which are not collinear. In other words, the target is a target that has already been calibrated.

When the target includes 6 target points, T ranges from 1 to 6. Refer to FIG. 5, a schematic diagram of a target provided by this application; the target includes 6 target points, whose coordinates in the vehicle body coordinate system are (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3), (x_4, y_4, z_4), (x_5, y_5, z_5), (x_6, y_6, z_6).

Step 402: the test equipment sends a lighting instruction to the AR-HUD. Correspondingly, the AR-HUD receives the lighting instruction from the test equipment and lights up according to it, so that the AR-HUD displays the HUD virtual image.

Step 402 is an optional step.

The HUD virtual image may be displayed at a certain position in front of the driver. This HUD virtual image may be called a calibration image or a test image.

The HUD virtual image includes n reference points, where n is an integer greater than or equal to 2. With reference to FIG. 6, taking n = 11 as an example, the 11 reference points are points on a checkerboard and may include C0, C1, C2, C3, C4, E1, E2, E3, E4, E5, E6. C0 is the center point of the HUD virtual image; C1, C2, C3, C4 are four points above, below, left of, and right of C0, all at equal distances from C0. Of course, the distances from C0 may also be unequal; the specific distance may depend on the size of the HUD virtual image, for example a quarter of the length and of the width of the HUD virtual image respectively. E1, E2, E5, E6 are the four vertices of the edge of the HUD virtual image, and E3, E4 are the center points of the vertical edges. In other words, C0, C1, C2, C3, C4 may represent the central region, and E1, E2, E3, E4, E5, E6 may represent the edge region.

With reference to FIG. 3d above, the test equipment may send the instruction to the HUD through the OBD port, and the HUD lights up according to the instruction to display the HUD virtual image.

It should be noted that the size of the HUD virtual image is related to the type of the AR-HUD, so the number of reference points on the HUD virtual image also differs between AR-HUDs. Usually, the number of reference points on the HUD virtual image can be set according to the requirements given at the factory; for example, if the spacing between reference points is required to be less than 0.5 degrees with uniform distribution, the minimum number of reference points needed on the HUD virtual image can be determined.

It should also be noted that there is no fixed order between step 401 and step 402: step 401 may be performed before step 402, or step 402 may be performed before step 401.

Step 403: the test equipment acquires the first image of the target and the third image of the HUD virtual image displayed by the HUD.

The first image is obtained by the photographing device at the first position photographing the target, and the third image is obtained by the photographing device at the first position photographing the HUD virtual image.

With reference to FIG. 3a to FIG. 3d above, the photographing device may transmit the captured first image and third image to the test equipment (for example, over a network).

Step 404: the test equipment acquires the second image of the target and the fourth image of the HUD virtual image.

The second image is obtained by the photographing device at the second position photographing the target, and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image.

With reference to FIG. 3a, FIG. 3b, or FIG. 3c above: if the system includes two photographing devices, the target and the HUD virtual image are photographed by two photographing devices, one of which may be arranged at the first position and the other at the second position. The photographing device at the first position photographs the target to obtain the first image and photographs the HUD virtual image to obtain the third image; the photographing device at the second position photographs the target to obtain the second image and photographs the HUD virtual image to obtain the fourth image. It should be understood that the two photographing devices may shoot simultaneously, that is, the first and second images may be captured at the same time, and the third and fourth images may also be captured at the same time; of course, the two photographing devices may also shoot at different times, which is not limited in this application.

If the system includes one photographing device, the target and the HUD virtual image are photographed by one photographing device: the first and third images may be captured by the photographing device at the first position, and the second and fourth images at the second position. Specifically, the photographing device may be moved to the first position or the second position by, for example, a robot arm or a slide rail; the movement may be left-right, front-back, up-down, and so on.

With reference to FIG. 3a to FIG. 3d above, the photographing device may transmit the captured second image and fourth image to the test equipment (for example, over a network).

It should be noted that both the first position and the second position are within the eyebox region. Usually, the driver's eyes can see a clear HUD virtual image within the eyebox region; beyond the eyebox region, the driver cannot see the relevant image, or the image appears severely distorted. Exemplarily, the distance between the first position and the second position is less than 10 cm.

Step 405: the test equipment determines the first pixel coordinates (u_T, v_T) of each target point included on the first image.

Here, the first pixel coordinates of each target point are the coordinates of each target point on the first image in the image coordinate system.

In a possible implementation, the target points on the first image are represented in a checkerboard manner. Using image processing algorithms, the straight-line features and black-and-white features of the target-point region are identified, parallel lines are detected, and the target points are determined from the intersections. Because the positional relationships between the target points are known (that is, the distances between target points are known), the relationship between each target point and the intersections in the first image can be inferred, and the first pixel coordinates of each target point included on the first image can then be determined.
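As an illustration of the checkerboard detection described in step 405, the sketch below uses OpenCV corner detection. The file name and the assumption that the six target points form a 3×2 grid of inner checker corners are hypothetical; the patent describes the line/intersection detection generically and does not prescribe this library:

```python
import cv2

img = cv2.imread("first_image.png")                      # hypothetical path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (3, 2)) # assumed 3x2 grid
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # 'corners' now holds the first pixel coordinates (u_T, v_T).
```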
Step 406: the test equipment determines the first extrinsic parameter matrix [R1|t1] of the photographing device at the first position according to the coordinates (x_T, y_T, z_T) of each target point in the vehicle body coordinate system, the first pixel coordinates (u_T, v_T) of each target point on the first image, and the third coordinate conversion relationship.

Here, the third coordinate conversion relationship is the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.

Further, the pixel coordinates (u, v), the camera's intrinsic matrix, and the coordinates (X_c, Y_c, Z_c) in the camera coordinate system satisfy a first relationship, Formula 1; and the coordinates (X_c, Y_c, Z_c) in the camera coordinate system, the camera's extrinsic parameter matrix, and the coordinates (x_T, y_T, z_T) in the vehicle body coordinate system satisfy a second relationship, Formula 2:

Formula 1: Z_c · [u, v, 1]^T = N · [X_c, Y_c, Z_c]^T, with N = [[f/dx, 0, u0], [0, f/dy, v0], [0, 0, 1]]

Formula 2: [X_c, Y_c, Z_c]^T = R · [x_T, y_T, z_T]^T + t

where (dx, dy) is the pixel size in the photographing device's pixel array, (u0, v0) are the center coordinates of the pixel array, N is the intrinsic matrix of the photographing device, f is the focal length when the photographing device captures the image, and Z_c expresses the distance relationship between the photographed scene and the photographing device and is a known quantity.
In the embodiments of this application, unless otherwise stated or in case of logical conflict, the terms and/or descriptions of different embodiments are consistent and may be cited by one another, and the technical features of different embodiments may be combined to form new embodiments according to their inherent logical relationships.

In this application, "uniform" does not mean absolutely uniform, "vertical" does not mean absolutely vertical, and "horizontal" does not mean absolutely horizontal; a certain engineering tolerance is allowed in each case. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that A and B both exist, or that B exists alone, where A and B may be singular or plural. In the text of this application, the character "/" generally indicates an "or" relationship between the associated objects; in the formulas of this application, the character "/" indicates a "division" relationship between the associated objects.

It can be understood that the various numerical labels involved in this application are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of this application. The sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic. The terms "first", "second", and similar expressions are used to distinguish similar objects and need not describe a particular order or sequence. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion, for example, the inclusion of a series of steps or units; a method, system, product, or device is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such process, method, product, or device.

Although this application has been described with reference to specific features and the embodiments thereof, it is apparent that various modifications and combinations can be made without departing from the spirit and scope of this application. Correspondingly, the specification and the accompanying drawings are merely exemplary descriptions of the solution defined by the appended claims and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of this application.

Obviously, a person skilled in the art can make various changes and modifications to this application without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the embodiments of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to include these changes and variations.

Claims (22)

  1. An image calibration method, comprising:
    acquiring a first image of a target and a third image of a HUD virtual image displayed by a head-up display HUD, wherein the first image is obtained by a photographing device at a first position photographing the target, and the third image is obtained by the photographing device at the first position photographing the HUD virtual image;
    acquiring a second image of the target and a fourth image of the HUD virtual image, wherein the second image is obtained by a photographing device at a second position photographing the target, and the fourth image is obtained by the photographing device at the second position photographing the HUD virtual image;
    determining a first extrinsic parameter matrix of the photographing device at the first position according to the first image and coordinates of the target in a vehicle body coordinate system; determining a second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system;
    determining coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image; and
    determining imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.
  2. The method according to claim 1, wherein the HUD virtual image comprises n reference points, n being an integer greater than 1; and
    determining the imaging parameters of the HUD virtual image according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image comprises:
    determining third pixel coordinates of the n reference points on the third image and fourth pixel coordinates of the n reference points on the fourth image respectively;
    determining a first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, wherein the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system;
    determining a second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, wherein the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and
    determining the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system according to the first coordinate conversion relationship and the second coordinate conversion relationship.
  3. The method according to claim 1 or 2, wherein the imaging parameters of the HUD virtual image comprise any one or more of the following:
    virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, or rotational deformation.
  4. The method according to claim 2 or 3, wherein the imaging parameters comprise the virtual image distance; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining an average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image, the x coordinate being the forward or backward direction of the vehicle;
    determining the x coordinate of the center of an eyebox in the vehicle body coordinate system; and
    determining, as the virtual image distance, the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system.
  5. The method according to claim 4, wherein the imaging parameters further comprise the horizontal field of view; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining the length of the HUD virtual image in the horizontal direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; and
    determining the horizontal field of view according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.
  6. The method according to claim 4 or 5, wherein the imaging parameters further comprise the vertical field of view; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining the length of the HUD virtual image in the vertical direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; and
    determining the vertical field of view according to the length of the HUD virtual image in the vertical direction and the virtual image distance.
  7. The method according to any one of claims 3 to 6, wherein the imaging parameters comprise the center position; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining, as the center position of the HUD virtual image, the coordinates in the vehicle body coordinate system of a center reference point among the n reference points of the HUD virtual image.
  8. The method according to any one of claims 3 to 7, wherein the imaging parameters comprise the distortion rate; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining a distortion rate of a first reference point, the first reference point being at least one of the n reference points of the HUD virtual image; and
    determining the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.
  9. The method according to claim 8, wherein n is an integer greater than 5; and
    determining the distortion rate of the first reference point comprises:
    determining an actual distance between the first reference point and a center reference point;
    determining a predicted distance of the first reference point according to the center reference point and at least 4 reference points around the center reference point; and
    determining the distortion rate of the first reference point according to the actual distance and the predicted distance.
  10. The method according to any one of claims 2 to 9, wherein the imaging parameters comprise the rotational deformation; and
    determining the imaging parameters of the HUD virtual image according to the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system comprises:
    determining the z coordinate of a second reference point in the vehicle body coordinate system, the y coordinate of the second reference point in the vehicle body coordinate system, the z coordinate of a third reference point in the vehicle body coordinate system, and the y coordinate of the third reference point in the vehicle body coordinate system, wherein the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and
    determining the rotational deformation according to the z coordinate of the second reference point, the y coordinate of the second reference point, the z coordinate of the third reference point, and the y coordinate of the third reference point.
  11. The method according to any one of claims 2 to 10, wherein determining the first extrinsic parameter matrix of the photographing device at the first position according to the first image and the coordinates of the target in the vehicle body coordinate system comprises:
    determining first pixel coordinates of each target point on the first image; and
    determining the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a third coordinate conversion relationship, the third coordinate conversion relationship being the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system;
    and wherein determining the second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system comprises:
    determining second pixel coordinates of each target point on the second image; and
    determining the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a fourth coordinate conversion relationship, the fourth coordinate conversion relationship being the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
  12. An image calibration apparatus, comprising a transceiver module and a processing module, wherein:
    the transceiver module is configured to acquire a first image of a target and a third image of a HUD virtual image displayed by a head-up display HUD, the first image being obtained by a photographing device at a first position photographing the target and the third image being obtained by the photographing device at the first position photographing the HUD virtual image; and to acquire a second image of the target and a fourth image of the HUD virtual image, the second image being obtained by a photographing device at a second position photographing the target and the fourth image being obtained by the photographing device at the second position photographing the HUD virtual image; and
    the processing module is configured to determine a first extrinsic parameter matrix of the photographing device at the first position according to the first image and coordinates of the target in a vehicle body coordinate system; determine a second extrinsic parameter matrix of the photographing device at the second position according to the second image and the coordinates of the target in the vehicle body coordinate system; determine coordinates of the HUD virtual image in the vehicle body coordinate system according to the first extrinsic parameter matrix, the third image, the second extrinsic parameter matrix, and the fourth image; and determine imaging parameters of the HUD virtual image according to the coordinates of the HUD virtual image in the vehicle body coordinate system.
  13. The apparatus according to claim 12, wherein the HUD virtual image comprises n reference points, n being an integer greater than 1; and
    the processing module is specifically configured to:
    determine third pixel coordinates of the n reference points on the third image and fourth pixel coordinates of the n reference points on the fourth image respectively;
    determine a first coordinate conversion relationship according to the third pixel coordinates of the n reference points and the first extrinsic parameter matrix, wherein the first coordinate conversion relationship is the relationship between the third pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system;
    determine a second coordinate conversion relationship according to the fourth pixel coordinates of the n reference points and the second extrinsic parameter matrix, wherein the second coordinate conversion relationship is the relationship between the fourth pixel coordinates of the n reference points and the coordinates of the n reference points in the vehicle body coordinate system; and
    determine the coordinates of the n reference points of the HUD virtual image in the vehicle body coordinate system according to the first coordinate conversion relationship and the second coordinate conversion relationship.
  14. The apparatus according to claim 12 or 13, wherein the imaging parameters of the HUD virtual image comprise any one or more of the following:
    virtual image distance (VID), horizontal field of view, vertical field of view, center position, distortion rate, or rotational deformation.
  15. The apparatus according to claim 13 or 14, wherein the imaging parameters comprise the virtual image distance; and
    the processing module is specifically configured to:
    determine an average of the x coordinates, in the vehicle body coordinate system, of at least two of the n reference points of the HUD virtual image, the x coordinate being the forward or backward direction of the vehicle;
    determine the x coordinate of the center of an eyebox in the vehicle body coordinate system; and
    determine, as the virtual image distance, the absolute value of the difference between the average and the x coordinate of the eyebox center position in the vehicle body coordinate system.
  16. The apparatus according to claim 15, wherein the imaging parameters further comprise the horizontal field of view; and
    the processing module is specifically configured to:
    determine the length of the HUD virtual image in the horizontal direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same horizontal direction; and
    determine the horizontal field of view according to the length of the HUD virtual image in the horizontal direction and the virtual image distance.
  17. The apparatus according to claim 15 or 16, wherein the imaging parameters further comprise the vertical field of view; and
    the processing module is specifically configured to:
    determine the length of the HUD virtual image in the vertical direction according to the coordinates, in the vehicle body coordinate system, of at least two of the n reference points located in the same vertical direction; and
    determine the vertical field of view according to the length of the HUD virtual image in the vertical direction and the virtual image distance.
  18. The apparatus according to any one of claims 14 to 17, wherein the imaging parameters comprise the center position; and
    the processing module is specifically configured to:
    determine, as the center position of the HUD virtual image, the coordinates in the vehicle body coordinate system of a center reference point among the n reference points of the HUD virtual image.
  19. The apparatus according to any one of claims 14 to 18, wherein the imaging parameters comprise the distortion rate; and
    the processing module is specifically configured to:
    determine a distortion rate of a first reference point, the first reference point being at least one of the n reference points of the HUD virtual image; and
    determine the distortion rate of the HUD virtual image according to the distortion rate of the first reference point.
  20. The apparatus according to claim 19, wherein n is an integer greater than 5; and
    the processing module is specifically configured to:
    determine an actual distance between the first reference point and a center reference point;
    determine a predicted distance of the first reference point according to the center reference point and at least 4 reference points around the center reference point; and
    determine the distortion rate of the first reference point according to the actual distance and the predicted distance.
  21. The apparatus according to any one of claims 13 to 20, wherein the imaging parameters comprise the rotational deformation; and
    the processing module is specifically configured to:
    determine the z coordinate of a second reference point in the vehicle body coordinate system, the y coordinate of the second reference point in the vehicle body coordinate system, the z coordinate of a third reference point in the vehicle body coordinate system, and the y coordinate of the third reference point in the vehicle body coordinate system, wherein the second reference point and the third reference point are two of the n reference points in the same horizontal direction; and
    determine the rotational deformation according to the z coordinate of the second reference point, the y coordinate of the second reference point, the z coordinate of the third reference point, and the y coordinate of the third reference point.
  22. The apparatus according to any one of claims 13 to 21, wherein the processing module is specifically configured to:
    determine first pixel coordinates of each target point on the first image;
    determine the first extrinsic parameter matrix according to the first pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a third coordinate conversion relationship, the third coordinate conversion relationship being the relationship between the first pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system;
    determine second pixel coordinates of each target point on the second image; and
    determine the second extrinsic parameter matrix according to the second pixel coordinates of each target point, the coordinates of each target point in the vehicle body coordinate system, and a fourth coordinate conversion relationship, the fourth coordinate conversion relationship being the relationship between the second pixel coordinates of each target point and the coordinates of each target point in the vehicle body coordinate system.
PCT/CN2020/125535 2020-10-30 2020-10-30 一种图像标定方法及装置 WO2022088103A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/125535 WO2022088103A1 (zh) 2020-10-30 2020-10-30 一种图像标定方法及装置
CN202080004865.9A CN112655024B (zh) 2020-10-30 2020-10-30 一种图像标定方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/125535 WO2022088103A1 (zh) 2020-10-30 2020-10-30 一种图像标定方法及装置

Publications (1)

Publication Number Publication Date
WO2022088103A1 true WO2022088103A1 (zh) 2022-05-05

Family

ID=75368404

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/125535 WO2022088103A1 (zh) 2020-10-30 2020-10-30 一种图像标定方法及装置

Country Status (2)

Country Link
CN (1) CN112655024B (zh)
WO (1) WO2022088103A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115802159A (zh) * 2023-02-01 2023-03-14 北京蓝色星际科技股份有限公司 一种信息显示方法、装置、电子设备及存储介质
CN116051647A (zh) * 2022-08-08 2023-05-02 荣耀终端有限公司 一种相机标定方法和电子设备
CN117073988A (zh) * 2023-08-18 2023-11-17 交通运输部公路科学研究所 抬头显示虚像距离测量***及方法、电子设备
WO2023226403A1 (zh) * 2022-05-27 2023-11-30 华为技术有限公司 标定板以及标定控制设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240592A (zh) * 2021-04-14 2021-08-10 重庆利龙科技产业(集团)有限公司 基于ar-hud动态眼位下计算虚像平面的畸变矫正方法
CN113034618A (zh) * 2021-04-20 2021-06-25 延锋伟世通汽车电子有限公司 一种汽车抬头显示器成像距离测量方法及***
CN113256739B (zh) * 2021-06-28 2021-10-19 所托(杭州)汽车智能设备有限公司 车载bsd摄像头的自标定方法、设备和存储介质
CN114155300A (zh) * 2021-10-29 2022-03-08 重庆利龙科技产业(集团)有限公司 一种车载hud***投影效果检测方法及装置
CN117033862A (zh) * 2023-10-08 2023-11-10 西安道达天际信息技术有限公司 地理坐标转换为ar坐标的转换方法、***及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169612A1 (en) * 2015-12-15 2017-06-15 N.S. International, LTD Augmented reality alignment system and method
CN108989794A (zh) * 2018-08-01 2018-12-11 上海玮舟微电子科技有限公司 基于抬头显示***的虚像信息测量方法及***
CN109859155A (zh) * 2017-11-30 2019-06-07 京东方科技集团股份有限公司 影像畸变检测方法和***
CN109884793A (zh) * 2017-12-06 2019-06-14 三星电子株式会社 用于估计虚拟屏幕的参数的方法和设备
CN111147834A (zh) * 2019-12-31 2020-05-12 深圳疆程技术有限公司 一种基于增强现实抬头显示的虚拟图像标定方法

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2608153A1 (en) * 2011-12-21 2013-06-26 Harman Becker Automotive Systems GmbH Method and system for playing an augmented reality game in a motor vehicle
JP6040564B2 (ja) * 2012-05-08 2016-12-07 ソニー株式会社 Image processing apparatus, projection control method, and program
KR101484170B1 (ko) * 2013-05-06 2015-01-21 주식회사 이미지넥스트 HUD image evaluation system and evaluation method thereof
JP2017157093A (ja) * 2016-03-03 2017-09-07 矢崎総業株式会社 Vehicle display device
CN106127714B (zh) * 2016-07-01 2019-08-20 南京睿悦信息技术有限公司 Method for measuring distortion parameters of a virtual reality head-mounted display device
CN110023817B (zh) * 2017-02-15 2021-08-10 麦克赛尔株式会社 Head-up display device
WO2019097918A1 (ja) * 2017-11-14 2019-05-23 マクセル株式会社 Head-up display device and display control method thereof
CN108399640A (zh) * 2018-03-07 2018-08-14 中国工程物理研究院机械制造工艺研究所 Mirror relative pose measurement method based on camera calibration
CN207894591U (zh) * 2018-03-12 2018-09-21 福耀集团(上海)汽车玻璃有限公司 HUD windshield glass inspection device
CN110874135B (zh) * 2018-09-03 2021-12-21 广东虚拟现实科技有限公司 Optical distortion correction method and apparatus, terminal device, and storage medium
CN109472829B (zh) * 2018-09-04 2022-10-21 顺丰科技有限公司 Object positioning method, apparatus, device, and storage medium
KR20200057929A (ko) * 2018-11-19 2020-05-27 주식회사 스튜디오매크로그래프 Rectification method and computer program for stereo images captured by calibrated cameras
CN109712194B (зh) * 2018-12-10 2021-09-24 深圳开阳电子股份有限公司 Vehicle-mounted surround-view system, stereo calibration method thereof, and computer-readable storage medium
DE102019202512A1 (de) * 2019-01-30 2020-07-30 Siemens Aktiengesellschaft Method and arrangement for outputting a HUD on an HMD
CN111508027B (zh) * 2019-01-31 2023-10-20 杭州海康威视数字技术股份有限公司 Method and apparatus for calibrating camera extrinsic parameters
CN111476104B (zh) * 2020-03-17 2022-07-01 重庆邮电大学 AR-HUD image distortion correction method, apparatus, and system under dynamic eye positions
CN111443490B (zh) * 2020-04-15 2022-11-18 杭州赶梦科技有限公司 Virtual image display area adjustment method for an AR HUD

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169612A1 (en) * 2015-12-15 2017-06-15 N.S. International, LTD Augmented reality alignment system and method
CN109859155A (zh) * 2017-11-30 2019-06-07 京东方科技集团股份有限公司 Image distortion detection method and system
CN109884793A (zh) * 2017-12-06 2019-06-14 三星电子株式会社 Method and device for estimating parameters of a virtual screen
CN108989794A (zh) * 2018-08-01 2018-12-11 上海玮舟微电子科技有限公司 Virtual image information measurement method and system based on a head-up display system
CN111147834A (zh) * 2019-12-31 2020-05-12 深圳疆程技术有限公司 Virtual image calibration method based on augmented reality head-up display

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023226403A1 (zh) * 2022-05-27 2023-11-30 华为技术有限公司 Calibration board and calibration control device
CN116051647A (zh) * 2022-08-08 2023-05-02 荣耀终端有限公司 Camera calibration method and electronic device
CN115802159A (зh) * 2023-02-01 2023-03-14 北京蓝色星际科技股份有限公司 Information display method and apparatus, electronic device, and storage medium
CN115802159B (zh) * 2023-02-01 2023-04-28 北京蓝色星际科技股份有限公司 Information display method and apparatus, electronic device, and storage medium
CN117073988A (zh) * 2023-08-18 2023-11-17 交通运输部公路科学研究所 Head-up display virtual image distance measurement system and method, and electronic device
CN117073988B (zh) * 2023-08-18 2024-06-04 交通运输部公路科学研究所 Head-up display virtual image distance measurement system and method, and electronic device

Also Published As

Publication number Publication date
CN112655024A (zh) 2021-04-13
CN112655024B (zh) 2022-04-22

Similar Documents

Publication Publication Date Title
WO2022088103A1 (zh) Image calibration method and apparatus
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
JP5455124B2 (ja) Camera attitude parameter estimation device
US10085011B2 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US20170127045A1 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
JP6518952B2 (ja) Position adjustment method for a vehicle display device
US7583307B2 (en) Autostereoscopic display
US20180184077A1 (en) Image processing apparatus, method, and storage medium
KR20160116075A (ko) Image processing apparatus having an automatic correction function for images acquired from a camera, and method thereof
CN107113376A (zh) Image processing method, apparatus, and camera
US20200294269A1 (en) Calibrating cameras and computing point projections using non-central camera model involving axial viewpoint shift
KR20200056721A (ко) Method and apparatus for measuring optical characteristics of an augmented reality device
JP4679293B2 (ja) Vehicle-mounted panoramic camera system
WO2022126430A1 (zh) Auxiliary focusing method, apparatus, and system
CN210986289U (zh) Four-lens fisheye camera and binocular fisheye camera
WO2021104308A1 (zh) Panoramic depth measurement method, four-lens fisheye camera, and binocular fisheye camera
JP2011087319A (ja) Vehicle-mounted panoramic camera system
CN109945840B (zh) Three-dimensional image capture method and system
JP6854472B2 (ja) Imaging apparatus and imaging method
CN114693807A (zh) Method and system for reconstructing mapping data between power transmission line images and point clouds
TWI793584B (zh) System and method for automatic parking mapping and localization
CN218839318U (zh) 360-degree panoramic multi-interface visual system for a loader
US20220398803A1 (en) Method for forming an image of an object, computer program product and image forming system for carrying out the method
CN115665400B (zh) Augmented reality head-up display imaging method, apparatus, device, and storage medium
JP2010004227A (ja) Imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20959264; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20959264; Country of ref document: EP; Kind code of ref document: A1