WO2020259506A1 - Method and device for determining distortion parameters of a camera - Google Patents

Method and device for determining distortion parameters of a camera

Info

Publication number
WO2020259506A1
WO2020259506A1 PCT/CN2020/097761 CN2020097761W WO2020259506A1 WO 2020259506 A1 WO2020259506 A1 WO 2020259506A1 CN 2020097761 W CN2020097761 W CN 2020097761W WO 2020259506 A1 WO2020259506 A1 WO 2020259506A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
point
depth
target object
feature point
Prior art date
Application number
PCT/CN2020/097761
Other languages
English (en)
French (fr)
Inventor
魏志方
池清华
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2020259506A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00: Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02: Testing optical properties
    • G01M11/0242: Testing optical properties by measuring geometrical properties or aberrations
    • G01M11/0257: Testing optical properties by measuring geometrical properties or aberrations by analyzing the image formed by the object to be tested
    • G01M11/0264: Testing optical properties by measuring geometrical properties or aberrations by analyzing the image formed by the object to be tested by using targets or reference patterns
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00: Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02: Testing optical properties
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details

Definitions

  • This application relates to the field of unmanned driving technology, and in particular to a method and device for determining distortion parameters of a camera.
  • unmanned vehicles can measure the distance and speed of the target object through the camera and radar, and obtain the 6D information of the target object.
  • the 6D information of the target object is the three-dimensional position information and three-dimensional speed information of the target object in the vehicle body coordinate system or the world coordinate system. Then, the unmanned vehicle can determine whether it needs to decelerate according to the 6D information of the target object to avoid collision accidents. Because the pictures taken by the camera have large distortion, the 6D information of the target object measured by the unmanned vehicle through the camera will have a large error.
  • the camera is calibrated mainly by Zhang Zhengyou's calibration method to obtain the distortion parameters of the camera, so as to remove or compensate the distortion of the picture taken by the camera, and then obtain accurate 6D information of the target object.
  • the Zhang Zhengyou calibration method is a method to determine the distortion parameters of the camera when the camera is offline. First, the technician places a piece of black and white square grid paper as a template on a certain plane of the space. Then, the unmanned vehicle uses the camera to take images of the template in different directions. Afterwards, the unmanned vehicle determines the distortion parameters of the camera according to the two-dimensional coordinates of the feature points corresponding to the template in each image and the three-dimensional coordinates of the template in the world coordinate system.
  • when the state of the camera changes while the unmanned vehicle is driving (for example, the height of the camera above the ground changes, or the camera is deflected), the distortion parameters of the camera will also change accordingly. If the distortion parameters of the camera obtained by the Zhang Zhengyou calibration method while the camera was offline are still used to correct the pictures taken by the camera, the 6D information of the target object measured by the unmanned vehicle through the camera will have a large error.
  • the embodiments of the present application provide a method and device for determining the distortion parameters of a camera.
  • when the height information or angle information of the camera changes while the unmanned vehicle is driving, the camera can determine the distortion parameters of the camera in real time based on the point cloud data collected by the radar and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera.
  • the technical scheme is as follows:
  • a method for determining the distortion parameters of the camera including:
  • the distortion parameter of the camera is determined in real time according to the distortion parameter algorithm.
  • the acquiring point cloud data collected by radar includes:
  • when the height information or angle information of the camera changes, the point cloud data collected by the radar is acquired.
  • the acquiring point cloud data collected by radar includes:
  • the point cloud data collected by the radar is acquired periodically.
  • the determining the scan point corresponding to the first feature point includes:
  • a first candidate scan point set is determined according to the first feature point, the first preset plane distance threshold and the first depth distance probability value, the first candidate scan point set includes a first scan point, the plane distance between the first scan point and the first feature point is less than the first preset plane distance threshold, and the first depth distance probability value is used to remove background scan points;
  • the three scan points in the first candidate scan point set that form the triangle with the largest area are determined as the scan points corresponding to the first feature point.
  • the real-time determination of the distortion parameters of the camera according to the distortion parameter algorithm includes:
  • an equation set is constructed; the equation set includes N n-th order equations, and both N and n are integers greater than or equal to 1.
  • the n-th order equation is: Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
  • where Y_i_true is the true depth of the i-th first feature point,
  • Y_i_measured is the measured depth of the i-th first feature point,
  • and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera
  • it also includes:
  • the target object and the measurement depth corresponding to the target object are determined, and the true depth of the target object is obtained according to an n-th order equation: Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
  • where Z_true is the true depth of the target object,
  • Z_measured is the measured depth of the target object,
  • a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
  • the radar is millimeter wave radar or lidar.
  • the determining the scan point corresponding to the first feature point includes:
  • the scan point corresponding to the first feature point is determined according to the first feature point and the second preset plane distance threshold, and the plane distance between the scan point corresponding to the first feature point and the first feature point is less than the second preset plane distance threshold;
  • At least N first feature points are selected from the image data, the at least N first feature points correspond to at least N scanning points, and the N is an integer greater than or equal to 3.
  • the real-time determination of the distortion parameters of the camera according to the distortion parameter algorithm includes:
  • the distortion parameters are obtained by Zhang Zhengyou calibration method according to the coordinates of the at least N first feature points and the coordinates of the at least N scanning points.
  • it also includes:
  • the radar is a lidar.
  • it also includes:
  • a device for determining distortion parameters of a camera including:
  • the calibration module is used to calibrate the radar coordinate system and the camera coordinate system
  • the first acquisition module is configured to acquire point cloud data collected by radar, where the point cloud data includes at least one scanning point;
  • the second acquisition module is configured to acquire image data collected by the camera, where the image data includes at least one first feature point;
  • a first determining module configured to determine a scanning point corresponding to the first feature point
  • the second determining module is used to determine the distortion parameters of the camera in real time according to the distortion parameter algorithm.
  • the first obtaining module is specifically configured to:
  • acquire the point cloud data collected by the radar when the height information or angle information of the camera changes.
  • the first obtaining module is specifically configured to:
  • periodically acquire the point cloud data collected by the radar.
  • the first determining module is specifically configured to:
  • a first candidate scan point set is determined according to the first feature point, the first preset plane distance threshold and the first depth distance probability value, the first candidate scan point set includes a first scan point, the plane distance between the first scan point and the first feature point is less than the first preset plane distance threshold, and the first depth distance probability value is used to remove background scan points;
  • the three scan points in the first candidate scan point set that form the triangle with the largest area are determined as the scan points corresponding to the first feature point.
  • the second determining module is specifically configured to:
  • an equation set is constructed; the equation set includes N n-th order equations, and both N and n are integers greater than or equal to 1.
  • the n-th order equation is: Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
  • where Y_i_true is the true depth of the i-th first feature point,
  • Y_i_measured is the measured depth of the i-th first feature point,
  • and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera
  • it also includes:
  • the third determining module is used to determine the target object and the measurement depth corresponding to the target object, and to obtain the true depth of the target object according to an n-th order equation, the n-th order equation being: Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
  • where Z_true is the true depth of the target object,
  • Z_measured is the measured depth of the target object,
  • a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
  • the radar is millimeter wave radar or lidar.
  • the first determining module is specifically configured to:
  • the scan point corresponding to the first feature point is determined according to the first feature point and the second preset plane distance threshold, and the plane distance between the scan point corresponding to the first feature point and the first feature point is less than the second preset plane distance threshold;
  • At least N first feature points are selected from the image data, the at least N first feature points correspond to at least N scanning points, and the N is an integer greater than or equal to 1.
  • the second determining module is specifically configured to:
  • the distortion parameters are obtained by Zhang Zhengyou calibration method according to the coordinates of the at least N first feature points and the coordinates of the at least N scanning points.
  • it also includes:
  • the fourth determining module is used to determine the true coordinates of the target object in the image coordinate system through the distortion parameters
  • the fifth determining module is used to determine the true depth of the target object in the camera coordinate system through a monocular range finding formula.
  • the radar is a lidar.
  • the device further includes:
  • a sixth determining module configured to determine the first 6D information of the target object according to the distortion parameter of the camera and the image data of the target object collected by the camera;
  • the sending module is configured to send the first 6D information of the target object to the fusion module.
  • a method for obtaining the precise position of a target object including:
  • Receive a point cloud data acquisition request sent by the camera;
  • Obtain point cloud data and send the point cloud data to the camera, where the point cloud data includes at least one scanning point.
  • the method further includes:
  • the fourth aspect provides a method for obtaining the precise position of a target object, including:
  • Kalman filtering processing is performed on the first 6D information and the second 6D information to obtain target 6D information of the target object.
  • a device for determining distortion parameters of a camera including: a processor, a memory, and a communication interface; wherein the communication interface is used to communicate with other devices or a communication network, and the memory is used to store one or more programs, The one or more programs include computer-executable instructions.
  • the processor executes the computer-executable instructions stored in the memory to make the device execute the method of determining the distortion parameters of the camera as described in any one of the first aspect.
  • a system for determining distortion parameters of a camera including a camera, a radar, and the device for determining distortion parameters of a camera as described in any of the second aspect.
  • a computer-readable storage medium including a program and instructions.
  • the program or instruction runs on a computer, the method for determining a distortion parameter of a camera according to any one of the first aspect is implemented.
  • a chip system including a processor, the processor is coupled to a memory, and the memory stores program instructions; when the program instructions stored in the memory are executed by the processor, the method for determining the distortion parameters of a camera according to any one of the first aspect is implemented.
  • a computer program product containing instructions, which when the computer program product runs on a computer, causes the computer to execute the method for determining a distortion parameter of a camera according to any one of the first aspects.
  • the embodiments of the present application provide a method and device for determining distortion parameters of a camera.
  • the camera in the unmanned vehicle calibrates the radar coordinate system and the camera coordinate system.
  • the camera acquires the point cloud data collected by the radar and the image data collected by the camera.
  • the point cloud data includes at least one scanning point
  • the image data includes at least one first feature point.
  • the camera determines the scanning point corresponding to the first feature point according to the point cloud data and the image data, and determines the distortion parameter of the camera in real time according to the distortion parameter algorithm.
  • the camera can determine the distortion parameters of the camera in real time based on the point cloud data collected by the radar and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera.
  • FIG. 1 is a schematic structural diagram of an unmanned vehicle provided by an embodiment of the application
  • FIG. 2 is a flowchart of a method for determining distortion parameters of a camera provided by an embodiment of the application
  • FIG. 3A is a schematic diagram of determining a first candidate scan point of a first feature point according to an embodiment of the application
  • FIG. 3B is a schematic diagram of determining a scanning point corresponding to a first feature point according to an embodiment of the application
  • FIG. 4 is a schematic diagram of the structure of an apparatus for determining distortion parameters of a camera provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of the structure of an apparatus for determining distortion parameters of a camera provided by an embodiment of the application;
  • FIG. 6 is a schematic diagram of the structure of an apparatus for determining distortion parameters of a camera provided by an embodiment of the application;
  • FIG. 7 is a schematic structural diagram of an apparatus for determining a distortion parameter of a camera provided by an embodiment of the application.
  • the embodiment of the application provides a method for determining the distortion parameters of the camera.
  • the method can be applied to an unmanned vehicle, an assisted driving vehicle, or an intelligent driving vehicle, which is not limited in the embodiment of the application.
  • the embodiment of the present application takes the method applied to an unmanned vehicle as an example for introduction, and other situations are similar. Specifically, it can be applied to a camera in an unmanned vehicle, can also be applied to a radar in an unmanned vehicle, and can also be applied to a fusion module in an unmanned vehicle, which is not limited in the embodiment of the present application.
  • the embodiment of the present application takes the method applied to a camera in an unmanned vehicle as an example for introduction, and other situations are similar.
  • FIG. 1 is a schematic structural diagram of an unmanned vehicle provided by an embodiment of the application.
  • a camera 110, a radar 120, a fusion module 130 (not shown in the figure) and a decision-making module 140 are installed on the unmanned vehicle 100.
  • the fusion module 130 can be set in the camera 110 or the radar 120, or independently set in the unmanned vehicle; similarly, the decision module 140 can be set in the camera 110 or the radar 120, or independently set in the unmanned vehicle.
  • the camera 110 is used to collect image data in real time. Wherein, the image data includes at least one first feature point.
  • the radar 120 is used to collect point cloud data in real time. Wherein, the point cloud data includes at least one scanning point.
  • the camera 110 acquires the point cloud data collected by the radar 120 and the image data collected by itself. Then, the camera 110 maps the scanning points in the point cloud data to the image coordinate system corresponding to the image data, and determines the scanning point corresponding to the first feature point in the image data in the image coordinate system. After that, the camera 110 determines the distortion parameters of the camera 110 in real time according to the distortion parameter algorithm, the first feature point and the scanning point corresponding to the first feature point. Subsequently, the camera 110 may determine the target object in the image data through image recognition technology, and determine the real coordinates of the target object in real time according to the distortion parameters of the camera 110.
  • the camera 110 determines the position information of the target object in real time according to the monocular ranging formula and the real coordinates of the target object, and then obtains the real-time 6D information corresponding to the target object (hereinafter referred to as the first 6D information), and sends the first 6D information to the fusion module 130.
  • the radar 120 may perform clustering processing on the scan points corresponding to the target object collected in real time to obtain real-time 6D information corresponding to the target object (hereinafter referred to as second 6D information), and send the second 6D information to the fusion module 130.
  • the fusion module 130 may perform Kalman filter processing on the first 6D information and the second 6D information of the target object to obtain the target 6D information corresponding to the target object, and send the target 6D information corresponding to the target object to the decision-making module 140. After the decision-making module 140 receives the target 6D information corresponding to the target object, it can make a decision based on the target 6D information of the target object.
  • Step 201 Calibrate the radar coordinate system and the camera coordinate system.
  • the camera 110 and the radar 120 are usually installed at different positions on the unmanned vehicle. Therefore, the camera coordinate system corresponding to the camera 110 and the radar coordinate system corresponding to the radar 120 are different. In order to ensure the accuracy of the determined distortion parameters of the camera 110, before determining the distortion parameters, the radar coordinate system and the camera coordinate system must be calibrated, that is, the origin of the radar coordinate system and the origin of the camera coordinate system must be unified.
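  • The patent text does not spell out the calibration math itself. As a rough, non-authoritative sketch of what unifying the two coordinate systems enables, the Python snippet below (hypothetical function and variable names; it assumes 3-D radar points, a known radar-to-camera rotation R, translation t, and camera intrinsic matrix K, none of which are given in the source) projects radar scan points into the image coordinate system with the standard pinhole model:

```python
import numpy as np

def project_radar_points_to_image(points_radar, R, t, K):
    """Map radar scan points (N x 3, radar frame) to pixel coordinates.

    R (3x3) and t (3,) are radar-to-camera extrinsics obtained from the
    calibration step; K (3x3) is the camera intrinsic matrix.  Returns an
    (N, 2) array of pixel coordinates and the depth of each point in the
    camera frame.
    """
    points_cam = points_radar @ R.T + t          # radar frame -> camera frame
    pixels_h = points_cam @ K.T                  # pinhole projection (homogeneous)
    depths = points_cam[:, 2]
    pixels = pixels_h[:, :2] / depths[:, None]   # normalize by depth
    return pixels, depths

# Toy usage: identity extrinsics and a simple intrinsic matrix.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.2, 25.0]])
print(project_radar_points_to_image(pts, np.eye(3), np.zeros(3), K))
```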
  • Step 202 Obtain point cloud data collected by radar.
  • the point cloud data includes at least one scanning point.
  • the camera 110 can obtain the point cloud data collected by the radar 120.
  • the point cloud data includes at least one scanning point, which is a point on the surface of the object collected by the radar 120.
  • in the millimeter-wave radar coordinate system, a scanning point is represented by two-dimensional coordinates (i.e. (X, Z));
  • in the lidar coordinate system, a point is represented by three-dimensional coordinates (i.e. (X, Y, Z)); the point cloud data is the collection of the scanning points.
  • there may be multiple situations in which the camera 110 needs to determine the distortion parameters.
  • the embodiment of the present application provides two situations in which the camera 110 needs to determine the distortion parameters, which are specifically as follows:
  • in the first situation, the point cloud data collected by the radar is acquired when the height information or angle information of the camera changes. The camera 110 may periodically obtain its current height information and angle information (such as yaw angle information and pitch angle information).
  • when the current height information obtained by the camera 110 is different from the height information obtained last time (that is, the height information of the camera has changed), the camera 110 may send a point cloud data acquisition request to the radar 120.
  • after the radar 120 receives the point cloud data acquisition request, it sends a point cloud data acquisition response to the camera 110, and the point cloud data acquisition response carries the point cloud data collected by the radar 120.
  • after the camera 110 receives the point cloud data acquisition response, it can parse the point cloud data acquisition response to obtain the point cloud data collected by the radar 120 carried in it.
  • similarly, when the current angle information obtained by the camera 110 is different from the angle information obtained last time (that is, the angle information of the camera has changed), the camera 110 may send a point cloud data acquisition request to the radar 120.
  • after the radar 120 receives the point cloud data acquisition request, it sends a point cloud data acquisition response to the camera 110, and the point cloud data acquisition response carries the point cloud data collected by the radar 120.
  • after the camera 110 receives the point cloud data acquisition response, it can parse the point cloud data acquisition response to obtain the point cloud data collected by the radar 120 carried in it.
  • the second situation is to periodically obtain the point cloud data collected by the radar.
  • the camera 110 may directly send a point cloud data acquisition request to the radar 120 periodically according to a preset sampling period. After the radar 120 receives the point cloud data acquisition request, it sends a point cloud data acquisition response to the camera 110, and the point cloud data acquisition response carries the point cloud data collected by the radar 120. After the camera 110 receives the point cloud data acquisition response, it can parse the point cloud data acquisition response to obtain the point cloud data collected by the radar 120 carried in the point cloud data acquisition response.
  • Step 203 Obtain image data collected by the camera.
  • the image data includes at least one first feature point.
  • when the camera 110 determines the distortion parameters, it also needs to obtain the image data collected by itself.
  • the image data includes at least one first feature point, and the first feature point is a point where the gray value of the image changes drastically.
  • step 202 and step 203 are in no particular order.
  • the camera 110 may first perform step 202 and then perform step 203, or may first perform step 203 and then perform step 202, which is not limited in this embodiment of the application.
  • Step 204 Determine the scan point corresponding to the first feature point
  • after the camera 110 obtains the point cloud data collected by the radar 120 and the image data collected by itself, it can map each scan point in the point cloud data to the image coordinate system corresponding to the image data. Then, for each first feature point, the camera 110 may determine the scanning point corresponding to the first feature point in the image coordinate system. The camera 110 may determine the scan point corresponding to the first feature point in multiple ways; the embodiment of the present application provides two such ways, which are specifically as follows:
  • Manner 1: The camera 110 determines a first candidate scan point set according to the first feature point, the first preset plane distance threshold, and the first depth distance probability value, and determines the three scan points in the first candidate scan point set that form the triangle with the largest area as the scan points corresponding to the first feature point.
  • the first candidate scan point set includes a first scan point
  • the plane distance between the first scan point and the first feature point is less than a first preset plane distance threshold
  • the first depth distance probability value is used to remove the background scan point.
  • the camera 110 may pre-store a first preset plane distance threshold and a first depth distance probability value.
  • the first preset plane distance threshold and the first depth distance probability value can be set by a technician based on experience.
  • for each first feature point, the camera 110 may determine the scan points that lie, in the image coordinate system, within a circle centered on the first feature point with a radius equal to the first preset plane distance threshold as the first candidate scan points corresponding to that first feature point.
  • in a possible implementation, as shown in FIG. 3A, the camera 110 may instead take a rectangular neighborhood centered on the first feature point in the image coordinate system, with a preset length and width (i.e. the first preset plane distance threshold), and determine the scan points within that neighborhood as the first candidate scan points corresponding to the first feature point.
  • then, the camera 110 builds a histogram of the depth information of the first candidate scan points using a first preset interval (such as 0.3 m) as the bin width; the ordinate of the histogram is the number of first candidate scan points in each interval. The camera 110 counts the number of first candidate scan points in each interval, determines the depth value corresponding to a run of N consecutive intervals in which the first candidate scan points are very sparse (that is, intervals satisfying the first depth distance probability value), removes all the first candidate scan points behind that depth value (that is, removes the background scan points), and groups the remaining first candidate scan points into the first candidate scan point set corresponding to the first feature point.
  • after obtaining the first candidate scan point set corresponding to the first feature point, as shown in FIG. 3B, the camera 110 may further determine, in the image coordinate system, the three scan points in the set that form the triangle with the largest area as the scan points corresponding to the first feature point, as sketched below.
  • the radar is millimeter wave radar or lidar.
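  • A minimal sketch of Manner 1, assuming the scan points have already been projected into the image coordinate system and carry a depth value; the threshold, bin width and sparsity test below are illustrative placeholders rather than values from the patent, and the brute-force triangle search is only meant to show the selection rule:

```python
import numpy as np
from itertools import combinations

def associate_scan_points(feature_xy, scan_xy, scan_depth,
                          plane_dist_thresh=15.0, bin_width=0.3,
                          sparse_count=1, sparse_run=3):
    """Return indices of the three scan points associated with one feature point."""
    # 1. Candidate set: scan points whose planar distance to the feature point
    #    is below the first preset plane distance threshold.
    dists = np.linalg.norm(scan_xy - feature_xy, axis=1)
    cand = np.where(dists < plane_dist_thresh)[0]
    if cand.size < 3:
        return None

    # 2. Histogram of candidate depths; a run of near-empty bins marks the gap
    #    between foreground and background, and points behind it are discarded.
    depths = scan_depth[cand]
    bins = np.arange(depths.min(), depths.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(depths, bins=bins)
    cutoff, run = depths.max(), 0
    for i, c in enumerate(counts):
        run = run + 1 if c <= sparse_count else 0
        if run >= sparse_run:
            cutoff = edges[i + 1 - run]       # depth where the sparse run begins
            break
    cand = cand[depths <= cutoff]
    if cand.size < 3:
        return None

    # 3. Among the remaining candidates, keep the three points spanning the
    #    triangle with the largest area (brute force over all triples).
    def area(i, j, k):
        a, b, c = scan_xy[i], scan_xy[j], scan_xy[k]
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

    return max(combinations(cand, 3), key=lambda ijk: area(*ijk))
```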
  • Manner 2 Determine the scan point corresponding to the first feature point according to the first feature point and the second preset plane distance threshold. Wherein, the planar distance between the scanning point corresponding to the first feature point and the first feature point is less than the second preset planar distance threshold. At least N first feature points are selected from the image data, at least N first feature points correspond to at least N scanning points, and N is an integer greater than or equal to 3.
  • the second preset plane distance threshold may be pre-stored in the camera 110.
  • the second preset plane distance threshold can be set by a technician based on experience.
  • for each first feature point, the camera 110 may determine the scan points that lie, in the image coordinate system, within a circle centered on the first feature point with a radius equal to the second preset plane distance threshold as the scan points corresponding to that first feature point.
  • then, the camera 110 may select at least N first feature points from the first feature points, the at least N first feature points corresponding to at least N scan points, where N is an integer greater than or equal to 1.
  • the radar is a lidar.
  • Step 205 Determine the distortion parameter of the camera in real time according to the distortion parameter algorithm.
  • after the camera 110 obtains the scan points corresponding to the first feature points, the distortion parameters of the camera 110 can be determined in real time according to the distortion parameter algorithm.
  • in this way, while the unmanned vehicle is driving, the camera 110 can determine the distortion parameters in real time based on the point cloud data collected by the radar 120 and the image data collected by itself, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera 110.
  • based on the different ways in which the camera 110 determines the scan point corresponding to the first feature point, the way in which the camera 110 determines the distortion parameters of the camera 110 in real time according to the distortion parameter algorithm also differs, specifically as follows:
  • Manner 1: for Manner 1 in step 204, the processing by which the camera 110 determines the distortion parameters of the camera 110 in real time according to the distortion parameter algorithm is as follows:
  • Step 1 Determine the measurement depth of the first feature point according to the monocular range finding formula.
  • the camera 110 may determine the measurement depth of the first feature point according to the coordinates of the first feature point and the monocular ranging formula.
  • the monocular ranging formula is as follows: Y_i_measured = f · H / y
  • where Y_i_measured is the measured depth of the i-th first feature point,
  • f is the focal length of the camera 110,
  • H is the height of the camera 110 above the ground,
  • and y is the ordinate of the i-th first feature point, as illustrated in the sketch below.
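  • As a small numeric illustration of the ranging relation defined above (assuming the usual ground-plane model in which the ordinate y is measured in pixels below the principal point):

```python
def monocular_measured_depth(f_pixels, camera_height_m, y_pixels):
    """Measured depth of a ground point: depth = f * H / y."""
    return f_pixels * camera_height_m / y_pixels

# A point imaged 80 px below the principal point by an f = 800 px camera
# mounted 1.5 m above the ground is measured at 15 m.
print(monocular_measured_depth(800.0, 1.5, 80.0))  # 15.0
```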
  • Step 2 Determine the true depth of the first feature point according to the depth information of the scanning point corresponding to the first feature point.
  • the camera 110 can determine the true depth of the first feature point according to the depth information of each scan point.
  • the camera 110 can determine the average of the distances from the scan points to the camera as the true depth of the first feature point; the camera 110 can also determine the true depth of the first feature point from the coordinate information of the scan points using the least-squares method, as sketched below.
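  • A short sketch of the two options named above; the least-squares variant fits depth as a plane over the image coordinates of the associated scan points and evaluates it at the feature point, which is one plausible reading of the description rather than a form stated in the source:

```python
import numpy as np

def true_depth_mean(scan_depths):
    # Option 1: average distance of the associated scan points to the camera.
    return float(np.mean(scan_depths))

def true_depth_lstsq(scan_xy, scan_depths, feature_xy):
    # Option 2 (illustrative): fit depth = a*u + b*v + c by least squares and
    # evaluate the fit at the feature point's image coordinates.
    A = np.column_stack([scan_xy[:, 0], scan_xy[:, 1], np.ones(len(scan_xy))])
    coeff, *_ = np.linalg.lstsq(A, scan_depths, rcond=None)
    return float(coeff @ [feature_xy[0], feature_xy[1], 1.0])
```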
  • Step 3 According to the measured depth and real depth of at least N first feature points in the image data, an equation set is constructed.
  • the equation set includes N n-th order equations, where N and n are both integers greater than or equal to 1, and the n-th order equation is: Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
  • where Y_i_true is the true depth of the i-th first feature point,
  • Y_i_measured is the measured depth of the i-th first feature point,
  • and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera.
  • after the camera 110 obtains the measured depth and true depth of each first feature point, an equation set can be constructed.
  • this equation set includes N n-th order equations. Both N and n are integers greater than or equal to 1.
  • the n-th order equation is: Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
  • where Y_i_true is the true depth of the i-th first feature point,
  • Y_i_measured is the measured depth of the i-th first feature point,
  • and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera.
  • Step 4: Solve the equation set to obtain the camera distortion parameters a_0, a_1, a_2, a_3, … a_n.
  • after the camera 110 obtains the equation set, it can be further solved to obtain the distortion parameters of the camera 110; a least-squares sketch is given below.
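  • Because the N equations are linear in the unknown coefficients, one way to solve the system is ordinary least squares on a Vandermonde matrix of the measured depths. A minimal sketch, assuming the polynomial form written above (the function and test values are illustrative only):

```python
import numpy as np

def fit_distortion_parameters(measured, real, n):
    """Solve for a_0 ... a_n in real_i = a_0 + a_1*m_i + ... + a_n*m_i^n.

    measured, real: measured / true depths of the N first feature points
    (N >= n + 1 for a well-posed fit).
    """
    measured = np.asarray(measured, dtype=float)
    V = np.vander(measured, n + 1, increasing=True)   # columns: m^0 ... m^n
    a, *_ = np.linalg.lstsq(V, np.asarray(real, dtype=float), rcond=None)
    return a                                          # [a_0, a_1, ..., a_n]

# Toy example: the true depth differs from the measured depth by a quadratic bias.
m = np.array([5.0, 10.0, 20.0, 40.0])
r = 0.2 + 0.95 * m + 0.001 * m ** 2
print(fit_distortion_parameters(m, r, n=2))  # ~ [0.2, 0.95, 0.001]
```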
  • optionally, after the camera 110 obtains the distortion parameters, the true depth of the target object can be determined according to the distortion parameters.
  • the specific processing is as follows: the camera 110 determines the target object and the measurement depth corresponding to the target object, and obtains the true depth of the target object according to an n-th order equation.
  • the n-th order equation is: Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
  • where Z_true is the true depth of the target object, Z_measured is the measured depth of the target object, a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
  • the camera 110 can determine the target object and the coordinates of the target object in the image data, and determine the measurement depth of the target object according to the monocular ranging formula and the coordinates of the target object. Then, the camera 110 can substitute the distortion parameters of the camera 110 and the measured depth of the target object into the n-order equation to obtain the true depth of the target object.
  • the monocular ranging formula is: Z_measured = f · H / y
  • where Z_measured is the measured depth of the target object,
  • f is the focal length of the camera 110,
  • H is the height of the camera 110 above the ground,
  • and y is the ordinate of the target object.
  • the n-th order equation is the same as above: Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n, where Z_true is the true depth of the target object, Z_measured is the measured depth of the target object, a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1. A small evaluation sketch follows.
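  • Correcting a target's measured depth is then just an evaluation of the same polynomial; a short sketch reusing coefficients of the form fitted above:

```python
def true_depth_from_measured(a, z_measured):
    # z_true = a_0 + a_1*z + a_2*z^2 + ... + a_n*z^n
    return sum(coef * z_measured ** k for k, coef in enumerate(a))

# With a = [0.2, 0.95, 0.001], a target measured at 30 m corrects to 29.6 m.
print(true_depth_from_measured([0.2, 0.95, 0.001], 30.0))
```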
  • Manner 2: for Manner 2 in step 204, the process by which the camera 110 determines the distortion parameters of the camera 110 in real time according to the distortion parameter algorithm is: obtaining the distortion parameters by the Zhang Zhengyou calibration method according to the coordinates of the at least N first feature points and the coordinates of the at least N scan points.
  • in the camera coordinate system, according to the coordinates of the at least N first feature points and the coordinates of the at least N scan points, the distortion parameters corresponding to the camera 110 can be obtained by the Zhang Zhengyou calibration method.
  • the camera 110 can determine the true depth of the target object according to the distortion parameters.
  • the specific processing process is:
  • Step 1 Determine the true coordinates of the target object in the image coordinate system through the distortion parameters.
  • in the embodiment of the application, the distortion parameters and the coordinates of the target object can be substituted into the N-th order equation to obtain the true coordinates of the target object in the image coordinate system.
  • in the N-th order equation, x_true is the abscissa of the target object after distortion correction, x_measured is the abscissa of the target object in the image that has not undergone distortion correction, y_true is the ordinate of the target object after distortion correction, y_measured is the ordinate of the target object in the image that has not undergone distortion correction, and a_1, a_2, a_3, … a_n are the distortion parameters.
  • Step 2 Determine the true depth of the target object in the camera coordinate system through the monocular range finding formula.
  • in the embodiment of the application, after the camera 110 obtains the real coordinates of the target object, it can determine the real depth of the target object according to the real coordinates of the target object and the monocular ranging formula.
  • the process by which the camera determines the true depth of the target object from the real coordinates of the target object and the monocular ranging formula is similar to the process in step 205 by which the camera 110 determines the measured depth of the first feature point from the coordinates of the first feature point and the monocular ranging formula, so it is not repeated here; an illustrative end-to-end sketch is given below.
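  • The source does not give the explicit form of the N-th order coordinate equation, so the end-to-end sketch below is only a rough illustration under an assumed conventional radial model x_true = x·(1 + a_1·r^2 + a_2·r^4); it is not the patent's equation. The corrected ordinate is then fed to the monocular ranging relation:

```python
def undistort_point(x, y, a1, a2, cx=640.0, cy=360.0):
    """Illustrative radial correction (assumed model): shift to the principal
    point, then scale the offset by 1 + a1*r^2 + a2*r^4."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + a1 * r2 + a2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

def true_depth(y_true, f_pixels=800.0, cam_height_m=1.5, cy=360.0):
    # Monocular ranging on the corrected ordinate, measured from the principal point.
    return f_pixels * cam_height_m / (y_true - cy)

x_t, y_t = undistort_point(700.0, 420.0, a1=-1e-7, a2=0.0)
print(true_depth(y_t))   # roughly 20 m for this toy point
```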
  • the camera 110 can determine the first 6D information of the target object according to the distortion parameters of the camera and the image data of the target object collected by the camera, and send the first 6D information of the target object to the fusion module.
  • in the embodiment of the application, after the camera 110 obtains the distortion parameters, it can obtain the position information of the target object in real time according to the distortion parameters and the image data of the target object collected by the camera, thereby obtaining the real-time 6D information of the target object (i.e. the first 6D information), and send the first 6D information of the target object to the fusion module 130.
  • in this way, after the fusion module 130 receives the first 6D information of the target object sent by the camera 110 and the second 6D information of the target object sent by the radar 120, it can perform Kalman filtering on the first 6D information and the second 6D information to obtain the target 6D information of the target object.
  • the embodiment of the present application provides a method for determining the distortion parameters of the camera.
  • the camera 110 in the unmanned vehicle calibrates the radar coordinate system and the camera coordinate system.
  • the camera 110 obtains the point cloud data collected by the radar 120 and the image data collected by the camera 110.
  • the point cloud data includes at least one scanning point
  • the image data includes at least one first feature point.
  • the camera 110 determines the scanning point corresponding to the first feature point according to the point cloud data and the image data, and determines the distortion parameter of the camera in real time according to the distortion parameter algorithm.
  • in this way, when the height information or angle information of the camera changes while the unmanned vehicle is driving, the camera 110 can determine the distortion parameters of the camera in real time according to the point cloud data collected by the radar 120 and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera 110.
  • the embodiment of the present application also provides a method for obtaining the precise position of a target object.
  • the method can be applied to the radar 120 in an unmanned vehicle.
  • the specific steps are as follows:
  • Step 1 Receive a point cloud data acquisition request sent by the camera.
  • in the embodiment of the application, when the camera 110 needs to determine the distortion parameters, the camera 110 may send a point cloud data acquisition request to the radar 120.
  • Step 2 Obtain the point cloud data and send the point cloud data to the camera.
  • the point cloud data includes at least one scanning point.
  • in the embodiment of the application, after the radar 120 receives the point cloud data acquisition request sent by the camera 110, it can acquire the point cloud data collected by itself and send the point cloud data to the camera 110.
  • the point cloud data includes at least one scanning point.
  • the radar 120 may also determine the second 6D information of the target object according to the acquired point cloud data of the target object, and send the second 6D information of the target object to the fusion module.
  • in the embodiment of the application, the radar 120 performs clustering processing on the point cloud data of the target object collected in real time to obtain the real-time 6D information of the target object (that is, the second 6D information), and sends the second 6D information of the target object to the fusion module 130.
  • the embodiment of the present application also provides a method for obtaining the precise position of a target object.
  • the method can be applied to the fusion module 130 in an unmanned vehicle.
  • the specific steps are: the fusion module 130 receives the first 6D information of the target object sent by the camera and the second 6D information of the target object sent by the radar, and performs Kalman filter processing on the first 6D information and the second 6D information to obtain the target 6D information of the target object.
  • in the embodiment of the application, after the fusion module 130 receives the first 6D information of the target object sent by the camera 110 and the second 6D information of the target object sent by the radar 120, it can perform Kalman filter processing on the first 6D information and the second 6D information to obtain the target 6D information of the target object, and send the target 6D information corresponding to the target object to the decision module 140. In this way, after the decision module 140 receives the target 6D information corresponding to the target object, it can make a decision based on the target 6D information of the target object, thereby ensuring the driving safety of the unmanned vehicle.
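  • The patent only names Kalman filtering for this fusion step. As a highly simplified, hypothetical illustration, the snippet below fuses a single scalar component (for example longitudinal position) of the camera's first 6D information and the radar's second 6D information, which is the one-dimensional form of a Kalman measurement update; real 6D fusion would carry full state and covariance matrices:

```python
def fuse_measurements(z_cam, var_cam, z_radar, var_radar):
    """Fuse two scalar estimates of the same quantity (one Kalman update step):
    treat the camera estimate as the prior and the radar value as the
    measurement; the gain weights them by their variances."""
    k = var_cam / (var_cam + var_radar)          # Kalman gain
    fused = z_cam + k * (z_radar - z_cam)        # fused estimate
    fused_var = (1.0 - k) * var_cam              # fused variance
    return fused, fused_var

# Camera says 20.4 m (variance 1.0), radar says 20.0 m (variance 0.1):
print(fuse_measurements(20.4, 1.0, 20.0, 0.1))   # ~ (20.04, 0.09)
```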
  • an embodiment of the present application also provides a device for determining distortion parameters of a camera. As shown in FIG. 4, the device includes:
  • the calibration module 410 is used to calibrate the radar coordinate system and the camera coordinate system;
  • the first acquisition module 420 is configured to acquire point cloud data collected by radar, where the point cloud data includes at least one scanning point;
  • the second acquisition module 430 is configured to acquire image data collected by the camera, where the image data includes at least one first feature point;
  • the first determining module 440 is configured to determine the scanning point corresponding to the first feature point
  • the second determining module 450 is configured to determine the distortion parameters of the camera in real time according to the distortion parameter algorithm.
  • the first obtaining module 420 is specifically configured to:
  • obtain the point cloud data collected by the radar when the height information or angle information of the camera changes.
  • the first obtaining module 420 is specifically configured to:
  • periodically obtain the point cloud data collected by the radar.
  • the first determining module 440 is specifically configured to:
  • the first candidate scan point set is determined according to the first feature point, the first preset plane distance threshold, and the first depth distance probability value.
  • the first candidate scan point set includes the first scan point; the plane distance between the first scan point and the first feature point is less than the first preset plane distance threshold, and the first depth distance probability value is used to remove the background scan points;
  • the three scan points in the first candidate scan point set that form the triangle with the largest area are determined as the scan points corresponding to the first feature point.
  • the second determining module 450 is specifically configured to:
  • an equation set is constructed.
  • the equation set includes N n-order equations. Both N and n are integers greater than or equal to 1.
  • the n-th order equation is: Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
  • where Y_i_true is the true depth of the i-th first feature point,
  • Y_i_measured is the measured depth of the i-th first feature point,
  • and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera
  • the device further includes:
  • the third determining module 460 is used to determine the target object and the measurement depth corresponding to the target object, and obtain the true depth of the target object according to the n-th order equation.
  • the n-th order equation is: Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
  • where Z_true is the true depth of the target object, Z_measured is the measured depth of the target object, a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
  • the radar is millimeter wave radar or lidar.
  • the first determining module 440 is specifically configured to:
  • At least N first feature points are selected from the image data, at least N first feature points correspond to at least N scanning points, and N is an integer greater than or equal to 1.
  • the second determining module 450 is specifically configured to:
  • the distortion parameter is obtained by Zhang Zhengyou calibration method according to the coordinates of at least N first feature points and the coordinates of at least N scanning points.
  • the device further includes:
  • the fourth determining module 470 is configured to determine the real coordinates of the target object in the image coordinate system through the distortion parameter;
  • the fifth determining module 480 is used to determine the true depth of the target object in the camera coordinate system through a monocular range finding formula.
  • the radar is a lidar.
  • the device further includes:
  • the sixth determining module 490 is configured to determine the first 6D information of the target object according to the distortion parameter of the camera and the image data of the target object collected by the camera;
  • the sending module 4100 is used to send the first 6D information of the target object to the fusion module.
  • the embodiment of the present application provides a device for determining the distortion parameters of the camera.
  • the camera 110 in the unmanned vehicle calibrates the radar coordinate system and the camera coordinate system.
  • the camera 110 obtains the point cloud data collected by the radar 120 and the image data collected by the camera 110.
  • the point cloud data includes at least one scanning point
  • the image data includes at least one first feature point.
  • the camera 110 determines the scanning point corresponding to the first feature point according to the point cloud data and the image data, and determines the distortion parameter of the camera in real time according to the distortion parameter algorithm.
  • in this way, when the height information or angle information of the camera changes while the unmanned vehicle is driving, the camera 110 can determine the distortion parameters of the camera in real time according to the point cloud data collected by the radar 120 and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera 110.
  • an embodiment of the present application also provides a device for determining distortion parameters of a camera, including: a processor, a memory, and a communication interface; wherein the communication interface is used to communicate with other devices or a communication network, and the memory is used to Store one or more programs, the one or more programs include computer-executable instructions, when the device is running, the processor executes the computer-executable instructions stored in the memory to make the device execute the above-mentioned determining the distortion parameters of the camera Methods.
  • an embodiment of the present application also provides a system for determining distortion parameters of a camera, including a camera, a radar, and the aforementioned device for determining distortion parameters of a camera.
  • the embodiment of the present application also provides a computer-readable storage medium, including a program and instructions.
  • when the program or instructions run on a computer, the method for determining the distortion parameters of the camera described above is achieved.
  • an embodiment of the present application also provides a chip system, including a processor coupled to a memory, the memory storing program instructions; when the program instructions stored in the memory are executed by the processor, the method for determining the distortion parameters of the camera described above is implemented.
  • the embodiments of the present application also provide a computer program product containing instructions.
  • when the computer program product runs on a computer, the computer is caused to execute the aforementioned method for determining the distortion parameters of a camera.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

Embodiments of this application provide a method and device for determining distortion parameters of a camera, relating to the field of unmanned driving technology. The method includes: calibrating the radar coordinate system and the camera coordinate system; acquiring point cloud data collected by a radar, the point cloud data including at least one scan point; acquiring image data collected by the camera, the image data including at least one first feature point; determining the scan point corresponding to the first feature point; and determining the distortion parameters of the camera in real time according to a distortion parameter algorithm. With this application, when the height information or angle information of the camera changes while an unmanned vehicle is driving, the camera can determine the distortion parameters of the camera in real time based on the point cloud data collected by the radar and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera.

Description

Method and device for determining distortion parameters of a camera
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 27, 2019 under application No. 201910565719.2 and entitled "Method and device for determining distortion parameters of a camera", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of unmanned driving technology, and in particular to a method and device for determining distortion parameters of a camera.
Background
In the field of unmanned driving technology, an unmanned vehicle can measure the distance and speed of a target object through a camera and a radar to obtain the 6D information of the target object. The 6D information of the target object is the three-dimensional position information and three-dimensional velocity information of the target object in the vehicle body coordinate system or the world coordinate system. The unmanned vehicle can then determine, according to the 6D information of the target object, whether it needs to decelerate so as to avoid collision accidents. Because the pictures taken by the camera have large distortion, the 6D information of the target object measured by the unmanned vehicle through the camera has a large error.
At present, the camera is calibrated mainly by the Zhang Zhengyou calibration method to obtain the distortion parameters of the camera, so as to remove or compensate for the distortion of the pictures taken by the camera and thus obtain accurate 6D information of the target object. The Zhang Zhengyou calibration method determines the distortion parameters of the camera while the camera is offline. First, a technician places a sheet of paper with black and white square grids as a template on a plane in space. Then, the unmanned vehicle uses the camera to take images of the template in different directions. Afterwards, the unmanned vehicle determines the distortion parameters of the camera according to the two-dimensional coordinates of the feature points corresponding to the template in each image and the three-dimensional coordinates of the template in the world coordinate system.
However, when the state of the camera changes while the unmanned vehicle is driving (for example, the height of the camera above the ground changes, or the camera is deflected), the distortion parameters of the camera also change accordingly. If the distortion parameters of the camera obtained by the Zhang Zhengyou calibration method while the camera was offline are still used to correct the pictures taken by the camera, the 6D information of the target object measured by the unmanned vehicle through the camera will have a large error.
Summary
Embodiments of this application provide a method and device for determining distortion parameters of a camera. When the height information or angle information of the camera changes while an unmanned vehicle is driving, the camera can determine the distortion parameters of the camera in real time based on the point cloud data collected by the radar and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera. The technical solution is as follows:
In a first aspect, a method for determining distortion parameters of a camera is provided, including:
calibrating the radar coordinate system and the camera coordinate system;
acquiring point cloud data collected by a radar, the point cloud data including at least one scan point;
acquiring image data collected by the camera, the image data including at least one first feature point;
determining the scan point corresponding to the first feature point; and
determining the distortion parameters of the camera in real time according to a distortion parameter algorithm.
In a possible implementation, the acquiring point cloud data collected by a radar includes:
when the height information or angle information of the camera changes, acquiring the point cloud data collected by the radar.
In a possible implementation, the acquiring point cloud data collected by a radar includes:
periodically acquiring the point cloud data collected by the radar.
In a possible implementation, the determining the scan point corresponding to the first feature point includes:
determining a first candidate scan point set according to the first feature point, a first preset plane distance threshold and a first depth distance probability value, the first candidate scan point set including a first scan point, the plane distance between the first scan point and the first feature point being less than the first preset plane distance threshold, and the first depth distance probability value being used to remove background scan points; and
determining the three scan points in the first candidate scan point set that form the triangle with the largest area as the scan points corresponding to the first feature point.
In a possible implementation, the determining the distortion parameters of the camera in real time according to a distortion parameter algorithm includes:
determining the measured depth of the first feature point according to a monocular ranging formula;
determining the true depth of the first feature point according to the depth information of the scan points corresponding to the first feature point;
constructing an equation set according to the measured depths and true depths of at least N first feature points in the image data, the equation set including N n-th order equations, N and n both being integers greater than or equal to 1, the n-th order equation being:
Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
where Y_i_true is the true depth of the i-th first feature point, Y_i_measured is the measured depth of the i-th first feature point, and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera; and
solving the equation set to obtain the distortion parameters a_0, a_1, a_2, a_3, … a_n of the camera.
In a possible implementation, the method further includes:
determining a target object and the measured depth corresponding to the target object, and obtaining the true depth of the target object according to an n-th order equation, the n-th order equation being:
Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
where Z_true is the true depth of the target object, Z_measured is the measured depth of the target object, a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
In a possible implementation, the radar is a millimeter-wave radar or a lidar.
In a possible implementation, the determining the scan point corresponding to the first feature point includes:
determining the scan point corresponding to the first feature point according to the first feature point and a second preset plane distance threshold, the plane distance between the scan point corresponding to the first feature point and the first feature point being less than the second preset plane distance threshold; and
selecting at least N first feature points from the image data, the at least N first feature points corresponding to at least N scan points, N being an integer greater than or equal to 3.
In a possible implementation, the determining the distortion parameters of the camera in real time according to a distortion parameter algorithm includes:
in the camera coordinate system, obtaining the distortion parameters by the Zhang Zhengyou calibration method according to the coordinates of the at least N first feature points and the coordinates of the at least N scan points.
In a possible implementation, the method further includes:
determining the true coordinates of a target object in the image coordinate system through the distortion parameters; and
determining the true depth of the target object in the camera coordinate system through a monocular ranging formula.
In a possible implementation, the radar is a lidar.
In a possible implementation, the method further includes:
determining first 6D information of a target object according to the distortion parameters of the camera and image data of the target object collected by the camera; and
sending the first 6D information of the target object to a fusion module.
In a second aspect, a device for determining distortion parameters of a camera is provided, including:
a calibration module, configured to calibrate the radar coordinate system and the camera coordinate system;
a first acquisition module, configured to acquire point cloud data collected by a radar, the point cloud data including at least one scan point;
a second acquisition module, configured to acquire image data collected by the camera, the image data including at least one first feature point;
a first determining module, configured to determine the scan point corresponding to the first feature point; and
a second determining module, configured to determine the distortion parameters of the camera in real time according to a distortion parameter algorithm.
In a possible implementation, the first acquisition module is specifically configured to:
acquire the point cloud data collected by the radar when the height information or angle information of the camera changes.
In a possible implementation, the first acquisition module is specifically configured to:
periodically acquire the point cloud data collected by the radar.
In a possible implementation, the first determining module is specifically configured to:
determine a first candidate scan point set according to the first feature point, a first preset plane distance threshold and a first depth distance probability value, the first candidate scan point set including a first scan point, the plane distance between the first scan point and the first feature point being less than the first preset plane distance threshold, and the first depth distance probability value being used to remove background scan points; and
determine the three scan points in the first candidate scan point set that form the triangle with the largest area as the scan points corresponding to the first feature point.
In a possible implementation, the second determining module is specifically configured to:
determine the measured depth of the first feature point according to a monocular ranging formula;
determine the true depth of the first feature point according to the depth information of the scan points corresponding to the first feature point;
construct an equation set according to the measured depths and true depths of at least N first feature points in the image data, the equation set including N n-th order equations, N and n both being integers greater than or equal to 1, the n-th order equation being:
Y_i_true = a_0 + a_1·Y_i_measured + a_2·(Y_i_measured)^2 + … + a_n·(Y_i_measured)^n
where Y_i_true is the true depth of the i-th first feature point, Y_i_measured is the measured depth of the i-th first feature point, and a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera; and
solve the equation set to obtain the distortion parameters a_0, a_1, a_2, a_3, … a_n of the camera.
In a possible implementation, the device further includes:
a third determining module, configured to determine a target object and the measured depth corresponding to the target object, and obtain the true depth of the target object according to an n-th order equation, the n-th order equation being:
Z_true = a_0 + a_1·Z_measured + a_2·(Z_measured)^2 + … + a_n·(Z_measured)^n
where Z_true is the true depth of the target object, Z_measured is the measured depth of the target object, a_0, a_1, a_2, a_3, … a_n are the distortion parameters of the camera, and n is an integer greater than or equal to 1.
In a possible implementation, the radar is a millimeter-wave radar or a lidar.
In a possible implementation, the first determining module is specifically configured to:
determine the scan point corresponding to the first feature point according to the first feature point and a second preset plane distance threshold, the plane distance between the scan point corresponding to the first feature point and the first feature point being less than the second preset plane distance threshold; and
select at least N first feature points from the image data, the at least N first feature points corresponding to at least N scan points, N being an integer greater than or equal to 1.
In a possible implementation, the second determining module is specifically configured to:
in the camera coordinate system, obtain the distortion parameters by the Zhang Zhengyou calibration method according to the coordinates of the at least N first feature points and the coordinates of the at least N scan points.
In a possible implementation, the device further includes:
a fourth determining module, configured to determine the true coordinates of a target object in the image coordinate system through the distortion parameters; and
a fifth determining module, configured to determine the true depth of the target object in the camera coordinate system through a monocular ranging formula.
In a possible implementation, the radar is a lidar.
In a possible implementation, the device further includes:
a sixth determining module, configured to determine first 6D information of a target object according to the distortion parameters of the camera and image data of the target object collected by the camera; and
a sending module, configured to send the first 6D information of the target object to a fusion module.
In a third aspect, a method for obtaining the precise position of a target object is provided, including:
receiving a point cloud data acquisition request sent by a camera; and
acquiring point cloud data and sending the point cloud data to the camera, the point cloud data including at least one scan point.
In a possible implementation, the method further includes:
determining second 6D information of the target object according to the acquired point cloud data of the target object; and
sending the second 6D information of the target object to a fusion module.
In a fourth aspect, a method for obtaining the precise position of a target object is provided, including:
receiving first 6D information of the target object sent by a camera and second 6D information of the target object sent by a radar; and
performing Kalman filter processing on the first 6D information and the second 6D information to obtain target 6D information of the target object.
In a fifth aspect, a device for determining distortion parameters of a camera is provided, including a processor, a memory and a communication interface, where the communication interface is configured to communicate with other devices or a communication network, and the memory is configured to store one or more programs, the one or more programs including computer-executable instructions; when the device runs, the processor executes the computer-executable instructions stored in the memory to cause the device to perform the method for determining distortion parameters of a camera according to any one of the first aspect.
In a sixth aspect, a system for determining distortion parameters of a camera is provided, including a camera, a radar and the device for determining distortion parameters of a camera according to any one of the second aspect.
In a seventh aspect, a computer-readable storage medium is provided, including a program and instructions; when the program or instructions are run on a computer, the method for determining distortion parameters of a camera according to any one of the first aspect is implemented.
In an eighth aspect, a chip system is provided, including a processor coupled to a memory, the memory storing program instructions; when the program instructions stored in the memory are executed by the processor, the method for determining distortion parameters of a camera according to any one of the first aspect is implemented.
In a ninth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer is caused to perform the method for determining distortion parameters of a camera according to any one of the first aspect.
Embodiments of this application provide a method and device for determining distortion parameters of a camera. First, the camera in an unmanned vehicle calibrates the radar coordinate system and the camera coordinate system. Then, the camera acquires the point cloud data collected by the radar and the image data collected by the camera, where the point cloud data includes at least one scan point and the image data includes at least one first feature point. Afterwards, the camera determines the scan point corresponding to the first feature point according to the point cloud data and the image data, and determines the distortion parameters of the camera in real time according to a distortion parameter algorithm. In this way, when the height information or angle information of the camera changes while the unmanned vehicle is driving, the camera can determine the distortion parameters of the camera in real time based on the point cloud data collected by the radar and the image data collected by the camera, thereby reducing the error of the 6D information of the target object measured by the unmanned vehicle through the camera.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an unmanned vehicle provided by an embodiment of this application;
FIG. 2 is a flowchart of a method for determining distortion parameters of a camera provided by an embodiment of this application;
FIG. 3A is a schematic diagram of determining first candidate scan points of a first feature point provided by an embodiment of this application;
FIG. 3B is a schematic diagram of determining the scan points corresponding to a first feature point provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of a device for determining distortion parameters of a camera provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of a device for determining distortion parameters of a camera provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of a device for determining distortion parameters of a camera provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a device for determining distortion parameters of a camera provided by an embodiment of this application.
Detailed Description
Embodiments of this application provide a method for determining distortion parameters of a camera. The method can be applied to an unmanned vehicle, to an assisted-driving vehicle, or to an intelligent-driving vehicle, which is not limited in the embodiments of this application. The embodiments of this application take application of the method to an unmanned vehicle as an example for description; other cases are similar. Specifically, the method can be applied to a camera in an unmanned vehicle, to a radar in an unmanned vehicle, or to a fusion module in an unmanned vehicle, which is not limited in the embodiments of this application. The embodiments of this application take application of the method to a camera in an unmanned vehicle as an example; other cases are similar.

FIG. 1 is a schematic structural diagram of an unmanned vehicle provided by an embodiment of this application. As shown in FIG. 1, a camera 110, a radar 120, a fusion module 130 (not shown in the figure) and a decision module 140 (not shown in the figure) are installed on the unmanned vehicle 100. The fusion module 130 may be set in the camera 110 or the radar 120, or set independently in the unmanned vehicle; similarly, the decision module 140 may be set in the camera 110 or the radar 120, or set independently in the unmanned vehicle. The camera 110 is used to collect image data in real time, where the image data includes at least one first feature point. The radar 120 is used to collect point cloud data in real time, where the point cloud data includes at least one scan point.

When the height information or angle information of the camera 110 changes, the camera 110 acquires the point cloud data collected by the radar 120 and the image data collected by itself. Then, the camera 110 maps the scan points in the point cloud data into the image coordinate system corresponding to the image data, and determines, in the image coordinate system, the scan points corresponding to the first feature points in the image data. Afterwards, the camera 110 determines the distortion parameters of the camera 110 in real time according to the distortion parameter algorithm, the first feature points and the scan points corresponding to the first feature points. Subsequently, the camera 110 can identify the target object in the image data through image recognition technology, and determine the real coordinates of the target object in real time according to the distortion parameters of the camera 110. The camera 110 then determines the position information of the target object in real time according to the monocular ranging formula and the real coordinates of the target object, thereby obtaining the real-time 6D information corresponding to the target object (hereinafter referred to as first 6D information), and sends the first 6D information to the fusion module 130.

The radar 120 may perform clustering processing on the scan points corresponding to the target object collected in real time to obtain the real-time 6D information corresponding to the target object (hereinafter referred to as second 6D information), and send the second 6D information to the fusion module 130. Subsequently, the fusion module 130 may perform Kalman filter processing on the first 6D information and the second 6D information of the target object to obtain the target 6D information corresponding to the target object, and send the target 6D information corresponding to the target object to the decision module 140. After the decision module 140 receives the target 6D information corresponding to the target object, it can make a decision based on the target 6D information of the target object.
下面将结合具体实施方式,对本申请实施例提供的一种确定摄像头的畸变参数的方法进行详细的说明。如图2所示,具体步骤如下:
步骤201,将雷达坐标系和摄像头坐标系进行标定。
在本申请的实施例中,由于摄像头110和雷达120通常安装在无人驾驶车辆上的不同位置。因此,摄像头110对应的摄像头坐标系和雷达120对应的雷达坐标系不相同。为了保证确定出的摄像头110的畸变参数的准确性,在确定畸变参数之前,须将雷达坐标系与摄像头坐标系进行标定,也即将雷达坐标系的原点与摄像头坐标系的原点统一。
步骤202,获取雷达采集的点云数据。
其中,点云数据包括至少一个扫描点。
在本申请的实施例中,摄像头110将雷达坐标系和摄像头坐标系标定后,当摄像头110需要确定畸变参数时,摄像头110可以获取雷达120采集的点云数据。其中,点云数据包括至少一个扫描点,扫描点为雷达120采集到的物体表面的点,在毫米波雷达坐标系下,扫描点以二维坐标(即(X,Z))表示,在激光雷达坐标系下,点云 以三维坐标(即(X,Y,Z))表示;点云数据为各扫描点的集合。摄像头110需要确定畸变参数的情况可以有多种,本申请实施例提供了两种摄像头110需要确定畸变参数的情况,具体如下:
情况一,当摄像头的高度信息或角度信息变化时,获取雷达采集到的点云数据。
在本申请的实施例中,摄像头110可以周期性的获取自身当前的高度信息和角度信息(比如偏转角度信息和俯仰角度信息)。当摄像头110获取到的当前的高度信息与上一次获取到的高度信息不相同(也即摄像头的高度信息发生变化)时,摄像头110可以向雷达120发送点云数据获取请求。雷达120接收到点云数据获取请求后,向摄像头110发送点云数据获取响应,该点云数据获取响应中携带有雷达120采集的点云数据。摄像头110接收到点云数据获取响应后,可以对点云数据获取响应进行解析,得到点云数据获取响应中携带的雷达120采集到的点云数据。同理,当摄像头110获取到的当前的角度信息与上一次获取到的角度信息不同(也即摄像头的角度信息发生变化)时,摄像头110可以向雷达120发送点云数据获取请求。雷达120接收到点云数据获取请求后,向摄像头110发送点云数据获取响应,该点云数据获取响应中携带有雷达120采集的点云数据。摄像头110接收到点云数据获取响应后,可以对点云数据获取响应进行解析,得到点云数据获取响应中携带的雷达120采集到的点云数据。
情况二,周期性地获取雷达采集到的点云数据。
在本申请的实施例中,摄像头110可以直接根据预设的采样周期,周期性地向雷达120发送点云数据获取请求。雷达120接收到点云数据获取请求后,向摄像头110发送点云数据获取响应,该点云数据获取响应中携带有雷达120采集的点云数据。摄像头110接收到点云数据获取响应后,可以对点云数据获取响应进行解析,得到点云数据获取响应中携带的雷达120采集到的点云数据。
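下面给出上述两种触发获取点云数据的逻辑的一个简化示意(Python)。其中get_camera_pose、request_point_cloud、handle_point_cloud等函数均为示例中假设的接口,高度/角度阈值与采样周期也均为假设值,并非真实系统的API:

import time

HEIGHT_EPS = 0.01     # 高度变化判定阈值,单位米(示例值)
ANGLE_EPS = 0.5       # 角度变化判定阈值,单位度(示例值)
SAMPLE_PERIOD = 1.0   # 情况二中的采样周期,单位秒(示例值)

def pose_changed(prev, curr):
    # 情况一:判断摄像头的高度信息或角度信息是否发生变化
    if prev is None:
        return True
    if abs(curr["height"] - prev["height"]) > HEIGHT_EPS:
        return True
    return (abs(curr["yaw"] - prev["yaw"]) > ANGLE_EPS or
            abs(curr["pitch"] - prev["pitch"]) > ANGLE_EPS)

def acquisition_loop(get_camera_pose, request_point_cloud, handle_point_cloud,
                     periodic=False):
    # periodic=False 对应情况一,periodic=True 对应情况二
    prev_pose = None
    while True:
        trigger = periodic
        if not periodic:
            curr_pose = get_camera_pose()
            trigger = pose_changed(prev_pose, curr_pose)
            prev_pose = curr_pose
        if trigger:
            handle_point_cloud(request_point_cloud())  # 发送获取请求并处理响应中携带的点云数据
        time.sleep(SAMPLE_PERIOD)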
步骤203,获取摄像头采集的图像数据。
其中,图像数据包括至少一个第一特征点。
在本申请的实施例中,摄像头110在确定畸变参数时,还需要获取自身采集的图像数据。其中,图像数据包括至少一个第一特征点,第一特征点为图像灰度值发生剧烈变化的点。
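作为示意,下面给出一个假设借助OpenCV检测第一特征点的简化示例(此处以Shi-Tomasi角点为例,具体检测算法、特征点数目和阈值均为示例中的假设,本申请实施例不作限定):

import cv2
import numpy as np

def detect_first_feature_points(image_bgr, max_corners=200):
    # 在图像中检测灰度值发生剧烈变化的点,作为第一特征点
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    return corners.reshape(-1, 2)   # 每行为一个第一特征点的像素坐标(u, v)

# 用法示例(假设frame.png为摄像头实时采集的一帧图像)
# feature_points = detect_first_feature_points(cv2.imread("frame.png"))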
需要说明的是,步骤202和步骤203的执行过程不分先后顺序,摄像头110可以先执行步骤202后执行步骤203,也可以先执行步骤203后执行步骤202,本申请实施例不作限定。
步骤204,确定第一特征点对应的扫描点;
在本申请的实施例中,摄像头110获取到雷达120采集的点云数据和自身采集的图像数据后,可以将点云数据中的各扫描点映射到图像数据对应的图像坐标系中。然后,针对每个第一特征点,摄像头110可以在该图像坐标系中确定该第一特征点对应的扫描点。其中,摄像头110确定第一特征点对应的扫描点的方式可以有多种,本申请实施例提供了两种摄像头110确定第一特征点对应的扫描点的方式,具体如下:
方式一,摄像头110根据第一特征点、第一预设平面距离阈值和第一深度距离概率值确定第一候选扫描点集,并将第一候选扫描点集中组成面积最大三角形的三个扫描点,确定为第一特征点对应的扫描点。
其中,第一候选扫描点集包括第一扫描点,第一扫描点与第一特征点的平面距离小于第一预设平面距离阈值,第一深度距离概率值用于去除后景扫描点。
在本申请的实施例中,摄像头110中可以预先存储有第一预设平面距离阈值和第一深度距离概率值。该第一预设平面距离阈值和第一深度距离概率值可以由技术人员根据经验进行设置。针对每个第一特征点,摄像头110可以在图像坐标系中将以该第一特征点为圆心,第一预设平面距离阈值为半径范围内的扫描点,确定为该第一特征点对应的第一候选扫描点。在一种可能的实现方式中,摄像头110在确定第一特征点对应的第一候选扫描点时,如图3A所示,摄像头110还可以在图像坐标系中以该第一特征点为中心,以预设的长度值和宽度值(即第一预设平面距离阈值),确定该第一特征点对应的矩形邻域,并将该矩形邻域中的扫描点,确定为第一特征点对应的第一候选扫描点。然后,摄像头110根据第一预设区间(如0.3m),对上述范围内的第一候选扫描点的深度信息作直方图统计,直方图的纵坐标为各区间中第一候选扫描点的数目;统计每个区间中第一候选扫描点的数目后,确定连续N个区间中第一候选扫描点都非常稀疏(即满足第一深度距离概率值)的区间所对应的深度值,去除该深度值后面所有的第一候选扫描点(即去除后景扫描点),将剩下的第一候选扫描点组成第一特征点对应的第一候选扫描点集。
摄像头110得到该第一特征点对应的第一候选扫描点集后,如图3B所示,可以进一步在图像坐标系中,将第一候选扫描点集中组成面积最大三角形的三个扫描点,确定为第一特征点对应的扫描点。
需要说明的是,该实施方式一中,雷达为毫米波雷达或激光雷达。
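为便于理解方式一的处理流程,下面给出一个简化的Python示意实现,其中邻域半径、直方图区间宽度、"稀疏"判定阈值等均为示例中假设的参数,并非对本申请实施例的限定:

import numpy as np
from itertools import combinations

def first_candidate_set(feature_uv, scans_uv, scans_depth, radius=20.0,
                        bin_width=0.3, sparse_count=2, sparse_runs=3):
    # 返回某个第一特征点对应的第一候选扫描点集(索引),并按深度直方图去除后景扫描点
    dist = np.linalg.norm(scans_uv - feature_uv, axis=1)
    idx = np.where(dist < radius)[0]             # 平面距离小于第一预设平面距离阈值
    if idx.size == 0:
        return idx
    depth = scans_depth[idx]
    if depth.max() - depth.min() > bin_width:
        edges = np.arange(depth.min(), depth.max() + bin_width, bin_width)
        hist, edges = np.histogram(depth, bins=edges)
        run, cut = 0, None
        for i, count in enumerate(hist):
            run = run + 1 if count <= sparse_count else 0
            if run >= sparse_runs:                # 连续若干个区间都非常稀疏
                cut = edges[i - sparse_runs + 1]  # 稀疏区间所对应的深度值
                break
        if cut is not None:
            idx = idx[depth < cut]                # 去除该深度值后面的后景扫描点
    return idx

def max_area_triangle(scans_uv, candidate_idx):
    # 在第一候选扫描点集中选出组成面积最大三角形的三个扫描点
    best, best_area = None, -1.0
    for i, j, k in combinations(candidate_idx.tolist(), 3):
        a, b, c = scans_uv[i], scans_uv[j], scans_uv[k]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        if area > best_area:
            best, best_area = (i, j, k), area
    return best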
方式二,根据第一特征点、第二预设平面距离阈值确定第一特征点对应的扫描点。其中,第一特征点对应的扫描点与第一特征点的平面距离小于第二预设平面距离阈值。从图像数据中选择至少N个第一特征点,至少N个第一特征点对应至少N个扫描点,N为大于等于3的整数。
在本申请的实施例中,摄像头110中可以预先存储有第二预设平面距离阈值。该第二预设平面距离阈值可以由技术人员根据经验进行设置。针对每个第一特征点,摄像头110可以在图像坐标系中将以该第一特征点为圆心,第二预设平面距离阈值为半径范围内的扫描点,确定为该第一特征点对应的扫描点。然后,摄像头110可以在第一特征点中选择至少N个第一特征点,以及这至少N个第一特征点对应的至少N个扫描点。其中,N为大于等于3的整数。
需要说明的是,该实施方式二中,雷达为激光雷达。
步骤205,根据畸变参数算法实时确定摄像头的畸变参数。
在本申请的实施例中,摄像头110得到第一特征点对应的扫描点后,可以根据畸变参数算法实时确定摄像头110的畸变参数。这样,当无人驾驶车辆在行驶过程中,摄像头110可以根据雷达120采集的点云数据和自身采集的图像数据,实时确定畸变参数,从而降低无人驾驶车辆通过摄像头110测量到的目标物体的6D信息的误差。其中,基于摄像头110确定第一特征点对应的扫描点的不同方式,摄像头110根据畸变参数算法实时确定摄像头110的畸变参数的方式也不同,具体如下:
方式一,针对步骤204中的方式一,摄像头110根据畸变参数算法实时确定摄像头110的畸变参数的处理过程如下:
步骤一,根据单目测距公式确定第一特征点的测量深度。
在本申请的实施例中,针对每个第一特征点,摄像头110可以根据该第一特征点的坐标和单目测距公式,确定该第一特征点的测量深度。其中,单目测距公式如下:
Y i测量深度=f×H/y
其中,Y i测量深度为第i个第一特征点的测量深度,f为摄像头110的焦距,H为摄像头110距离地面的高度,y为第i个第一特征点的纵坐标。
步骤二,根据第一特征点对应的扫描点的深度信息,确定第一特征点的真实深度。
在本申请的实施例中,针对每个第一特征点,摄像头110得到该第一特征点对应的扫描点后,可以根据各扫描点的深度信息,确定该第一特征点的真实深度。其中,摄像头110可以将各扫描点到摄像头的距离的平均值,确定为该第一特征点的真实深度;摄像头110还可以根据各扫描点的坐标信息和最小二乘法,确定该第一特征点的真实深度。
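结合上述步骤一和步骤二,下面给出一个简化的Python示例:先按单目测距公式计算第一特征点的测量深度,再把该特征点对应的扫描点深度的平均值作为其真实深度(也可以改用最小二乘等方式,此处仅示意平均值做法;f、H及各深度数据均为示例中假设的值):

import numpy as np

f = 800.0   # 摄像头焦距,单位像素(示例值)
H = 1.2     # 摄像头距地面的高度,单位米(示例值)

def measured_depth(y_pixel):
    # 步骤一:按单目测距公式由第一特征点的纵坐标得到测量深度
    return f * H / y_pixel

def real_depth(scan_depths):
    # 步骤二:以该特征点对应的扫描点深度的平均值作为真实深度
    return float(np.mean(scan_depths))

# 用法示例:某个第一特征点的纵坐标为60像素,其对应三个扫描点的深度为15.8、16.1、16.0米
print(measured_depth(60.0), real_depth([15.8, 16.1, 16.0]))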
步骤三,根据图像数据中的至少N个第一特征点的测量深度和真实深度,构建方程组,方程组包括N个n阶方程,N和n均为大于等于1的整数,n阶方程为:
Y i真实深度=a 0+a 1×Y i测量深度+a 2×(Y i测量深度)^2+a 3×(Y i测量深度)^3+…+a n×(Y i测量深度)^n
其中,Y i真实深度为第i个第一特征点的真实深度,Y i测量深度为第i个第一特征点的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数。
在本申请的实施例中,摄像头110得到各第一特征点的测量深度和真实深度后,可以构建方程组。该方程组中包括N个n阶方程,N和n均为大于等于1的整数,n阶方程为:
Y i真实深度=a 0+a 1×Y i测量深度+a 2×(Y i测量深度)^2+a 3×(Y i测量深度)^3+…+a n×(Y i测量深度)^n
其中,Y i真实深度为第i个第一特征点的真实深度,Y i测量深度为第i个第一特征点的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数。
相应的,方程组为:
Y 1真实深度=a 0+a 1×Y 1测量深度+a 2×(Y 1测量深度)^2+…+a n×(Y 1测量深度)^n
Y 2真实深度=a 0+a 1×Y 2测量深度+a 2×(Y 2测量深度)^2+…+a n×(Y 2测量深度)^n
……
Y N真实深度=a 0+a 1×Y N测量深度+a 2×(Y N测量深度)^2+…+a n×(Y N测量深度)^n
步骤四,求解方程组,得到摄像头的畸变参数a 0,a 1,a 2,a 3,…a n。
在本申请的实施例中,摄像头110得到方程组后,可以进一步求解该方程组,得到摄像头110的畸变参数。
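对于上述由N个n阶方程组成的方程组,下面给出一个用最小二乘求解畸变参数的简化示例(Python,其中N、n的取值及测量深度、真实深度数据均为示例中假设的值):

import numpy as np

def solve_distortion_params(measured, real, n=3):
    # 按 Y真实深度 = a0 + a1*Y测量深度 + ... + an*Y测量深度^n 构建方程组,
    # 当方程个数N大于未知数个数时用最小二乘求解 a0...an
    measured = np.asarray(measured, dtype=float)
    real = np.asarray(real, dtype=float)
    A = np.vander(measured, n + 1, increasing=True)   # 每行为 [1, Y, Y^2, ..., Y^n]
    coeffs, *_ = np.linalg.lstsq(A, real, rcond=None)
    return coeffs                                     # [a0, a1, ..., an]

# 用法示例(测量深度与真实深度均为假设数据,n取2)
measured = [10.2, 20.9, 31.8, 43.0, 54.5]
real = [10.0, 20.0, 30.0, 40.0, 50.0]
print(solve_distortion_params(measured, real, n=2))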
可选的,摄像头110得到畸变参数后,可以根据畸变参数确定目标物体的真实深度,具体处理过程为:摄像头110确定目标物体及目标物体对应的测量深度,根据n阶方程得到目标物体的真实深度,n阶方程为:
Z 真实深度=a 0+a 1×Z 测量深度+a 2×(Z 测量深度)^2+a 3×(Z 测量深度)^3+…+a n×(Z 测量深度)^n
其中,Z 真实深度为目标物体的真实深度,Z 测量深度为目标物体的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数,n为大于等于1的整数。
在本申请的实施例中,摄像头110可以在图像数据中确定目标物体和目标物体的坐标,并根据单目测距公式和目标物体的坐标确定目标物体的测量深度。然后,摄像头110可以将摄像头110的畸变参数和目标物体的测量深度代入至n阶方程中,得到目标物体的真实深度。单目测距公式为:
Z 测量深度=f×H/y
其中,Z 测量深度为目标物体的测量深度,f为摄像头110的焦距,H为摄像头110距离地面的高度,y为目标物体的纵坐标。
n阶方程为:
Z 真实深度=a 0+a 1×Z 测量深度+a 2×(Z 测量深度)^2+a 3×(Z 测量深度)^3+…+a n×(Z 测量深度)^n
其中,Z 真实深度为目标物体的真实深度,Z 测量深度为目标物体的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数,n为大于等于1的整数。
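作为示意,将畸变参数和目标物体的测量深度代入该n阶方程的一个简化Python示例如下(畸变参数与测量深度均为假设值,仅为便于理解):

def correct_target_depth(z_measured, coeffs):
    # 将目标物体的测量深度代入n阶方程:Z真实深度 = a0 + a1*Z测量深度 + ... + an*Z测量深度^n
    result, power = 0.0, 1.0
    for a in coeffs:
        result += a * power
        power *= z_measured
    return result

# 用法示例:coeffs为已求得的畸变参数(此处用假设值),目标物体的测量深度为25.0米
print(correct_target_depth(25.0, [0.2, 0.95, 0.0005]))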
方式二,针对步骤204中的方式二,摄像头110根据畸变参数算法实时确定摄像头110的畸变参数的处理过程为:根据至少N个第一特征点的坐标和至少N个扫描点的坐标通过张正友标定法得到畸变参数。
在本申请的实施例中,摄像头110得到N个第一特征点的坐标和各第一特征点对应的扫描点的坐标后,可以根据张正友标定法,得到摄像头110对应的畸变参数。
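作为示意,下面给出一个假设借助OpenCV实现该标定的简化示例:把至少N个第一特征点的像素坐标作为图像点、把对应扫描点的三维坐标作为已知空间点,并给定内参初值后调用标定接口求取畸变系数。需要说明的是,严格的张正友标定法通常使用平面标定板的多幅图像,此处的输入组织方式以及K_init等参数均为示例中的假设,仅为便于理解:

import cv2
import numpy as np

def calibrate_distortion(scan_points_3d, feature_points_2d, image_size, K_init):
    # scan_points_3d: N x 3 的扫描点三维坐标;feature_points_2d: N x 2 的第一特征点像素坐标
    obj = [np.asarray(scan_points_3d, dtype=np.float32)]
    img = [np.asarray(feature_points_2d, dtype=np.float32)]
    dist_init = np.zeros(5, dtype=np.float32)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj, img, image_size, K_init.astype(np.float32), dist_init,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)   # 非平面已知点需给定内参初值
    return dist.ravel()                        # 畸变系数(k1, k2, p1, p2, k3)

# 用法示例(数据与内参初值均为假设值)
# dist = calibrate_distortion(scan_points_3d, feature_points_2d, (1280, 720),
#                             np.array([[800., 0., 640.], [0., 800., 360.], [0., 0., 1.]]))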
可选的,摄像头110得到畸变参数后,可以根据畸变参数确定目标物体的真实深度,具体处理过程为:
步骤一,通过畸变参数确定目标物体在图像坐标系的真实坐标。
在本申请的实施例中,摄像头110得到畸变参数后,可以将畸变参数和目标物体的坐标代入至N阶方程,得到目标物体在图像坐标系的真实坐标。其中,N阶方程如下:
x 真实=x 图像×(1+a 1×r^2+a 2×r^4+…+a n×r^2n)
y 真实=y 图像×(1+a 1×r^2+a 2×r^4+…+a n×r^2n)
其中,x 真实为目标物体的经过畸变处理后的真实坐标的横坐标,x 图像为目标物体的未经过畸变处理的测量坐标的横坐标,y 真实为目标物体的经过畸变处理后的真实坐标的纵坐标,y 图像为目标物体的未经过畸变处理的测量坐标的纵坐标,a 1,a 2,a 3,…a n为畸变参数,
r^2=x 图像^2+y 图像^2
步骤二,通过单目测距公式确定目标物体在摄像头坐标系下的真实深度。
在本申请的实施例中,摄像头110得到目标物体的真实坐标后,可以根据目标物体的真实坐标和单目测距公式确定目标物体的真实深度。其中,摄像头根据目标物体的真实坐标和单目测距公式确定目标物体的真实深度的处理过程与步骤205中摄像头110根据第一特征点的坐标和单目测距公式,确定第一特征点的测量深度的处理过程类似,此处不再赘述。
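结合上述步骤一和步骤二,下面给出一个简化的示意实现(Python)。其中采用了常见的径向畸变多项式形式(与上文N阶方程一致,r^2=x 图像^2+y 图像^2),畸变阶数、坐标是否归一化以及f、H的取值等细节均为示例中的假设:

f = 800.0   # 摄像头焦距,单位像素(示例值)
H = 1.2     # 摄像头距地面的高度,单位米(示例值)

def undistort_point(x_img, y_img, dist_coeffs):
    # 步骤一:按 x真实 = x图像*(1 + a1*r^2 + a2*r^4 + ...) 由测量坐标得到真实坐标
    r2 = x_img ** 2 + y_img ** 2
    factor = 1.0
    for i, a in enumerate(dist_coeffs, start=1):
        factor += a * r2 ** i
    return x_img * factor, y_img * factor

def target_real_depth(y_real):
    # 步骤二:由真实纵坐标按单目测距公式得到目标物体在摄像头坐标系下的真实深度
    return f * H / y_real

# 用法示例(坐标与畸变参数均为假设值)
x_real, y_real = undistort_point(320.0, 180.0, [-2.0e-7, 3.0e-14])
print(x_real, y_real, target_real_depth(y_real))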
可选的,摄像头110得到畸变参数后,可以根据摄像头的畸变参数和摄像头采集到的目标物体的图像数据,确定目标物体的第一6D信息,并向融合模块发送目标物体的第一6D信息。
在本申请的实施例中,摄像头110得到畸变参数后,可以根据畸变参数和摄像头实时采集到的目标物体的图像数据,得到目标物体的位置信息,从而得到目标物体的实时6D信息(即第一6D信息),并将该目标物体的第一6D信息发送给融合模块130,以便融合模块130接收到摄像头110发送的目标物体的第一6D信息和雷达120发送的目标物体的第二6D信息后,可以对第一6D信息和第二6D信息进行卡尔曼滤波处理,得到目标物体的目标6D信息。
本申请的实施例提供了一种确定摄像头的畸变参数的方法,首先,无人驾驶车辆中的摄像头110将雷达坐标系和摄像头坐标系进行标定。然后,摄像头110获取雷达120采集的点云数据和摄像头110采集的图像数据。其中,点云数据包括至少一个扫描点,图像数据包括至少一个第一特征点。之后,摄像头110根据点云数据和图像数据确定第一特征点对应的扫描点,并根据畸变参数算法实时确定摄像头的畸变参数。这样,当无人驾驶车辆在行驶过程中,摄像头110的高度信息或角度信息发生变化时,摄像头110可以根据雷达120采集的点云数据和摄像头采集的图像数据,实时确定摄像头的畸变参数,从而降低无人驾驶车辆通过摄像头110测量到的目标物体的6D信息的误差。
本申请实施例还提供了一种获得目标物体精准位置的方法,该方法可以应用于无人驾驶车辆中的雷达120,具体步骤如下:
步骤一,接收摄像头发送的点云数据获取请求。
在本申请的实施例中,当摄像头110需要确定畸变参数时,摄像头110可以向雷达120发送点云数据获取请求。
步骤二,获取点云数据,向摄像头发送点云数据。
其中,点云数据包括至少一个扫描点。
在本申请的实施例中,雷达120接收到摄像头110发送的点云数据获取请求后,可以获取自身采集的点云数据,并向摄像头110发送该点云数据。其中,点云数据包括至少一个扫描点。
可选的,雷达120还可以根据获取到的目标物体的点云数据,确定目标物体的第二6D信息,并向融合模块发送目标物体的第二6D信息。
在本申请的实施例中,雷达120对实时采集到的目标物体的点云数据进行聚类处理,得到目标物体的实时6D信息(即第二6D信息),并将该目标物体的第二6D信息发送给融合模块130。
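下面给出一个非常简化的示意(Python):以聚类后扫描点的质心代表目标物体的三维位置,以相邻两帧质心之差估计三维速度,从而组成第二6D信息;实际的聚类与跟踪算法本申请实施例不作限定,以下数据均为示例中假设的值:

import numpy as np

def second_6d_info(points_prev, points_curr, dt):
    # 对目标物体的扫描点作简单聚合:质心作为三维位置,相邻帧质心之差除以时间间隔作为三维速度
    p_prev = np.mean(np.asarray(points_prev, dtype=float), axis=0)
    p_curr = np.mean(np.asarray(points_curr, dtype=float), axis=0)
    velocity = (p_curr - p_prev) / dt
    return np.concatenate([p_curr, velocity])   # [x, y, z, vx, vy, vz]

# 用法示例:同一目标物体在相邻两帧(间隔0.1秒)的扫描点
prev = [[10.0, 0.5, 0.0], [10.2, 0.6, 0.1]]
curr = [[10.8, 0.5, 0.0], [11.0, 0.6, 0.1]]
print(second_6d_info(prev, curr, dt=0.1))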
本申请实施例还提供了一种获得目标物体精准位置的方法,该方法可以应用于无人驾驶车辆中的融合模块130,具体步骤为:融合模块130接收摄像头发送的目标物体的第一6D信息和雷达发送的目标物体的第二6D信息,对第一6D信息和第二6D信息进行卡尔曼滤波处理,得到目标物体的目标6D信息。
在本申请的实施例中,融合模块130接收到摄像头110发送的目标物体的第一6D信息和雷达120发送的目标物体的第二6D信息后,可以对第一6D信息和第二6D信息进行卡尔曼滤波处理,得到目标物体的目标6D信息,并将目标物体对应的目标6D信息发送至决策模块140。这样,决策模块140接收到目标物体对应的目标6D信息后,可以根据目标物体的目标6D信息进行决策,从而保证无人驾驶车辆的行驶安全。
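下面给出卡尔曼滤波融合这一步的一个简化示意(Python):把第一6D信息与第二6D信息视为同一状态的两个带噪声的观测,按各自协方差计算卡尔曼增益后加权,得到目标6D信息;其中的协方差矩阵P1、P2及各观测数值均为示例中假设的值:

import numpy as np

def kalman_fuse(x_cam, P_cam, x_radar, P_radar):
    # 以摄像头的第一6D信息为先验,用雷达的第二6D信息做量测更新
    K = P_cam @ np.linalg.inv(P_cam + P_radar)        # 卡尔曼增益
    x_fused = x_cam + K @ (x_radar - x_cam)           # 融合后的目标6D信息
    P_fused = (np.eye(len(x_cam)) - K) @ P_cam        # 融合后的协方差
    return x_fused, P_fused

# 用法示例(数值均为假设值):前三维为位置,后三维为速度
x1 = np.array([10.0, 0.5, 0.0, 5.0, 0.0, 0.0])        # 第一6D信息(摄像头)
x2 = np.array([10.4, 0.4, 0.0, 5.2, 0.0, 0.0])        # 第二6D信息(雷达)
P1 = np.diag([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])
P2 = np.diag([0.2, 0.2, 0.2, 0.1, 0.1, 0.1])
print(kalman_fuse(x1, P1, x2, P2)[0])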
基于相同的技术构思,本申请实施例还提供了一种确定摄像头的畸变参数的装置,如图4所示,该装置包括:
标定模块410,用于将雷达坐标系和摄像头坐标系进行标定;
第一获取模块420,用于获取雷达采集的点云数据,点云数据包括至少一个扫描点;
第二获取模块430,用于获取摄像头采集的图像数据,图像数据包括至少一个第一特征点;
第一确定模块440,用于确定第一特征点对应的扫描点;
第二确定模块450,用于根据畸变参数算法实时确定摄像头的畸变参数。
在一种可能的实现方式中,第一获取模块420,具体用于:
当摄像头的高度信息或角度信息变化时,获取雷达采集到的点云数据。
在一种可能的实现方式中,第一获取模块420,具体用于:
周期性地获取雷达采集到的点云数据。
在一种可能的实现方式中,第一确定模块440,具体用于:
根据第一特征点、第一预设平面距离阈值和第一深度距离概率值确定第一候选扫描点集,第一候选扫描点集包括第一扫描点,第一扫描点与第一特征点的平面距离小于第一预设平面距离阈值,第一深度距离概率值用于去除后景扫描点;
将第一候选扫描点集中组成面积最大三角形的三个扫描点,确定为第一特征点对应的扫描点。
在一种可能的实现方式中,第二确定模块450,具体用于:
根据单目测距公式确定第一特征点的测量深度;
根据第一特征点对应的扫描点的深度信息,确定第一特征点的真实深度;
根据图像数据中的至少N个第一特征点的测量深度和真实深度,构建方程组,方程组包括N个n阶方程,N和n均为大于等于1的整数,n阶方程为:
Y i真实深度=a 0+a 1×Y i测量深度+a 2×(Y i测量深度)^2+a 3×(Y i测量深度)^3+…+a n×(Y i测量深度)^n
其中,Y i真实深度为第i个第一特征点的真实深度,Y i测量深度为第i个第一特征点的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数;
求解方程组,得到摄像头的畸变参数a 0,a 1,a 2,a 3,…a n。
在一种可能的实现方式中,如图5所示,该装置还包括:
第三确定模块460,用于确定目标物体及目标物体对应的测量深度,根据n阶方程得到目标物体的真实深度,n阶方程为:
Z 真实深度=a 0+a 1×Z 测量深度+a 2×(Z 测量深度)^2+a 3×(Z 测量深度)^3+…+a n×(Z 测量深度)^n
其中,Z 真实深度为目标物体的真实深度,Z 测量深度为目标物体的测量深度,a 0,a 1,a 2,a 3,…a n为摄像头的畸变参数,n为大于等于1的整数。
在一种可能的实现方式中,雷达为毫米波雷达或激光雷达。
在一种可能的实现方式中,第一确定模块440,具体用于:
根据第一特征点、第二预设平面距离阈值确定第一特征点对应的扫描点,第一特征点对应的扫描点与第一特征点的平面距离小于第二预设平面距离阈值;
从图像数据中选择至少N个第一特征点,至少N个第一特征点对应至少N个扫描点,N为大于等于3的整数。
在一种可能的实现方式中,第二确定模块450,具体用于:
在摄像头坐标系中,根据至少N个第一特征点的坐标和至少N个扫描点的坐标通过张正友标定法得到畸变参数。
在一种可能的实现方式中,如图6所示,该装置还包括:
第四确定模块470,用于通过畸变参数确定目标物体在图像坐标系的真实坐标;
第五确定模块480,用于通过单目测距公式确定目标物体在摄像头坐标系下的真实深度。
在一种可能的实现方式中,雷达为激光雷达。
在一种可能的实现方式中,如图7所示,该装置还包括:
第六确定模块490,用于根据摄像头的畸变参数和摄像头采集到的目标物体的图像数据,确定目标物体的第一6D信息;
发送模块4100,用于向融合模块发送目标物体的第一6D信息。
本申请的实施例提供了一种确定摄像头的畸变参数的装置,首先,无人驾驶车辆中的摄像头110将雷达坐标系和摄像头坐标系进行标定。然后,摄像头110获取雷达120采集的点云数据和摄像头110采集的图像数据。其中,点云数据包括至少一个扫描点,图像数据包括至少一个第一特征点。之后,摄像头110根据点云数据和图像数据确定第一特征点对应的扫描点,并根据畸变参数算法实时确定摄像头的畸变参数。这样,当无人驾驶车辆在行驶过程中,摄像头110的高度信息或角度信息发生变化时,摄像头110可以根据雷达120采集的点云数据和摄像头采集的图像数据,实时确定摄像头的畸变参数,从而降低无人驾驶车辆通过摄像头110测量到的目标物体的6D信息的误差。
基于相同的技术构思,本申请实施例还提供了一种确定摄像头的畸变参数的装置,包括:处理器、存储器和通信接口;其中,通信接口用于与其他设备或通信网络通信,存储器用于存储一个或多个程序,所述一个或多个程序包括计算机执行指令,当该装置运行时,处理器执行存储器存储的所述计算机执行指令以使该装置执行上述所述的确定摄像头的畸变参数的方法。
基于相同的技术构思,本申请实施例还提供了一种确定摄像头的畸变参数的***,包括摄像头、雷达和上述所述的确定摄像头的畸变参数的装置。
基于相同的技术构思,本申请实施例还提供了一种计算机可读存储介质,包括程序和指令,当所述程序或指令在计算机上运行时,上述所述的确定摄像头的畸变参数的方法被实现。
基于相同的技术构思,本申请实施例还提供了一种芯片***,包括处理器,所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时实现上述所述的确定摄像头的畸变参数的方法。
基于相同的技术构思,本申请实施例还提供了一种包含指令的计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述所述的确定摄像头的畸变参数的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (28)

  1. 一种确定摄像头的畸变参数的方法,其特征在于,包括:
    将雷达坐标系和摄像头坐标系进行标定;
    获取雷达采集的点云数据,所述点云数据包括至少一个扫描点;
    获取摄像头采集的图像数据,所述图像数据包括至少一个第一特征点;
    确定所述第一特征点对应的扫描点;
    根据畸变参数算法实时确定所述摄像头的畸变参数。
  2. 根据权利要求1所述的方法,其特征在于,所述获取雷达采集的点云数据,包括:
    当所述摄像头的高度信息或角度信息变化时,获取雷达采集到的点云数据。
  3. 根据权利要求1所述的方法,其特征在于,所述获取雷达采集的点云数据,包括:
    周期性地获取雷达采集到的点云数据。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述确定所述第一特征点对应的扫描点,包括:
    根据所述第一特征点、第一预设平面距离阈值和第一深度距离概率值确定第一候选扫描点集,所述第一候选扫描点集包括第一扫描点,所述第一扫描点与所述第一特征点的平面距离小于所述第一预设平面距离阈值,所述第一深度距离概率值用于去除后景扫描点;
    将所述第一候选扫描点集中组成面积最大三角形的三个扫描点,确定为所述第一特征点对应的扫描点。
  5. 根据权利要求4所述的方法,其特征在于,所述根据畸变参数算法实时确定所述摄像头的畸变参数,包括:
    根据单目测距公式确定所述第一特征点的测量深度;
    根据所述第一特征点对应的扫描点的深度信息,确定所述第一特征点的真实深度;
    根据所述图像数据中的至少N个第一特征点的测量深度和真实深度,构建方程组,所述方程组包括N个n阶方程,所述N和所述n均为大于等于1的整数,所述n阶方程为:
    Y i真实深度=a 0+a 1×Y i测量深度+a 2×(Y i测量深度)^2+a 3×(Y i测量深度)^3+…+a n×(Y i测量深度)^n
    其中,Y i真实深度为第i个第一特征点的真实深度,Y i测量深度为第i个第一特征点的测量深度,a 0,a 1,a 2,a 3,…a n为所述摄像头的畸变参数;
    求解所述方程组,得到所述摄像头的畸变参数a 0,a 1,a 2,a 3,…a n。
  6. 根据权利要求5所述的方法,其特征在于,还包括:
    确定目标物体及所述目标物体对应的测量深度,根据n阶方程得到所述目标物体的真实深度,所述n阶方程为:
    Z 真实深度=a 0+a 1×Z 测量深度+a 2×(Z 测量深度)^2+a 3×(Z 测量深度)^3+…+a n×(Z 测量深度)^n
    其中,Z 真实深度为所述目标物体的真实深度,Z 测量深度为所述目标物体的测量深度, a 0,a 1,a 2,a 3,…a n为所述摄像头的畸变参数,所述n为大于等于1的整数。
  7. 根据权利要求6所述的方法,其特征在于,所述雷达为毫米波雷达或激光雷达。
  8. 根据权利要求1-3任一项所述的方法,其特征在于,所述确定所述第一特征点对应的扫描点,包括:
    根据所述第一特征点、第二预设平面距离阈值确定所述第一特征点对应的扫描点,所述第一特征点对应的扫描点与所述第一特征点的平面距离小于所述第二预设平面距离阈值;
    从所述图像数据中选择至少N个第一特征点,所述至少N个第一特征点对应至少N个扫描点,所述N为大于等于3的整数。
  9. 根据权利要求8所述的方法,其特征在于,所述根据畸变参数算法实时确定所述摄像头的畸变参数,包括:
    在摄像头坐标系中,根据所述至少N个第一特征点的坐标和所述至少N个扫描点在摄像头坐标系中的坐标通过张正友标定法得到畸变参数。
  10. 根据权利要求9所述的方法,其特征在于,还包括:
    通过所述畸变参数确定目标物体在图像坐标系的真实坐标;
    通过单目测距公式确定所述目标物体在所述摄像头坐标系下的真实深度。
  11. 根据权利要求10所述的方法,其特征在于,所述雷达为激光雷达。
  12. 根据权利要求1-11任一所述的方法,其特征在于,还包括:
    根据所述摄像头的畸变参数和所述摄像头采集到的目标物体的图像数据,确定所述目标物体的第一6D信息;
    向融合模块发送所述目标物体的所述第一6D信息。
  13. 一种确定摄像头的畸变参数的装置,其特征在于,包括:
    标定模块,用于将雷达坐标系和摄像头坐标系进行标定;
    第一获取模块,用于获取雷达采集的点云数据,所述点云数据包括至少一个扫描点;
    第二获取模块,用于获取摄像头采集的图像数据,所述图像数据包括至少一个第一特征点;
    第一确定模块,用于确定所述第一特征点对应的扫描点;
    第二确定模块,用于根据畸变参数算法实时确定所述摄像头的畸变参数。
  14. 根据权利要求13所述的装置,其特征在于,所述第一获取模块,具体用于:
    当所述摄像头的高度信息或角度信息变化时,获取雷达采集到的点云数据。
  15. 根据权利要求13所述的装置,其特征在于,所述第一获取模块,具体用于:
    周期性地获取雷达采集到的点云数据。
  16. 根据权利要求13-15任一项所述的装置,其特征在于,所述第一确定模块,具体用于:
    根据所述第一特征点、第一预设平面距离阈值和第一深度距离概率值确定第一候选扫描点集,所述第一候选扫描点集包括第一扫描点,所述第一扫描点与所述第一特征点的平面距离小于所述第一预设平面距离阈值,所述第一深度距离概率值用于去除 后景扫描点;
    将所述第一候选扫描点集中组成面积最大三角形的三个扫描点,确定为所述第一特征点对应的扫描点。
  17. 根据权利要求16所述的装置,其特征在于,所述第二确定模块,具体用于:
    根据单目测距公式确定所述第一特征点的测量深度;
    根据所述第一特征点对应的扫描点的深度信息,确定所述第一特征点的真实深度;
    根据所述图像数据中的至少N个第一特征点的测量深度和真实深度,构建方程组,所述方程组包括N个n阶方程,所述N和所述n均为大于等于1的整数,所述n阶方程为:
    Y i真实深度=a 0+a 1×Y i测量深度+a 2×(Y i测量深度)^2+a 3×(Y i测量深度)^3+…+a n×(Y i测量深度)^n
    其中,Y i真实深度为第i个第一特征点的真实深度,Y i测量深度为第i个第一特征点的测量深度,a 0,a 1,a 2,a 3,…a n为所述摄像头的畸变参数;
    求解所述方程组,得到所述摄像头的畸变参数a 0,a 1,a 2,a 3,…a n。
  18. 根据权利要求17所述的装置,其特征在于,还包括:
    第三确定模块,用于确定目标物体及所述目标物体对应的测量深度,根据n阶方程得到所述目标物体的真实深度,所述n阶方程为:
    Z 真实深度=a 0+a 1×Z 测量深度+a 2×(Z 测量深度)^2+a 3×(Z 测量深度)^3+…+a n×(Z 测量深度)^n
    其中,Z 真实深度为所述目标物体的真实深度,Z 测量深度为所述目标物体的测量深度,a 0,a 1,a 2,a 3,…a n为所述摄像头的畸变参数,所述n为大于等于1的整数。
  19. 根据权利要求18所述的装置,其特征在于,所述雷达为毫米波雷达或激光雷达。
  20. 根据权利要求13-15任一项所述的装置,其特征在于,所述第一确定模块,具体用于:
    根据所述第一特征点、第二预设平面距离阈值确定所述第一特征点对应的扫描点,所述第一特征点对应的扫描点与所述第一特征点的平面距离小于所述第二预设平面距离阈值;
    从所述图像数据中选择至少N个第一特征点,所述至少N个第一特征点对应至少N个扫描点,所述N为大于等于3的整数。
  21. 根据权利要求20所述的装置,其特征在于,所述第二确定模块,具体用于:
    在摄像头坐标系中,根据所述至少N个第一特征点的坐标和所述至少N个扫描点的坐标通过张正友标定法得到畸变参数。
  22. 根据权利要求21所述的装置,其特征在于,还包括:
    第四确定模块,用于通过所述畸变参数确定目标物体在图像坐标系的真实坐标;
    第五确定模块,用于通过单目测距公式确定所述目标物体在所述摄像头坐标系下的真实深度。
  23. 根据权利要求22所述的装置,其特征在于,所述雷达为激光雷达。
  24. 根据权利要求13-23任一所述的装置,其特征在于,所述装置还包括:
    第六确定模块,用于根据所述摄像头的畸变参数和所述摄像头采集到的目标物体 的图像数据,确定所述目标物体的第一6D信息;
    发送模块,用于向融合模块发送所述目标物体的所述第一6D信息。
  25. 一种确定摄像头的畸变参数的装置,其特征在于,包括:处理器、存储器和通信接口;其中,通信接口用于与其他设备或通信网络通信,存储器用于存储一个或多个程序,所述一个或多个程序包括计算机执行指令,当该装置运行时,处理器执行存储器存储的所述计算机执行指令以使该装置执行如权利要求1-12任一项所述的确定摄像头的畸变参数的方法。
  26. 一种确定摄像头的畸变参数的***,其特征在于,包括摄像头、雷达和如权利要求13-24任一所述的确定摄像头的畸变参数的装置。
  27. 一种计算机可读存储介质,其特征在于,包括程序和指令,当所述程序或指令在计算机上运行时,如权利要求1-12任一项所述的确定摄像头的畸变参数的方法被实现。
  28. 一种芯片***,其特征在于,包括处理器,所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时实现权利要求1-12任一项所述的确定摄像头的畸变参数的方法。