CN109544648B - Calibration method and device - Google Patents


Publication number
CN109544648B
Authority
CN
China
Prior art keywords
camera
target object
pixel position
external parameters
objective function
Prior art date
Legal status
Active
Application number
CN201811458563.XA
Other languages
Chinese (zh)
Other versions
CN109544648A (en)
Inventor
刘艺成
彭军
楼天城
Current Assignee
Beijing Xiaoma Huixing Technology Co ltd
Beijing PonyAi Science And Technology Co ltd
Original Assignee
Beijing Xiaoma Huixing Technology Co ltd
Beijing PonyAi Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaoma Huixing Technology Co ltd, Beijing PonyAi Science And Technology Co ltd filed Critical Beijing Xiaoma Huixing Technology Co ltd
Priority to CN201811458563.XA priority Critical patent/CN109544648B/en
Publication of CN109544648A publication Critical patent/CN109544648A/en
Application granted granted Critical
Publication of CN109544648B publication Critical patent/CN109544648B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The application provides a calibration method and a calibration device. The method includes: acquiring the spatial position of a target object, a current image of the target object captured by a camera mounted on an autonomous vehicle, and reference external parameters of the camera; calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera, where the reference image is the image the camera captures of the target object when its external parameters equal the reference external parameters; extracting a second pixel position of the target object from the current image; and determining the current external parameters of the camera according to the first pixel position and the second pixel position. By combining the first pixel position computed from the reference external parameters with the second pixel position extracted from the current image, the method determines the current external parameters of the camera mounted on the autonomous vehicle and improves the accuracy with which the autonomous vehicle senses its surroundings.

Description

Calibration method and device
Technical Field
The application relates to the technical field of vehicles, in particular to a calibration method and a calibration device.
Background
A self-driving automobile (also called a driverless car, computer-driven car, or wheeled mobile robot) is an intelligent vehicle that achieves unmanned driving through a computer system. Autonomous vehicles rely on data generated by the various sensors mounted on them, and the quality of this data determines key aspects of performance such as driving safety and ride comfort.
Specifically, the environment around the autonomous vehicle must be sensed in real time during driving, including the type and state of surrounding objects, to ensure safe travel. The image information acquired by the camera is particularly critical: most real-time perception of the surroundings depends on it. For example, point cloud data of a target object acquired by a laser radar and image information of the same object acquired by a camera can be matched based on the relative position of the laser radar and the camera, thereby determining the category and state of the target object.
However, after the autonomous vehicle has driven for a period of time or traveled on a bumpy road, the position of the camera changes. The vehicle's perception of the surrounding environment through its sensors then deviates, creating a potential safety hazard during driving.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a calibration method and apparatus that can accurately calibrate the current external parameters of a camera installed on an autonomous vehicle and improve the accuracy with which the autonomous vehicle senses its surroundings.
In a first aspect, an embodiment of the present application provides a calibration method, including:
acquiring a spatial position of a target object, acquiring a current image of the target object captured by a camera mounted on an automatic driving automobile, and acquiring a reference external parameter of the camera;
calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera; the reference image is an image obtained by shooting the target object by the camera when the external parameter is the reference external parameter;
extracting a second pixel position of the target object in the current image;
and determining the current external parameters of the camera according to the first pixel position and the second pixel position.
With reference to the first aspect, this embodiment provides a first possible implementation manner of the first aspect, where calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and a reference external parameter of the camera includes:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in a reference image;
the first objective function is a function in which the first pixel position is used as a dependent variable, the external parameter of the camera and the spatial position of the target object are used as independent variables, and the spatial position of a reference sensor on the autonomous vehicle and a preset internal parameter of the camera are used as constant terms.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where the determining, according to the first pixel position and the second pixel position, a current external parameter of the camera includes:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
and determining the external parameters of the camera as the current external parameters of the camera when the second objective function takes the minimum value.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where the determining, according to the first pixel position and the second pixel position, a current external parameter of the camera includes:
calculating an error value between the first pixel location and the second pixel location;
transforming external parameters of the camera until an error value between the first pixel position and the second pixel position is a preset value;
and when the error value is a preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where after determining the current external parameter of the camera according to the first pixel position and the second pixel position, the method further includes:
and determining the state information of the automatic driving automobile based on the current external parameters of the camera and sensor data acquired by other sensors in the automatic driving automobile.
In a second aspect, an embodiment of the present application further provides a calibration apparatus, where the calibration apparatus includes:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring the spatial position of a target object, acquiring a current image of the target object shot by a camera arranged on an automatic driving automobile and acquiring a reference external parameter of the camera;
the calculation module is used for calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera; the reference image is an image obtained by shooting the target object by the camera when the external parameter is the reference external parameter;
the extraction module is used for extracting a second pixel position of the target object in the current image;
and the first determining module is used for determining the current external parameters of the camera according to the first pixel position and the second pixel position.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation manner of the second aspect, where the calculating module is specifically configured to:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in a reference image;
the first objective function is a function in which the first pixel position is used as a dependent variable, the external parameter of the camera and the spatial position of the target object are used as independent variables, and the spatial position of a reference sensor on the autonomous vehicle and a preset internal parameter of the camera are used as constant terms.
With reference to the first possible implementation manner of the second aspect, an embodiment of the present application provides a second possible implementation manner of the second aspect, where the first determining module is specifically configured to:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
and determining the external parameters of the camera as the current external parameters of the camera when the second objective function takes the minimum value.
With reference to the second aspect, an embodiment of the present application provides a third possible implementation manner of the second aspect, where the first determining module is specifically configured to:
calculating an error value between the first pixel location and the second pixel location;
transforming external parameters of the camera until an error value between the first pixel position and the second pixel position is a preset value;
and when the error value is a preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
With reference to the second aspect, embodiments of the present application provide a fourth possible implementation manner of the second aspect, where the apparatus further includes:
and the second determination module is used for determining the state information of the automatic driving automobile based on the current external parameters of the camera and the sensor data acquired by other sensors in the automatic driving automobile.
According to the calibration method and device provided by the embodiments of the application, the spatial position of a target object is first acquired, together with a current image of the target object captured by a camera mounted on an autonomous vehicle and the reference external parameters of the camera. A first pixel position of the target object in a reference image is then calculated from the spatial position of the target object and the reference external parameters, where the reference image is the image the camera captures when its external parameters equal the reference external parameters. Next, a second pixel position of the target object is extracted from the current image. Finally, the current external parameters of the camera are determined from the first pixel position and the second pixel position. By combining the first pixel position computed from the reference external parameters with the second pixel position extracted from the current image, the calibration method avoids the prior-art problem that, after the vehicle has driven for a period of time or on a bumpy road, the changed external parameters of the camera prevent the autonomous vehicle from accurately sensing its surroundings, and it improves the accuracy of that sensing.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a calibration method provided in an embodiment of the present application;
FIG. 2 is a flow chart illustrating another calibration method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another calibration method provided by the embodiments of the present application;
fig. 4 is a schematic structural diagram illustrating a calibration apparatus provided in an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an in-vehicle device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the scope of the claimed application but merely represents selected embodiments. All other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present application.
In general, after an autonomous vehicle has driven for a certain period of time or traveled on a bumpy road, the external parameters of its camera change. If the original external parameters continue to be used when sensing the surrounding environment, the vehicle can no longer perceive its surroundings accurately. In view of this problem, the calibration method and device provided in the embodiments of the present application determine the current external parameters of a camera mounted on the autonomous vehicle and improve the accuracy with which the vehicle senses its surroundings.
For the convenience of understanding the embodiments of the present application, a detailed description will be given to a calibration method disclosed in the embodiments of the present application.
As shown in fig. 1, a flowchart of a calibration method according to an embodiment of the present application includes the following specific steps:
s101, acquiring the space position of a target object, acquiring a current image of the target object shot by a camera installed on an automatic driving automobile, and acquiring reference external parameters of the camera.
Here, when calibrating with the method of this embodiment, the target object may be any object, for example a traffic light or an indicator sign. The spatial position of the target object is its coordinate information in a three-dimensional coordinate system; it may be obtained in real time by a laser radar mounted on the autonomous vehicle, or read from a pre-stored three-dimensional map, which is not limited in this embodiment of the present application.
After the target object is determined, the camera mounted on the autonomous vehicle photographs it to obtain a current image containing the target object. Of course, different target objects can be photographed to obtain multiple calibration results, which can then be averaged (or combined by other operations) to make the calibration more accurate.
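As a minimal sketch of the averaging operation just mentioned, the translation components of several hypothetical per-target calibration results could be combined as follows; the numbers are purely illustrative, and averaging the rotation components is more subtle (often done on quaternions), so it is omitted here:

```python
import numpy as np

# Three hypothetical calibration results, one per target object; each row is
# an estimated camera translation (metres). These values are illustrative.
estimates = np.array([
    [0.101, -0.049, 0.202],
    [0.099, -0.051, 0.198],
    [0.100, -0.050, 0.200],
])

# Element-wise mean over the per-target estimates.
averaged_t = estimates.mean(axis=0)
```

With more target objects, the average suppresses the per-detection noise of the individual calibration results.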
Here, the reference external parameters include the relative position between the camera and the laser radar, the relative position between the camera and the sound sensor, and the like. The reference external parameters may have been determined when the camera was initially installed, or recorded after the last calibration.
S102, calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera; the reference image is an image obtained by shooting the target object by the camera when the external parameter is the reference external parameter.
Here, the first pixel position of the target object in the reference image can be calculated from the spatial position of the target object and the reference external parameters of the camera. The reference image may be actually captured or virtually synthesized; either way, the calculation of the first pixel position is unaffected.
Further, the calculation process of the first pixel position specifically includes:
and inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in the reference image.
Here, the first objective function is a functional expression stored in advance for calculating a first pixel position of the object in the reference image.
The first objective function is a function with the first pixel position as a dependent variable, the external parameter of the camera and the spatial position of the target object as independent variables, and the spatial position of the reference sensor on the autonomous vehicle and the preset internal parameter of the camera as constant terms.
Here, since the target object is selected at random, its spatial position is an independent variable. The reference sensor on the autonomous vehicle may be a laser radar, a sound sensor, or another sensor, which is not particularly limited in the embodiments of the present application. The spatial position of the reference sensor can be determined from the positioning system of the autonomous vehicle and the relative position of the reference sensor within the vehicle. Since the reference sensor is stationary relative to the autonomous vehicle during calibration, its spatial position can serve as a constant term of the first objective function.
The internal parameters of the camera are determined by the camera itself, i.e. preset, and the preset internal parameters of the camera are used as constant terms of the first objective function. The preset internal parameters comprise a parameter matrix and a distortion coefficient.
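To make the role of these constant terms concrete, here is a minimal sketch of the kind of projection such a first objective function performs; the intrinsic matrix, rotation, and translation values are hypothetical, and lens distortion is omitted for brevity:

```python
import numpy as np

def project_to_pixel(point_world, R, t, K):
    """Project a 3D world point into pixel coordinates.

    R, t -- camera external parameters (rotation matrix, translation vector)
    K    -- 3x3 intrinsic parameter matrix (constant for a given camera)
    """
    p_cam = R @ point_world + t   # world frame -> camera frame (extrinsics)
    uvw = K @ p_cam               # camera frame -> homogeneous pixels (intrinsics)
    return uvw[:2] / uvw[2]       # perspective divide -> (u, v)

# Example: identity rotation, camera 5 m from the target along z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
uv = project_to_pixel(np.array([1.0, 0.5, 0.0]), R, t, K)
```

Only the transformation into the camera frame uses the external parameters (R, t); K stays fixed, which is exactly why the internal parameters can be treated as constant terms of the first objective function.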
S103, extracting a second pixel position of the target object in the current image.
Here, a pixel coordinate system is established for the current image, with the upper-left corner of the image as its origin. The pixel position of the target object in this coordinate system, i.e., the second pixel position, can then be extracted from the current image.
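As one hypothetical way to obtain such a second pixel position, the centroid of a detector's binary mask can be taken in the pixel coordinate system just described; the mask and image below are toy values, and the detector itself is outside this sketch:

```python
import numpy as np

def extract_pixel_position(mask):
    """Centroid (u, v) of the True pixels in a binary detection mask.

    The origin is the upper-left corner of the image; u runs along
    columns and v along rows, matching the pixel coordinate system
    established for the current image.
    """
    vs, us = np.nonzero(mask)
    return np.array([us.mean(), vs.mean()])

# Toy 6x6 "current image" with a 2x2 target at rows 2-3, columns 3-4.
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 3:5] = True
uv_second = extract_pixel_position(mask)
```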
And S104, determining the current external parameters of the camera according to the first pixel position and the second pixel position.
Here, the embodiments of the present application provide two methods for determining the current external parameters of the camera, and the specific methods are respectively described in detail below, which are not described herein again.
Optionally, based on the determined current external parameters of the camera, the following steps may be further performed:
and S105, determining the state information of the automatic driving automobile based on the current external parameters of the camera and sensor data acquired by other sensors in the automatic driving automobile.
Various sensors are arranged on the autonomous vehicle, and the surrounding environment is sensed by analyzing and computing the sensor data they collect, so that the vehicle can travel safely on the road.
For example, by using the image collected by the camera, the point cloud data collected by the laser radar, and the current external parameters of the camera (the relative position between the camera and the laser radar), state information about the environment of the autonomous vehicle can be determined, such as whether an indicator sign is ahead and, if so, what the sign's content means.
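A sketch of that camera-lidar matching step follows; the intrinsics and the camera-lidar relative pose (the current external parameters) are illustrative values, not taken from the patent:

```python
import numpy as np

# Illustrative intrinsics and camera-lidar relative pose.
K = np.array([[700.0, 0.0, 300.0],
              [0.0, 700.0, 200.0],
              [0.0, 0.0, 1.0]])
R_cam_lidar = np.eye(3)                    # rotation lidar -> camera
t_cam_lidar = np.array([0.0, -0.1, 0.2])   # translation lidar -> camera (m)

point_lidar = np.array([2.0, 1.0, 10.0])   # one lidar return (m)

# Transform the lidar point into the camera frame, then project it.
p_cam = R_cam_lidar @ point_lidar + t_cam_lidar
u, v = (K @ p_cam)[:2] / p_cam[2]
# (u, v) now indexes the camera image, so what is detected at that pixel
# can be attached to the lidar point when deciding category and state.
```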
According to the embodiment of the application, the current external parameters of the camera installed on the automatic driving automobile can be determined through the first pixel position of the target object obtained through calculation of the reference external parameters of the camera and the second pixel position of the target object extracted from the current image, and the accuracy of sensing the surrounding environment of the automatic driving automobile is improved.
As shown in fig. 2, a first method for determining current external parameters of a camera provided in the embodiment of the present application includes the following specific steps:
s201, forming a second objective function according to the first objective function and the second pixel position;
s202, calculating the minimum value of the second objective function;
s203, determining the external parameters of the camera as the current external parameters of the camera when the second objective function takes the minimum value.
Here, the second pixel position is subtracted from the first objective function to form a new function, i.e., the second objective function.
The external parameters of the camera, being the independent variables, are determined by minimizing the second objective function: the external parameters at which the second objective function attains its minimum value are the current external parameters of the camera. Of course, other optimization algorithms may also be used, for example gradient descent, which is not limited in the embodiments of the present application.
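One way the minimization could be realized is with Gauss-Newton steps on the squared reprojection error, one of the many optimizers the passage allows. The sketch below restricts the external parameters to the translation component, holds the rotation fixed at identity, and uses hypothetical intrinsics, target positions, and "true" current extrinsics:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def first_objective(t, X):
    # Pixel position of world point X for extrinsic translation t
    # (rotation held at identity to keep the sketch small).
    p = X + t
    return (K @ p)[:2] / p[2]

# Two hypothetical target objects; their observed ("second") pixel
# positions are generated from the unknown current extrinsics t_true.
targets = [np.array([1.0, 0.5, 0.0]), np.array([-0.5, 1.0, 1.0])]
t_true = np.array([0.1, -0.05, 5.2])
observed = [first_objective(t_true, X) for X in targets]

def residuals(t):
    # Second objective function: first objective minus the second
    # pixel positions, stacked over all targets.
    return np.concatenate([first_objective(t, X) - uv
                           for X, uv in zip(targets, observed)])

# Gauss-Newton on the summed squared residuals, with central-difference
# Jacobians; the iteration starts from the reference extrinsics.
t = np.array([0.0, 0.0, 5.0])
for _ in range(10):
    r = residuals(t)
    J = np.empty((r.size, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = 1e-6
        J[:, i] = (residuals(t + d) - residuals(t - d)) / 2e-6
    t = t - np.linalg.solve(J.T @ J, J.T @ r)
```

At the minimum the residuals vanish and t recovers the current extrinsic translation.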
As shown in fig. 3, a second method for determining the current external parameters of the camera provided in the embodiment of the present application includes the following specific steps:
s301, calculating an error value between the first pixel position and the second pixel position.
S302, external parameters of the camera are changed until an error value between the first pixel position and the second pixel position is a preset value.
And S303, when the error value is the preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
Here, the second pixel position obtained by extraction may be used as a reference value, and a difference value between the first pixel position and the second pixel position obtained by calculation may be calculated to obtain an error value of the first pixel position relative to the second pixel position.
Since the external parameters of the camera are independent variables, the resulting first pixel position can be changed by transforming the external parameters of the camera. And transforming the external parameters of the camera until the obtained error value between the first pixel position and the second pixel position reaches a preset value, and determining the transformed external parameters of the camera as the current external parameters of the camera.
After the error value between the first pixel position and the second pixel position reaches the preset value, the calculation can be continued so that the error value falls below the preset value, making the calibrated current external parameters more accurate.
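A deliberately simplified, one-dimensional sketch of this threshold-driven variant follows: only one translation component is adjusted, and the intrinsics, target depth, step size, and preset value are all illustrative:

```python
fx, cx, z = 800.0, 320.0, 5.0            # illustrative intrinsics and depth

def pixel_u(tx, x=1.0):
    # First pixel position (u component only) as a function of the
    # extrinsic translation component tx.
    return (fx * (x + tx) + cx * z) / z

u_observed = pixel_u(0.1)                # stands in for the second pixel position
preset = 0.5                             # preset error value (pixels)

# Transform the external parameter in small steps until the error value
# between the first and second pixel positions reaches the preset value.
tx, step = 0.0, 1e-3
while abs(pixel_u(tx) - u_observed) > preset:
    if pixel_u(tx) < u_observed:
        tx += step
    else:
        tx -= step
current_tx = tx                          # calibrated extrinsic component
```

As the text notes, continuing with a smaller step or a tighter preset value would refine the calibration further.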
The method comprises the steps of forming a second objective function by a first objective function stored in advance and a second pixel position of a target object extracted from a current image, and performing optimization operation on the second objective function to determine current external parameters of a camera installed on an automatic driving automobile; the current external parameters of the camera mounted on the autonomous vehicle may also be determined by causing an error value between the first pixel location and the second pixel location to reach a preset value. By the method, the accuracy of sensing the surrounding environment of the automatic driving automobile is improved.
Based on the same inventive concept, the embodiment of the present application further provides a calibration apparatus corresponding to the calibration method, and since the principle of the apparatus in the embodiment of the present application for solving the problem is similar to the calibration method described above in the embodiment of the present application, the implementation of the apparatus can refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 4, a calibration apparatus according to another embodiment of the present application includes:
an obtaining module 401, configured to obtain a spatial position of a target object, obtain a current image of the target object captured by a camera installed in an autonomous vehicle, and obtain a reference external parameter of the camera;
a calculating module 402, configured to calculate a first pixel position of the target object in the reference image according to the spatial position of the target object and the reference external parameter of the camera; the reference image is an image obtained by shooting a target object by the camera when the external parameters are reference external parameters;
an extracting module 403, configured to extract a second pixel position of the target object in the current image;
a first determining module 404, configured to determine a current external parameter of the camera according to the first pixel position and the second pixel position.
In an embodiment, the calculating module 402 is specifically configured to:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in a reference image;
the first objective function is a function with the first pixel position as a dependent variable, the external parameter of the camera and the spatial position of the target object as independent variables, and the spatial position of the reference sensor on the autonomous vehicle and the preset internal parameter of the camera as constant terms.
In another embodiment, the first determining module 404 is specifically configured to:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
and determining the external parameters of the camera when the second objective function takes the minimum value as the current external parameters of the camera.
In another embodiment, the first determining module 404 is specifically configured to:
calculating an error value between the first pixel location and the second pixel location;
changing external parameters of the camera until an error value between the first pixel position and the second pixel position is a preset value;
and when the error value is a preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
In another embodiment, the calibration apparatus further includes:
a second determination module 405, configured to determine the state information of the autonomous vehicle based on the current external parameters of the camera and sensor data collected by other sensors in the autonomous vehicle.
As shown in fig. 5, a schematic structural diagram of an on-board device provided in an embodiment of the present application includes: a processor 501, a memory 502, and a bus 503. The memory 502 stores machine-readable instructions executable by the processor 501; when the on-board device runs, the processor 501 and the memory 502 communicate via the bus 503, and the machine-readable instructions, when executed by the processor 501, perform the following:
acquiring the spatial position of a target object, acquiring a current image of the target object captured by a camera mounted on the autonomous vehicle, and acquiring reference external parameters of the camera;
calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera; the reference image is an image obtained by the camera shooting the target object when its external parameters are the reference external parameters;
extracting a second pixel position of the target object in the current image;
and determining the current external parameters of the camera according to the first pixel position and the second pixel position.
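The extraction step above is not specified further in the text; purely as an illustration (the function name and the thresholded-centroid approach are assumptions, not the patented method), a minimal detector for a bright calibration marker could be:

```python
import numpy as np

def extract_target_pixel(image, threshold=200):
    """Hypothetical extraction of the second pixel position: take the
    intensity-weighted centroid of pixels brighter than a threshold,
    e.g. a bright calibration marker in a grayscale image."""
    ys, xs = np.nonzero(image >= threshold)
    if xs.size == 0:
        return None                       # target not visible in the image
    weights = image[ys, xs].astype(float)
    u = np.average(xs, weights=weights)   # column index -> u
    v = np.average(ys, weights=weights)   # row index -> v
    return u, v

# Synthetic 480x640 image with a bright 3x3 patch centred at pixel (520, 340).
img = np.zeros((480, 640), dtype=np.uint8)
img[339:342, 519:522] = 255
u, v = extract_target_pixel(img)          # -> (520.0, 340.0)
```

A real system would use a detector matched to the actual target (corner, board, or learned detector); the only requirement from the method is that it yields a second pixel position comparable to the projected first pixel position.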
Optionally, in the method executed by the processor 501, calculating the first pixel position of the target object in the reference image according to the spatial position of the target object and the reference external parameters of the camera includes:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating the first pixel position of the target object in the reference image;
the first objective function is a function that takes the first pixel position as the dependent variable, takes the external parameters of the camera and the spatial position of the target object as independent variables, and takes the spatial position of the reference sensor on the autonomous vehicle and the preset internal parameters of the camera as constant terms.
Optionally, in the method executed by the processor 501, determining the current external parameters of the camera according to the first pixel position and the second pixel position includes:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
and determining the external parameters of the camera when the second objective function takes the minimum value as the current external parameters of the camera.
Optionally, in the method executed by the processor 501, determining the current external parameters of the camera according to the first pixel position and the second pixel position includes:
calculating an error value between the first pixel position and the second pixel position;
transforming the external parameters of the camera until the error value between the first pixel position and the second pixel position reaches a preset value;
and when the error value reaches the preset value, determining the transformed external parameters of the camera as the current external parameters of the camera.
Optionally, the method executed by the processor 501 further includes, after the current external parameters of the camera are determined according to the first pixel position and the second pixel position:
the state information of the autonomous vehicle is determined based on current external parameters of the camera and sensor data collected by other sensors in the autonomous vehicle.
The computer program product of the calibration method and apparatus provided in the embodiments of the present application includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the method embodiments, which are not described herein again.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is executed, the above calibration method can be performed, so that the current external parameters of a camera mounted on the autonomous vehicle can be determined, thereby improving the accuracy with which the autonomous vehicle senses its surrounding environment.
The functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part of it that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present application, intended to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed in the present application, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A calibration method, comprising:
acquiring a spatial position of a target object, acquiring a current image of the target object captured by a camera mounted on an automatic driving automobile, and acquiring a reference external parameter of the camera; the spatial position is obtained in real time by a laser radar installed on the automatic driving automobile;
calculating a first pixel position of the target object in a reference image according to the space position of the target object and the reference external parameters of the camera; the reference image is an image obtained by shooting the target object by the camera when the external parameter is the reference external parameter;
extracting a second pixel position of the target object in the current image;
determining the current external parameters of the camera according to the first pixel position and the second pixel position;
wherein calculating a first pixel position of the target object in a reference image according to the spatial position of the target object and the reference external parameters of the camera comprises:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in a reference image;
the first objective function is a function in which the first pixel position is used as a dependent variable, the external parameter of the camera and the spatial position of the target object are used as independent variables, and the spatial position of a reference sensor on the autonomous vehicle and a preset internal parameter of the camera are used as constant terms.
2. The method of claim 1, wherein determining the current external parameters of the camera based on the first pixel location and the second pixel location comprises:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
determining the external parameters of the camera as the current external parameters of the camera when the second objective function takes the minimum value;
forming a second objective function according to the first objective function and the second pixel position, including:
and subtracting the first objective function from the second pixel position to form a second objective function.
3. The method of claim 1, wherein determining the current external parameters of the camera based on the first pixel location and the second pixel location comprises:
calculating an error value between the first pixel location and the second pixel location;
transforming external parameters of the camera until an error value between the first pixel position and the second pixel position is a preset value;
and when the error value is a preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
4. The method of claim 1, wherein determining the current external parameters of the camera based on the first pixel location and the second pixel location further comprises:
and determining the state information of the automatic driving automobile based on the current external parameters of the camera and sensor data acquired by other sensors in the automatic driving automobile.
5. A calibration device, comprising:
the system comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring the spatial position of a target object, acquiring a current image of the target object shot by a camera arranged on an automatic driving automobile and acquiring a reference external parameter of the camera; the spatial position is obtained in real time by a laser radar installed on the automatic driving automobile;
the calculation module is used for calculating a first pixel position of the target object in a reference image according to the space position of the target object and the reference external parameters of the camera; the reference image is an image obtained by shooting the target object by the camera when the external parameter is the reference external parameter;
the extraction module is used for extracting a second pixel position of the target object in the current image;
the first determining module is used for determining the current external parameters of the camera according to the first pixel position and the second pixel position;
the calculation module is specifically configured to:
inputting the spatial position of the target object and the reference external parameters of the camera into a preset first objective function, and calculating to obtain a first pixel position of the target object in a reference image;
the first objective function is a function in which the first pixel position is used as a dependent variable, the external parameter of the camera and the spatial position of the target object are used as independent variables, and the spatial position of a reference sensor on the autonomous vehicle and a preset internal parameter of the camera are used as constant terms.
6. The apparatus of claim 5, wherein the first determining module is specifically configured to:
forming a second objective function according to the first objective function and the second pixel position;
calculating a minimum value of the second objective function;
determining the external parameters of the camera as the current external parameters of the camera when the second objective function takes the minimum value;
forming a second objective function according to the first objective function and the second pixel position, including:
and subtracting the first objective function from the second pixel position to form a second objective function.
7. The apparatus of claim 5, wherein the first determining module is specifically configured to:
calculating an error value between the first pixel location and the second pixel location;
transforming external parameters of the camera until an error value between the first pixel position and the second pixel position is a preset value;
and when the error value is a preset value, determining the corresponding transformed external parameters of the camera as the current external parameters of the camera.
8. The apparatus of claim 5, further comprising:
and the second determination module is used for determining the state information of the automatic driving automobile based on the current external parameters of the camera and the sensor data acquired by other sensors in the automatic driving automobile.
CN201811458563.XA 2018-11-30 2018-11-30 Calibration method and device Active CN109544648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458563.XA CN109544648B (en) 2018-11-30 2018-11-30 Calibration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811458563.XA CN109544648B (en) 2018-11-30 2018-11-30 Calibration method and device

Publications (2)

Publication Number Publication Date
CN109544648A CN109544648A (en) 2019-03-29
CN109544648B true CN109544648B (en) 2021-07-13

Family

ID=65852009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458563.XA Active CN109544648B (en) 2018-11-30 2018-11-30 Calibration method and device

Country Status (1)

Country Link
CN (1) CN109544648B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109946703B (en) * 2019-04-10 2021-09-28 北京小马智行科技有限公司 Sensor attitude adjusting method and device
US11182623B2 (en) * 2019-04-30 2021-11-23 Baidu Usa Llc Flexible hardware design for camera calibration and image pre-procesing in autonomous driving vehicles

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991704A * 2017-03-24 2017-07-28 深圳市圆周率软件科技有限责任公司 Multi-scene calibration method and system for a panoramic camera
CN107328411A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Vehicle positioning system and automatic driving vehicle

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927754B * 2014-04-21 2016-08-31 大连理工大学 Calibration method for a vehicle-mounted camera
CN104361599B * 2014-11-25 2017-08-01 深圳市哈工大交通电子技术有限公司 Calibration and image-capture method for a pan-tilt camera
US10529083B2 (en) * 2016-12-08 2020-01-07 Lighmetrics Technologies Pvt. Ltd. Methods and systems for estimating distance of an object from a moving vehicle
CN107437264B (en) * 2017-08-29 2020-06-19 重庆邮电大学 Automatic detection and correction method for external parameters of vehicle-mounted camera
CN108226906B * 2017-11-29 2019-11-26 深圳市易成自动驾驶技术有限公司 Calibration method, device and computer-readable storage medium
CN108564629A * 2018-03-23 2018-09-21 广州小鹏汽车科技有限公司 Calibration method and system for vehicle-mounted camera external parameters

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991704A * 2017-03-24 2017-07-28 深圳市圆周率软件科技有限责任公司 Multi-scene calibration method and system for a panoramic camera
CN107328411A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Vehicle positioning system and automatic driving vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Investigation on image precision for camera parameters using target and target-free calibration; Hung-Lin Lin et al.; 2016 International Conference on Advanced Materials for Science and Engineering; 2017-02-06; pp. 271-274 *
Research on Camera Calibration Methods in Computer Vision Measurement Based on Matlab (in Chinese); Zhang Weibo et al.; Digital Technology and Application; 2014-12-31; pp. 53-56 *

Also Published As

Publication number Publication date
CN109544648A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
EP3620823B1 (en) Method and device for detecting precision of internal parameter of laser radar
CN112086010B (en) Map generation method, map generation device, map generation equipment and storage medium
US11731649B2 (en) High precision position estimation method through road shape classification-based map matching and autonomous vehicle thereof
CN109544648B (en) Calibration method and device
CN111127584A (en) Method and device for establishing visual map, electronic equipment and storage medium
US20150003669A1 (en) 3d object shape and pose estimation and tracking method and apparatus
US10032084B2 (en) Image processing apparatus
CN110796118B (en) Method for obtaining attitude adjustment parameters of transportation equipment, transportation equipment and storage medium
JP2010224755A (en) Moving object and position estimating method of the same
CN111539484A (en) Method and device for training neural network
CN113762406A (en) Data mining method and device and electronic equipment
WO2020049737A1 (en) Driving skill evaluation system, method, and program
CN110843775B (en) Obstacle identification method based on pressure sensor
CN115755869A (en) Automatic parking test method and device, storage medium and equipment
CN113469042A (en) Truth value data determination, neural network training and driving control method and device
CN112991550A (en) Obstacle position detection method and device based on pseudo-point cloud and electronic equipment
CN109710594B (en) Map data validity judging method and device and readable storage medium
KR20230120615A (en) Apparatus and method for determining location of pedestrain
CN110827337B (en) Method and device for determining posture of vehicle-mounted camera and electronic equipment
CN111860512A (en) Vehicle identification method and device, electronic equipment and computer readable storage medium
JP6232883B2 (en) Own vehicle position recognition device
US20240160222A1 (en) Method and system for localizing a mobile robot
CN110032172B (en) Vehicle driving control system precision detection method and device
CN116724248A (en) System and method for generating a modeless cuboid
CN114359386A (en) Point cloud data processing method, processing device, storage medium and processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant