External parameter calibration method and device for binocular camera (CN116917936A)

Publication number: CN116917936A
Application number: CN202180094173.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 黄海晖, 何启盛, 张建军
Applicant/Assignee: Huawei Technologies Co., Ltd.
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The application relates to the field of artificial intelligence, and in particular to the field of automatic driving, and provides a method and an apparatus for calibrating external parameters of a binocular camera. The method includes: acquiring a binocular image captured by a binocular camera; extracting m straight lines from each of a first image and a second image of the binocular image; reconstructing a plurality of matched straight lines in the first image and the second image into a three-dimensional space based on the external parameters of the binocular camera to obtain reconstructed straight lines, where the plurality of matched straight lines in the binocular image are projections of a plurality of straight lines in the shooting scene; and adjusting the external parameters of the binocular camera according to a reconstruction error, where the reconstruction error is determined according to the positional relationship between the reconstructed straight lines and the positional relationship between the straight lines in the shooting scene. According to this scheme, the external parameters of the binocular camera are adjusted through geometric constraints among a plurality of straight lines in the three-dimensional space, which improves the accuracy of the external parameter calibration of the binocular camera.

Description

External parameter calibration method and device for binocular camera

Technical Field
The application relates to the field of data processing, in particular to a method and a device for calibrating external parameters of a binocular camera.
Background
Camera calibration refers to the process of acquiring camera parameters. The camera parameters include internal parameters, which are parameters of the camera itself, and external parameters, which are parameters related to the installation position of the camera, such as the pitch angle, roll angle, and yaw angle.
A binocular camera can acquire dense depth information and thereby implement functions such as ranging a target object. However, because the baseline of a binocular camera is short, small angular variations in the external parameters can significantly affect the results produced by the binocular camera, for example, ranging results. Therefore, the accuracy of the external parameter calibration directly influences the accuracy of the working results of the binocular camera.
External parameter calibration of a binocular camera on a traditional production line usually depends on targets, such as a checkerboard calibration board or a two-dimensional code calibration board. In one scheme, the angle of the target is adjusted manually or by a mechanical arm so that the binocular camera obtains images of the checkerboard calibration board at various angles, and the external parameters of the binocular camera are calibrated accordingly. This scheme is complex and time-consuming to operate and relies on expensive equipment. In another scheme, a large number of targets are arranged, the binocular camera is mounted on a vehicle, and external parameter calibration is completed while the vehicle is traveling. This scheme requires maintenance of a large number of targets, and high-precision calibration of the targets is difficult to achieve.
In addition, in an existing scheme, feature points may be extracted and matched between the two cameras of the binocular camera, and the external parameters may be calibrated through the epipolar constraint relationship. However, when the number of feature points in the environment is small or the feature points are unstable and uncontrollable, it is difficult for this scheme to guarantee the accuracy of the calibration result. For example, when the environment in a factory is not controlled, this scheme cannot guarantee that each calibration yields an accurate result, that is, the stability of the calibration result is difficult to ensure, which affects the takt time of the production line.
Therefore, how to improve the accuracy of the external parameter calibration of the binocular camera is a problem to be solved.
Disclosure of Invention
The application provides a method and a device for calibrating external parameters of a binocular camera, which are used for adjusting the external parameters of the binocular camera through geometric constraint among a plurality of straight lines in a three-dimensional space and improving the precision of the external parameter calibration of the binocular camera.
In a first aspect, a method for calibrating external parameters of a binocular camera is provided, the method comprising: acquiring a first image and a second image, wherein the first image is obtained by shooting a shooting scene by a first camera in a binocular camera, and the second image is obtained by shooting the shooting scene by a second camera in the binocular camera; extracting m straight lines from the first image and the second image respectively, wherein m is an integer greater than 1, and the m straight lines of the first image and the m straight lines of the second image have a corresponding relation; reconstructing n straight lines of the m straight lines of the first image and n straight lines of the m straight lines of the second image into a three-dimensional space based on external parameters of the binocular camera to obtain n reconstructed straight lines, wherein the n straight lines of the first image and the n straight lines of the second image are projections of the n straight lines in a shooting scene, n is more than 1 and less than or equal to m, and n is an integer; and adjusting external parameters of the binocular camera according to a reconstruction error, wherein the reconstruction error is determined according to the position relationship among the n lines after reconstruction and the position relationship among the n lines in the shooting scene.
The first image and the second image are two images in the binocular image, namely, two images synchronously shot by two cameras in the binocular camera.
The shooting scene comprises one or more calibration objects, and the calibration objects refer to objects shot by the binocular camera. That is, the first image and the second image comprise imaging of the calibration object.
Illustratively, the calibration object includes a horizontal object or a vertical object, or the like.
For example, the horizontal object includes a lane line.
For example, the vertical object includes a rod-shaped object such as a pole or a column.
There is a correspondence between the m straight lines of the first image and the m straight lines of the second image, which means that the m straight lines of the first image and the m straight lines of the second image are projections of the m straight lines in the photographed scene. Projection of m lines in a photographed scene is also understood as imaging of m lines in a photographed scene in a binocular camera.
For different binocular images, m may be the same or different. That is, the number of straight lines extracted in different binocular images may be the same or different.
N may be the same or different for different binocular images. That is, the number of lines reconstructed in different binocular images may be the same or different.
The obtaining of the reconstructed straight line can also be understood as obtaining the spatial position of the reconstructed n straight lines. For example, the binocular camera may be an onboard camera, and the spatial positions of the n lines after reconstruction may be represented by coordinates of the n lines in the own vehicle coordinate system.
Adjusting the external parameters of the binocular camera according to the reconstruction errors may be adjusting the external parameters of the binocular camera according to the reconstruction errors of one or more frames of binocular images.
In the embodiment of the application, the position relation among the straight lines in the shooting scene is used as a geometric constraint, the external parameters of the binocular camera are adjusted according to the difference between the position relation among the reconstructed n straight lines and the position relation among the n straight lines in the shooting scene, the position coordinates of the straight lines in the shooting scene in the three-dimensional space are not required to be accurately positioned, the influence of the accuracy of the position coordinates of the straight lines in the shooting scene on the calibration result is avoided, and the calibration accuracy of the external parameters of the binocular camera is improved.
In addition, in the scheme of the embodiment of the application, the calibration can be completed based only on the positional relationship between two straight lines in the shooting scene, which reduces the amount of calculation and improves the calibration efficiency.
In addition, the scheme of the embodiment of the application can further improve the calibration precision by adding geometric constraint.
In addition, the scheme of the embodiment of the application has strong generalization capability and is suitable for various calibration scenes. Taking a vehicle-mounted binocular camera as an example, the calibration object can adopt common elements in an open road, such as a street lamp post or a pavement line, and the like, and the calibration scene is not required to be arranged in advance, for example, a target is not required to be preset, so that the cost is reduced.
With reference to the first aspect, in certain implementations of the first aspect, adjusting the extrinsic parameters of the binocular camera according to the reconstruction error includes: and adjusting the external parameters of the binocular camera according to the sum of the reconstruction errors of the multi-frame binocular images.
Or, adjusting external parameters of the binocular camera according to the reconstruction error, including: and adjusting the external parameters of the binocular camera according to the average value of the reconstruction errors of the multi-frame binocular images.
Because certain errors may exist in the straight line detection, the external parameters of the binocular camera are adjusted through the accumulated result of the reconstruction errors of the multi-frame binocular images in the embodiment of the application, the influence caused by the errors of the straight line detection can be reduced, and the accuracy of the external parameter calibration of the binocular camera is improved.
With reference to the first aspect, in certain implementations of the first aspect, the reconstruction error includes at least one of the following: an angle error between the reconstructed n straight lines or a distance error between the reconstructed n straight lines. The angle error between the reconstructed n straight lines is determined according to a difference between an angle between at least two of the reconstructed n straight lines and an angle between the corresponding at least two of the n straight lines in the shooting scene; the distance error between the reconstructed n straight lines is determined according to a difference between a distance between at least two of the reconstructed n straight lines and a distance between the corresponding at least two of the n straight lines in the shooting scene.
With reference to the first aspect, in certain implementations of the first aspect, the at least two straight lines in the photographed scene include at least two straight lines that are parallel to each other.
Adjusting the external parameters according to the parallelism (angle) error and the distance error ensures the accuracy of the external parameters. Meanwhile, the scheme of the embodiment of the application can calibrate the external parameters of the binocular camera based on two parallel straight lines with a known spacing in the shooting scene, which reduces the number of error terms in the reconstruction error, further reduces the amount of calculation, and increases the speed at which the external parameters are adjusted, that is, improves the calibration efficiency of the external parameters. In addition, because the scheme of the embodiment of the application can calibrate the external parameters of the binocular camera using only two parallel straight lines with a known spacing in the shooting scene, the positions of the straight lines in the three-dimensional space do not need to be accurately located and no target needs to be preset, which lowers the requirements on the shooting scene and reduces the calibration cost.
With reference to the first aspect, in certain implementations of the first aspect, extracting m lines in the first image and the second image, respectively, includes: respectively carrying out instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image; m straight lines are extracted from the instance in the first image and the instance in the second image respectively, and the correspondence between the m straight lines in the first image and the m straight lines in the second image is determined according to the correspondence between the instance in the first image and the instance in the second image.
In the embodiment of the application, the corresponding relation between the straight lines is determined by the corresponding relation between the examples in the two images, so that the accuracy of straight line matching can be improved, the accuracy of external parameter calibration is improved, the calculation complexity is reduced, and the efficiency of external parameter calibration is improved.
With reference to the first aspect, in certain implementations of the first aspect, extracting m straight lines in the instance in the first image and the instance in the second image respectively includes: extracting a plurality of original straight lines in the instances of the first image and the second image respectively; fitting a plurality of original straight lines of the same side edge of an instance in the first image to one target straight line of that side edge of the instance in the first image; and fitting a plurality of original straight lines of the same side edge of an instance in the second image to one target straight line of that side edge of the instance in the second image, where the m straight lines belong to the target straight lines.
In the scheme of the embodiment of the application, the original straight lines on the same side of the example are fitted, so that a more accurate target straight line can be obtained, and the target straight line is utilized to calibrate the external parameters of the binocular camera, thereby being beneficial to improving the accuracy of the calibration result of the external parameters of the binocular camera.
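As an illustration of this fitting step (not part of the claims), the sketch below fuses several detected segments lying on the same instance edge into a single target straight line by a total-least-squares fit; the function name, the (x1, y1, x2, y2) segment format, and the example values are assumptions.

```python
import numpy as np

def fit_target_line(segments):
    """Fit one target straight line to several segments of the same instance edge.

    segments: sequence of (x1, y1, x2, y2) pixel endpoints.
    Returns (point, direction): a point on the fitted line and its unit direction.
    """
    pts = np.asarray(segments, dtype=float).reshape(-1, 2)  # stack all endpoints
    centroid = pts.mean(axis=0)
    # Principal direction of the endpoint cloud = direction of the fitted line.
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction

# Example: three short detections along the same pole edge.
segs = [(100, 50, 102, 150), (101, 160, 103, 260), (103, 270, 105, 370)]
p, d = fit_target_line(segs)
```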
With reference to the first aspect, in some implementations of the first aspect, performing instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image, respectively, includes: respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image, wherein the semantic segmentation result of the first image comprises a horizontal object or a vertical object in the first image, and the semantic segmentation result of the second image comprises a horizontal object or a vertical object in the second image; performing instance segmentation on the first image based on the semantic segmentation result of the first image to obtain an instance in the first image, and performing instance segmentation on the second image based on the semantic segmentation result of the second image to obtain an instance in the second image.
In the embodiment of the application, the horizontal objects or the vertical objects in the image are distinguished through semantic segmentation, so that the adjustment of the external parameters of the binocular camera can be realized by utilizing geometric constraints, such as vertical constraints, between straight lines in the horizontal objects or the vertical objects in the shooting scene. If the binocular camera is a vehicle-mounted camera, the roadway line or the rod-shaped object and the like are common in an open road, calibration objects in the open road can be adopted to realize adjustment of external parameters of the binocular camera, a calibration site is not required to be arranged in advance, and cost is reduced.
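One plausible way to obtain instances from a semantic segmentation result, offered only as a hedged sketch: split the binary mask of one class (for example, all pole pixels) into connected components and treat each component as one instance. The OpenCV call is real; the surrounding function and the mask format are assumptions for illustration.

```python
import cv2

def instances_from_semantic_mask(class_mask):
    """Split a binary semantic mask (uint8, 255 where the class is present) into instances.

    Returns the number of instances and a label image in which each connected
    region carries its own instance id (background is label 0).
    """
    num_labels, labels = cv2.connectedComponents(class_mask)
    return num_labels - 1, labels
```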
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and controlling a display to display the calibration condition of the external parameters of the binocular camera.
For example, the display may be an in-vehicle display.
According to the scheme provided by the embodiment of the application, the current calibration condition can be displayed in real time, so that a user can know the current calibration progress, and the user experience is improved.
With reference to the first aspect, in certain implementations of the first aspect, the calibration condition of the binocular camera external parameters includes at least one of the following: the current calibration progress, the current reconstruction error condition, or the reconstructed p straight lines, where the reconstructed p straight lines are obtained by reconstructing p straight lines of the m straight lines of the first image and p straight lines of the m straight lines of the second image into the three-dimensional space based on the current external parameters of the binocular camera, p is more than 1 and less than or equal to m, and p is an integer.
For example, the current external parameters of the binocular camera may be the adjusted external parameters of the binocular camera.
Alternatively, the external parameters of the current binocular camera may be the external parameters of the binocular camera that are optimal in the adjustment process. The external parameters of the binocular camera that are optimal in the adjustment process may be external parameters that minimize the reconstruction error in the adjustment process.
In the embodiment of the application, the calibration result is visualized, and the three-dimensional space position of the reconstructed straight line is displayed, so that the user can intuitively feel the current calibration condition.
With reference to the first aspect, in certain implementations of the first aspect, the current calibration progress includes at least one of the following: the current external parameters of the binocular camera or the current calibration completion degree.
With reference to the first aspect, in certain implementations of the first aspect, the current reconstruction error condition includes at least one of the following: the current reconstruction error, the current distance error, or the current angle error.
In a second aspect, an apparatus for binocular camera external parameter calibration is provided, the apparatus including: an acquisition unit, configured to acquire a first image and a second image, where the first image is obtained by a first camera in a binocular camera photographing a shooting scene, and the second image is obtained by a second camera in the binocular camera photographing the shooting scene; and a processing unit, configured to extract m straight lines from the first image and the second image respectively, where m is an integer greater than 1, and the m straight lines of the first image and the m straight lines of the second image have a correspondence; reconstruct n straight lines of the m straight lines of the first image and n straight lines of the m straight lines of the second image into a three-dimensional space based on external parameters of the binocular camera to obtain n reconstructed straight lines, where the n straight lines of the first image and the n straight lines of the second image are projections of n straight lines in the shooting scene, n is more than 1 and less than or equal to m, and n is an integer; and adjust the external parameters of the binocular camera according to a reconstruction error, where the reconstruction error is determined according to the positional relationship among the n reconstructed straight lines and the positional relationship among the n straight lines in the shooting scene.
Optionally, as an embodiment, the reconstruction error includes at least one of the following: an angle error between the reconstructed n straight lines or a distance error between the reconstructed n straight lines. The angle error between the reconstructed n straight lines is determined according to a difference between an angle between at least two of the reconstructed n straight lines and an angle between the corresponding at least two of the n straight lines in the shooting scene; the distance error between the reconstructed n straight lines is determined according to a difference between a distance between at least two of the reconstructed n straight lines and a distance between the corresponding at least two of the n straight lines in the shooting scene.
Optionally, as an embodiment, the at least two straight lines in the shooting scene include at least two straight lines parallel to each other.
Optionally, as an embodiment, the processing unit is specifically configured to: respectively carrying out instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image; m straight lines are extracted from the instance in the first image and the instance in the second image respectively, and the correspondence between the m straight lines in the first image and the m straight lines in the second image is determined according to the correspondence between the instance in the first image and the instance in the second image.
Optionally, as an embodiment, the processing unit is specifically configured to: respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image, wherein the semantic segmentation result of the first image comprises a horizontal object or a vertical object in the first image, and the semantic segmentation result of the second image comprises a horizontal object or a vertical object in the second image; performing instance segmentation on the first image based on the semantic segmentation result of the first image to obtain an instance in the first image, and performing instance segmentation on the second image based on the semantic segmentation result of the second image to obtain an instance in the second image.
Optionally, as an embodiment, the apparatus further includes: and the display unit is used for displaying the calibration condition of the external parameters of the binocular camera.
Optionally, as an embodiment, the calibration condition of the external parameters of the binocular camera includes at least one of the following: the current calibration progress, the current reconstruction error condition, or the reconstructed p straight lines, where the reconstructed p straight lines are obtained by reconstructing p straight lines of the m straight lines of the first image and p straight lines of the m straight lines of the second image into the three-dimensional space based on the current external parameters of the binocular camera, p is more than 1 and less than or equal to m, and p is an integer.
Optionally, as an embodiment, the current calibration progress includes at least one of the following: the current external parameters of the binocular camera or the current calibration completion degree.
Optionally, as an embodiment, the current reconstruction error condition includes at least one of the following: the current reconstruction error, the current distance error, or the current angle error.
Optionally, the binocular camera is a vehicle-mounted camera, and the vehicle carrying the binocular camera may be in a stationary state or a moving state.
In a third aspect, there is provided an apparatus for binocular camera extrinsic calibration, the apparatus comprising a processor coupled to a memory for storing a computer program or instructions, the processor for executing the computer program or instructions stored by the memory, such that the method of the first aspect or any one of the implementations of the first aspect is performed.
Optionally, the apparatus includes one or more processors.
Optionally, the apparatus may further include a memory coupled to the processor.
Optionally, the apparatus may include one or more memories.
Optionally, the memory may be integrated with the processor or provided separately.
Optionally, the apparatus may further comprise a data interface.
In a fourth aspect, a computer readable medium is provided, the computer readable medium storing program code for execution by a device, the program code comprising instructions for performing the method of the first aspect or any implementation of the first aspect.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any of the implementations of the first aspect.
In a sixth aspect, a chip is provided, where the chip includes a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory, to perform the method in the first aspect or any implementation of the first aspect.
Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored on the memory, where the instructions, when executed, are configured to perform the method in the first aspect or any implementation manner of the first aspect.
In a seventh aspect, a terminal is provided, including the apparatus in the second aspect or any implementation of the second aspect.
Optionally, the terminal further comprises a binocular camera.
The terminal may be a vehicle, for example.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a calibration method of binocular camera external parameters provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a principle of binocular camera imaging provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of another calibration method of binocular camera external parameters provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a calibration site provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a semantic segmentation result of a binocular image provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of example labeling results of binocular images provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a target line in a binocular image provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the spatial position of a reconstructed straight line provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the current calibration scenario provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a calibration device for external parameters of a binocular camera according to an embodiment of the present application;
Fig. 12 is a schematic diagram of another calibration device for external parameters of a binocular camera according to an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
The scheme of the embodiment of the application can be applied to an intelligent device. An intelligent device refers to any device, appliance, or machine having computing and processing capability. The intelligent device in the embodiment of the application may be a robot, an autonomous vehicle, an intelligent driver-assisted vehicle, an unmanned aerial vehicle, an intelligent assisted aircraft, a smart home device, or the like. The application does not limit the intelligent device. Any device that can be equipped with a binocular camera falls within the scope of the intelligent device of the present application.
The method provided by the embodiment of the application can be applied to automatic driving, unmanned aerial vehicle navigation, robot navigation, industrial non-contact detection, three-dimensional reconstruction, virtual reality and other scenes needing binocular camera calibration. Specifically, the method of the embodiment of the application can be applied to an automatic driving scene, and the automatic driving scene is briefly introduced below.
As shown in fig. 1, the vehicle 110 may be configured in a fully or partially autonomous driving mode. For example, while in the automatic driving mode, the vehicle 110 may control itself, may determine the current state of the vehicle and its surrounding environment, determine possible behaviors of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle performs the possible behaviors, and control the vehicle 110 based on the determined information. While the vehicle 110 is in the automatic driving mode, the vehicle 110 may be placed into operation without interaction with a person.
The mobile data center (mobile data center, MDC) 120 serves as an automatic driving computing platform for processing various sensor data to provide decision support for automatic driving.
The vehicle 110 includes a sensor system. The sensor system includes several sensors that sense information about the environment surrounding the vehicle 110. For example, the sensor system may include a positioning system (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (inertial measurement unit, IMU), a radar, a laser rangefinder, a binocular camera, and the like. It should be appreciated that although only the binocular camera 111 is shown in fig. 1, other sensors may also be included in the vehicle 110.
To obtain a better sensing result, the information of a plurality of sensors needs to be fused. Specifically, different sensors can be unified under the same coordinate system through external parameters among the sensors, so that fusion of information of a plurality of sensors is realized.
The calibration module 121 is used to determine the external parameters of the sensor. In an embodiment of the present application, the sensor system includes a binocular camera, and the calibration module 121 is configured to determine external parameters of the binocular camera. As shown in fig. 1, the calibration module 121 may implement calibration of the external parameters of the binocular camera according to the binocular image acquired by the binocular camera.
The upper layer function module 122 may implement a corresponding function based on external parameters of the binocular camera. That is, the external parameter calibration result of the binocular camera may be provided to an upper layer service of the autopilot. For example, the ranging function module may determine a distance between the obstacle and the vehicle from images acquired by the binocular camera based on external parameters of the binocular camera. For another example, the obstacle avoidance function may identify, evaluate, and avoid or otherwise traverse potential obstacles in the environment of the vehicle from images acquired by the binocular camera based on external parameters of the binocular camera.
Further, the current calibration condition may be displayed through a human-machine interaction interface (human machine interface, HMI) 130 on the vehicle.
For example, the HMI 130 may be an on-board display.
It should be noted that fig. 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship among the devices, apparatuses, modules, and the like shown in the figure does not constitute any limitation. For example, in fig. 1 the calibration module 121 and the upper-layer function module 122 are located in the MDC 120; in other cases, the calibration module 121 or the upper-layer function module 122 may also be located in another processor of the vehicle 110. Alternatively, some of the processes described above may be performed by a processor disposed within the vehicle 110, and others performed by a remote processor. As another example, in fig. 1 the MDC 120 is located outside the vehicle 110 and may communicate wirelessly with the vehicle 110; in other cases, the MDC may be located inside the vehicle 110. The HMI 130 may be located within the vehicle 110.
The scheme of the embodiment of the application can be applied to the calibration module 121. The scheme of the embodiment of the application can be executed by the calibration module 121. By adopting the scheme of the embodiment of the application, the external parameters of the binocular camera can be calibrated, the external parameter calibration efficiency of the binocular camera is improved, the external parameter calibration value of the binocular camera in the system is updated, and high-precision external parameters are provided for upper-layer business, so that the accuracy of the upper-layer business is improved, and the automatic driving performance is further improved.
In order to facilitate understanding of the embodiments of the present application, concepts related to the embodiments of the present application are described below for illustration.
Camera calibration (camera calibration).
Based on the imaging principle of the camera, a corresponding relation exists between a three-dimensional space point in the geometric model imaged by the camera and a two-dimensional image point on the image plane, and the corresponding relation is determined by parameters of the camera. The process of obtaining parameters of the camera is called camera calibration. The imaging principle of the camera is prior art, and this is not described in detail herein.
As an example, assume that a three-dimensional space point in the geometric model of camera imaging is denoted as X_W, and the corresponding two-dimensional image point on the image plane is denoted as X_P. The relationship between the three-dimensional space point X_W and the two-dimensional image point X_P can be expressed as follows:
X_P = M X_W
where M denotes the conversion matrix between the three-dimensional space point X_W and the two-dimensional image point X_P, which may be referred to as a projection matrix. Some elements of the projection matrix M characterize the parameters of the camera. Camera calibration is the process of acquiring the projection matrix M.
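As an informal numerical illustration of the relation X_P = M X_W (the intrinsic values and the point are made up, not taken from the patent), M can be formed as the product of an intrinsic matrix K and an extrinsic matrix [R | t] and applied to a homogeneous 3D point:

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],   # assumed intrinsics: focal lengths and principal point
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])  # assumed extrinsics [R | t]
M = K @ Rt                                      # 3x4 projection matrix

X_W = np.array([1.0, 2.0, 10.0, 1.0])  # homogeneous 3D point
x = M @ X_W
X_P = x[:2] / x[2]                      # pixel coordinates (u, v)
```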
The parameters of the camera include internal parameters and external parameters. The internal parameters are parameters of the camera itself, such as the focal length. The external parameters are parameters related to the installation position of the camera, such as the pitch angle (pitch), roll angle (roll), and yaw angle (yaw).
The conversion matrix corresponding to the internal reference may be referred to as an internal reference conversion matrix, and the conversion matrix corresponding to the external reference may be referred to as an external reference conversion matrix.
Camera calibration generally requires a calibration reference (which may also be referred to as a calibration object or a reference object). The calibration reference represents the object photographed by the camera during the camera calibration process.
For example, in the above example, the three-dimensional space point X_W may be the coordinates of the calibration reference in the world coordinate system, and the two-dimensional image point X_P may be the two-dimensional coordinates of the calibration reference on the image plane of the camera.
In image measurement or machine vision application, calibration of camera parameters is a very critical link, and accuracy of a calibration result directly influences accuracy of a result generated by camera work.
A binocular camera may also be referred to as a stereo camera or a binocular sensor. The binocular camera includes two cameras: a left camera and a right camera. A binocular camera can obtain depth information of a scene and reconstruct the three-dimensional shape and position of surrounding objects. The purpose of binocular camera calibration is mainly to obtain the internal parameters and external parameters of the left and right cameras. The external parameters of the left and right cameras refer to the relative positional relationship between the left and right cameras, for example, the translation vector and rotation matrix of the right camera relative to the left camera. The rotation matrix may also be expressed as a pitch angle (pitch), a roll angle (roll), and a yaw angle (yaw).
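For illustration only, a rotation matrix can be assembled from the pitch, roll, and yaw angles mentioned above; the composition order (yaw, then pitch, then roll) is an assumed convention rather than one prescribed by the patent:

```python
import numpy as np

def rotation_from_euler(pitch, roll, yaw):
    """Rotation matrix of the right camera relative to the left camera (angles in radians)."""
    cx, sx = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])   # roll about x
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # pitch about y
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])   # yaw about z
    return Rz @ Ry @ Rx
```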
The external parameter calibration of the binocular camera is a key link in image measurement or machine vision application, and the accuracy of a calibration result directly influences the accuracy of a result generated by the operation of the binocular camera.
The embodiment of the application provides a method and a device for calibrating external parameters of a binocular camera, which can improve the precision of the external parameter calibration of the binocular camera.
Fig. 2 is a schematic diagram of a method 200 for calibrating external parameters of a binocular camera according to an embodiment of the present application. The method 200 includes steps S210 to S240.
For example, the method 200 may be applied to calibration of an in-vehicle camera, the method 200 may be performed by a calibration module, which may be located on an in-vehicle computer platform.
S210, at least one frame of binocular image shot by the binocular camera is acquired.
The binocular image in the embodiment of the application refers to two images synchronously shot by two cameras in the binocular camera. Two images captured simultaneously can also be understood as two images captured at the same time. For example, the time stamps of two of the binocular images are identical. In this case, acquiring a binocular image captured by a binocular camera may also be understood as acquiring two images of the same timestamp captured by the binocular camera.
Any binocular image of the at least one frame of binocular images includes a first image and a second image. The first image is obtained by shooting a shooting scene by a first camera in the binocular camera, and the second image in the binocular image is obtained by shooting the shooting scene by a second camera in the binocular camera.
The shooting scene comprises one or more calibration objects, and the calibration objects refer to objects shot by the binocular camera. That is, the first image and the second image comprise imaging of the calibration object.
Illustratively, the calibration object includes a horizontal object or a vertical object, or the like.
For example, the horizontal object includes a lane line.
For example, the vertical object includes a rod-shaped object such as a pole or a column.
The first camera may be a left camera and the second camera may be a right camera. Alternatively, the first camera may be a right camera, and the second camera may be a left camera, which is not limited in the embodiment of the present application. The image captured by the left camera may also be referred to as a left eye image, and the image captured by the right camera may also be referred to as a right eye image.
It should be noted that, the "first" and "second" in the "first image" and the "second image" in the embodiment of the present application are only used to distinguish different images in a frame of binocular image, and have no other limiting effect. The first image of the different binocular images is a different image and the second image of the different binocular images is a different image.
The at least one frame of binocular image may be obtained by photographing different photographing scenes or may be obtained by photographing the same photographing scene.
In the embodiment of the application, the shooting scenes are the same, namely the calibration objects in the shooting scenes are the same, and the shooting scenes are different, namely the calibration objects in the shooting scenes are different. That is, the markers in different binocular images may be the same or different.
For example, the binocular camera may be an on-board camera, and the at least one frame of binocular image may be a plurality of frames of binocular images captured during a vehicle traveling on a calibrated field.
For brevity and clarity of description, only one frame of binocular image is illustrated in step S220 to step S240, and other binocular images may be processed in the same manner.
S220, extracting m straight lines from the first image and the second image respectively, wherein m is an integer greater than 1.
There is a correspondence between the m straight lines of the first image and the m straight lines of the second image.
The straight line in the shooting scene is a straight line in the three-dimensional space. The imaging of the straight line in the three-dimensional space in the image coordinate system is the straight line in the image. Alternatively, the straight line in the image is a projection of the straight line in three-dimensional space.
There is a correspondence between the m straight lines of the first image and the m straight lines of the second image, which means that the m straight lines of the first image and the m straight lines of the second image are projections of the m straight lines in the photographed scene. Projection of m lines in a photographed scene is also understood as imaging of m lines in a photographed scene in a binocular camera.
That is, there is a correspondence between the m straight lines of the first image, the m straight lines of the second image, and the m straight lines in the photographed scene.
The plurality of straight lines in the photographed scene may be understood as straight lines in the calibration object in the photographed scene. The plurality of straight lines may be straight lines in one calibration object or straight lines in a plurality of calibration objects.
If the at least one frame of binocular image includes a plurality of frames of binocular images, step S220 may be understood as extracting a straight line from each frame of binocular image in the plurality of frames of binocular images. Alternatively, step S220 is performed on the multi-frame binocular image.
It will be appreciated that m may be the same or different for different binocular images. That is, the number of straight lines extracted in different binocular images may be the same or different, which is not limited in the embodiment of the present application.
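A minimal sketch of how the line-extraction step S220 might be realized for one image of the binocular pair; the detector choice (Canny edges followed by a probabilistic Hough transform) and all thresholds are illustrative assumptions, not the patent's prescribed method:

```python
import cv2
import numpy as np

def extract_lines(gray_image, min_len=80):
    """Detect candidate straight-line segments in a grayscale image.

    Returns an array of shape (k, 4) with (x1, y1, x2, y2) endpoints per segment.
    """
    edges = cv2.Canny(gray_image, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=min_len, maxLineGap=10)
    return np.empty((0, 4)) if segments is None else segments.reshape(-1, 4)
```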
S230, reconstructing n straight lines in the first image and n straight lines in the second image into a three-dimensional space based on external parameters of the binocular camera, and obtaining n reconstructed straight lines. The n straight lines of the first image and the n straight lines of the second image are projections of the n straight lines in the photographed scene. N is more than 1 and less than or equal to m, and n is an integer.
The n straight lines in the first image belong to the m straight lines of the first image. The n straight lines in the second image belong to the m straight lines of the second image.
The obtaining of the reconstructed straight line can also be understood as obtaining the spatial position of the reconstructed n straight lines. For example, the binocular camera may be an onboard camera, and the spatial positions of the n lines after reconstruction may be represented by coordinates of the n lines in the own vehicle coordinate system.
For a straight line in the three-dimensional space, an image obtained by photographing the straight line by the camera includes imaging of the straight line, namely, the straight line in the image. According to the imaging principle of the camera, the straight line in the three-dimensional space, the straight line in the image and the optical center of the camera are positioned on the same plane. If two cameras in the binocular camera shoot a straight line in the three-dimensional space at the same time, the straight line in the three-dimensional space, the straight line in the left-eye image shot by the left camera and the optical center of the left camera are in a plane 1#, and the straight line in the three-dimensional space, the straight line in the right-eye image shot by the right camera and the optical center of the right camera are in a plane 2#. The intersection of plane 1# and plane 2# is the straight line in three-dimensional space.
As shown in fig. 3, if two cameras in the binocular camera shoot a straight line in the three-dimensional space at the same time, the left eye image and the right eye image include projections of the same straight line, based on internal parameters and external parameters of the binocular camera, a plane where the straight line in the left eye image and the optical center of the left camera are located, and a plane where the straight line in the right eye image and the optical center of the right camera are located can be obtained, and the straight line obtained by intersecting the two planes is the reconstructed straight line. The process is a process of reconstructing straight lines in an image into a three-dimensional space. The more accurate the external reference of the binocular camera, the closer the spatial position of the reconstructed straight line is to the spatial position of the straight line in three-dimensional space.
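The plane-intersection reconstruction described above can be sketched as follows; the conventions assumed here (extrinsics mapping reference-frame points X to a camera frame as X_cam = R X + t, with the reference frame taken as, for example, the left camera or vehicle frame) are illustrative assumptions:

```python
import numpy as np

def backprojected_plane(K, R, t, p1, p2):
    """Plane through a camera's optical center and an image line.

    K: 3x3 intrinsics; R, t: extrinsics with X_cam = R @ X + t;
    p1, p2: two pixel points (u, v) lying on the image line.
    Returns (n, C): plane normal and a point on the plane in the reference frame.
    """
    d1 = np.linalg.inv(K) @ np.array([p1[0], p1[1], 1.0])  # back-projected ray directions
    d2 = np.linalg.inv(K) @ np.array([p2[0], p2[1], 1.0])
    n_cam = np.cross(d1, d2)   # normal of the plane spanned by the two rays
    n = R.T @ n_cam            # normal expressed in the reference frame
    C = -R.T @ t               # optical center in the reference frame
    return n, C

def reconstruct_line(plane_left, plane_right):
    """Intersect the two back-projected planes to recover the 3D straight line."""
    (n1, c1), (n2, c2) = plane_left, plane_right
    direction = np.cross(n1, n2)
    direction /= np.linalg.norm(direction)
    A = np.vstack([n1, n2])
    b = np.array([n1 @ c1, n2 @ c2])
    point, *_ = np.linalg.lstsq(A, b, rcond=None)  # any point lying on both planes
    return point, direction
```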
For example, n straight lines in the photographed scene include straight line 1# and straight line 2#, and projections of straight line 1# and straight line 2# in the first image are straight line 1# in the first image and straight line 2# in the first image, respectively. The projections of straight line 1# and straight line 2# in the second image are referred to as straight line 1# in the second image and straight line 2# in the second image, respectively.
Reconstructing straight line 1# in the first image and straight line 1# in the second image into space to obtain reconstructed straight line 1#. Similarly, the straight line 2# in the first image and the straight line 2# in the second image are reconstructed into the space, and the reconstructed straight line 2# can be obtained.
If the at least one frame of binocular image includes a plurality of frames of binocular images, step S230 may be understood as reconstructing straight lines in the plurality of frames of binocular images into space based on external parameters of the binocular camera, so as to obtain reconstructed straight lines in the plurality of frames of binocular images. Alternatively, step S230 is performed on the multi-frame binocular image.
It will be appreciated that n may be the same or different for different binocular images. That is, the number of lines reconstructed in different binocular images may be the same or different, which is not limited by the embodiment of the present application.
S240, adjusting external parameters of the binocular camera according to a reconstruction error, wherein the reconstruction error is determined according to the position relationship between the reconstructed n straight lines and the position relationship between the n straight lines in the shooting scene.
Step S240 may be understood as targeting the reduction of the reconstruction error, or alternatively, targeting the minimization of the reconstruction error, adjusting the external parameters of the binocular camera. That is, the equation of the reconstruction error is constructed with the external parameters of the binocular camera as arguments, with the goal of obtaining the external parameters of the binocular camera that minimize the reconstruction error.
In the embodiment of the application, the position relation among the straight lines in the shooting scene is used as a geometric constraint, the external parameters of the binocular camera are adjusted according to the difference between the position relation among the reconstructed n straight lines and the position relation among the n straight lines in the shooting scene, the position coordinates of the straight lines in the shooting scene in the three-dimensional space are not required to be accurately positioned, the influence of the accuracy of the position coordinates of the straight lines in the shooting scene on the calibration result is avoided, and the calibration accuracy of the external parameters of the binocular camera is improved.
In addition, in the scheme of the embodiment of the application, the calibration can be completed based only on the positional relationship between two straight lines in the shooting scene, which reduces the amount of calculation and improves the calibration efficiency.
In addition, the scheme of the embodiment of the application can further improve the calibration precision by adding geometric constraint.
In addition, the scheme of the embodiment of the application has strong generalization capability and is suitable for various calibration scenes. Taking a vehicle-mounted binocular camera as an example, the calibration object can adopt common elements in an open road, such as a street lamp post or a pavement line, and the like, and the calibration scene is not required to be arranged in advance, for example, a target is not required to be preset, so that the cost is reduced.
Adjusting the external parameters of the binocular camera according to the reconstruction errors may be adjusting the external parameters of the binocular camera according to the reconstruction errors of one or more frames of binocular images. For convenience of description, the error in reconstructing a frame of binocular image is described first.
Specifically, for one frame of binocular image, a reconstruction error of the frame of binocular image is used to indicate a difference between a positional relationship between n straight lines after reconstruction and a positional relationship between n straight lines in a shooting scene. The smaller the reconstruction error is, the more the position relation among n straight lines after reconstruction accords with the position relation among n straight lines in a shooting scene, and the higher the accuracy of the external parameters of the current binocular camera is.
In one implementation, the adjusted external parameters of the binocular camera may be used as external parameter calibration values for the binocular camera.
In another implementation manner, the external parameters of the binocular camera in the step S230 may be updated to the adjusted external parameters of the binocular camera, and the steps S230 to S240 are repeatedly executed until the external parameters of the binocular camera meeting the preset condition are obtained, and the external parameters of the binocular camera are used as the external parameter calibration values of the binocular camera. For example, the preset condition may be an external parameter that reconstructs the error less than or equal to the error threshold.
For example, the external parameters of the binocular camera that minimize the reconstruction error may be searched in the external parameter pose space of the binocular camera, and the searched external parameters may be used as external parameter calibration values of the binocular camera.
Alternatively, the external parameters of the binocular camera may be adjusted in a nonlinear optimization manner, and the adjusted external parameters are used as external parameter calibration values of the binocular camera.
It should be understood that the above is only an example, and other ways of solving the optimal solution are also applicable to the solution of the embodiment of the present application.
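As a hedged sketch of such a nonlinear optimization (the parameterization by three angles and the use of a derivative-free solver are assumptions, and reconstruction_error stands for a caller-supplied function that accumulates the angle and distance errors described below over the collected binocular frames):

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_extrinsics(initial_angles, reconstruction_error):
    """Search for the extrinsic angles that minimize the total reconstruction error.

    initial_angles: (pitch, roll, yaw) starting guess in radians.
    reconstruction_error: callable mapping the angle vector to a scalar error.
    """
    result = minimize(reconstruction_error, x0=np.asarray(initial_angles),
                      method="Nelder-Mead",
                      options={"xatol": 1e-6, "fatol": 1e-8})
    return result.x  # calibrated (pitch, roll, yaw)
```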
The positional relationship between n straight lines in the photographed scene may be regarded as a geometric constraint between the n straight lines. The smaller the reconstruction error, the more the reconstructed n straight lines can meet the geometric constraint.
In this case, step S240 may be understood as adjusting the external parameters of the binocular camera so that the reconstructed n straight lines satisfy the geometric constraint between the n straight lines in the photographed scene as much as possible. Or, adjusting the external parameters of the binocular camera according to geometric constraints among n straight lines in the shooting scene.
The geometric constraint is expressed in terms of the positional relationship between n straight lines in the photographed scene.
Taking two straight lines in a shooting scene as an example, the position relationship between the two straight lines in the shooting scene is two straight lines with an included angle of 60 degrees, and the geometric constraint satisfied by the two straight lines can include: the two straight lines intersect at an included angle of 60 degrees. In this case, the external parameters of the binocular camera may be adjusted so that the two straight lines after reconstruction satisfy the geometric constraint as much as possible.
Optionally, the reconstruction error includes at least one of: angle errors between the n straight lines after reconstruction or distance errors between the n straight lines after reconstruction.
The angle error between the reconstructed n lines is determined from a difference between the angle between at least two of the reconstructed n lines and the angle between at least two of the n lines in the photographed scene. The at least two reconstructed straight lines correspond to at least two straight lines in the shooting scene.
The angle error between the n lines after reconstruction is used to constrain the angle between the n lines after reconstruction. I.e. the angle error may act as an angle constraint.
The angle error of the two reconstructed straight lines is determined according to the difference between the angle between the two reconstructed straight lines and the angle between the two straight lines in the shooting scene.
For example, the angle between two straight lines in a shooting scene is a. The angle error of the two reconstructed straight lines is the absolute value of the difference between the angle between the two reconstructed straight lines and a.
For example, if the at least two reconstructed straight lines are exactly two, the angle error between the reconstructed n straight lines may be the angle error of those two reconstructed straight lines.
That is, two straight lines may be selected from the n straight lines after reconstruction, and a reconstruction error of the two straight lines may be used as an angle error between the n straight lines after reconstruction.
For example, if the at least two reconstructed straight lines number more than two, the angle error between the reconstructed n straight lines may be the sum of the angle errors of the at least two reconstructed straight lines, or the average value of those angle errors.
That is, 3 or more straight lines may be selected from the n straight lines after reconstruction, and the sum of the angle errors between the selected straight lines, or the average value of the angle errors between the selected straight lines may be taken as the angle error between the n straight lines after reconstruction.
For example, the reconstructed at least two straight lines include reconstructed straight line 1#, reconstructed straight line 2# and reconstructed straight line 3#. The angle error between the reconstructed straight line 1# and the reconstructed straight line 2# is the angle error 1#, and the angle error between the reconstructed straight line 1# and the reconstructed straight line 3# is the angle error 2#. The angle error of the n straight lines after reconstruction may be the sum of the angle error 1# and the angle error 2# or the angle error may be the average value of the angle error 1# and the angle error 2#.
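For illustration only, the angle error described above can be sketched as follows, assuming each reconstructed straight line is represented by a 3-D direction vector; all names are illustrative and not part of the embodiment.

```python
import numpy as np

def angle_between(dir_a, dir_b):
    """Angle (degrees) between two 3-D line direction vectors.
    Lines are unoriented, so the result lies in [0, 90]."""
    a = dir_a / np.linalg.norm(dir_a)
    b = dir_b / np.linalg.norm(dir_b)
    cos = np.clip(abs(np.dot(a, b)), 0.0, 1.0)
    return np.degrees(np.arccos(cos))

def angle_error(recon_pairs, scene_angles, reduce="sum"):
    """Angle error between reconstructed lines.

    `recon_pairs`  : list of (dir_i, dir_j) direction-vector pairs of the
                     selected reconstructed lines.
    `scene_angles` : known angles (degrees) between the corresponding lines
                     in the shooting scene, e.g. 0 for parallel lines.
    The errors of the selected pairs are summed or averaged, as described above.
    """
    errs = [abs(angle_between(di, dj) - a)
            for (di, dj), a in zip(recon_pairs, scene_angles)]
    return sum(errs) if reduce == "sum" else sum(errs) / len(errs)
```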
The distance error between the reconstructed n lines is determined according to a difference between a distance between at least two lines of the reconstructed n lines and a distance between at least two lines of the n lines in the photographed scene. The at least two reconstructed straight lines correspond to at least two straight lines in the shooting scene.
The at least two straight lines in the shooting scene employed in calculating the distance error include at least two parallel straight lines. For example, the at least two lines in the shooting scene may be parallel to each other, or the at least two lines include multiple groups of parallel lines, where multiple lines in the same group are parallel to each other and different groups of lines are not parallel, which is not limited by the embodiment of the present application.
It should be understood that at least two straight lines used for the distance error between the n straight lines after reconstruction and at least two straight lines used for the angle error between the n straight lines after reconstruction may be the same or different.
The distance error between the n reconstructed lines is used to constrain the distance between the reconstructed lines. I.e. the distance error may act as a distance constraint.
The distance error of the two reconstructed straight lines is determined according to the difference between the distance between the two reconstructed straight lines and the distance between the two straight lines in the shooting scene.
For example, the distance between two straight lines in the photographed scene is b, and the distance error of the two reconstructed straight lines may be determined according to the difference between the distance between the two reconstructed straight lines and b.
For example, the distance between two straight lines after reconstruction may be determined from the distance between one or more points on one of the straight lines and the other straight line.
The one or more points may be chosen as needed; illustratively, they are determined according to depth values. For example, a point at a depth of 0 meters and a point at a depth of 30 meters are selected on one of the straight lines.
For example, an average value of a plurality of distances between a plurality of points on one of the two straight lines after reconstruction and the other straight line is taken as the distance between the two straight lines after reconstruction.
Taking two points as an example: two points are taken from straight line 1# of the two reconstructed straight lines, the distances from these two points to the other straight line 2# are calculated respectively, and the average of the two distances is taken as the distance between the two reconstructed straight lines. Alternatively, one point is taken from straight line 1# and the distance from that point to straight line 2# is calculated, one point is taken from straight line 2# and the distance from that point to straight line 1# is calculated, and the average of the two distances is taken as the distance between the two reconstructed straight lines.
It should be understood that the foregoing is merely an example, and the distance between two straight lines after reconstruction may be determined in other manners, which is not limited by the embodiment of the present application.
For example, if the at least two reconstructed straight lines are exactly two, the distance error between the reconstructed n straight lines may be the distance error between those two reconstructed straight lines.
For example, if the at least two reconstructed straight lines number more than two, the distance error between the reconstructed n straight lines may be the sum of the distance errors of the at least two reconstructed straight lines, or the average value of those distance errors.
For example, the reconstructed at least two straight lines include reconstructed straight line 1#, reconstructed straight line 2# and reconstructed straight line 3#. The distance error between the reconstructed straight line 1# and the reconstructed straight line 2# is the distance error 1#, and the distance error between the reconstructed straight line 1# and the reconstructed straight line 3# is the distance error 2#. The distance error between the n straight lines after reconstruction may be the sum of the distance error 1# and the distance error 2# or the distance error may be the average value of the distance error 1# and the distance error 2#.
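Similarly, a minimal sketch of the distance and distance-error computation, assuming each reconstructed straight line is given as a (point, direction) pair and sample points are taken at illustrative depths (the 0 m and 30 m values follow the example above); all names are illustrative.

```python
import numpy as np

def point_to_line_distance(p, q, d):
    """Distance from point p to the 3-D line through point q with direction d."""
    d = d / np.linalg.norm(d)
    v = p - q
    return np.linalg.norm(v - np.dot(v, d) * d)

def line_distance(line_a, line_b, depths=(0.0, 30.0)):
    """Distance between two reconstructed lines: average of the distances from
    sample points on line_a (taken at the given depths along its direction)
    to line_b.  Each line is a (point_on_line, direction) pair."""
    qa, da = line_a
    qb, db = line_b
    da = da / np.linalg.norm(da)
    samples = [qa + z * da for z in depths]
    return float(np.mean([point_to_line_distance(p, qb, db) for p in samples]))

def distance_error(recon_line_pairs, scene_distances):
    """Sum of |reconstructed distance - scene distance| over the selected pairs."""
    return sum(abs(line_distance(a, b) - d)
               for (a, b), d in zip(recon_line_pairs, scene_distances))
```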
Optionally, the at least two straight lines in the shooting scene include at least two straight lines parallel to each other.
In this case, the angle error includes a parallel error.
Illustratively, the reconstruction errors include parallel errors and distance errors.
The parallel error is used to constrain the parallel relationship between the reconstructed lines. I.e. the parallelism error can be regarded as a kind of parallelism constraint.
Illustratively, since the angle between two mutually parallel straight lines is 0, the parallel error may simply be the angle between the two reconstructed straight lines.
For example, the distance between two straight lines parallel to each other in a shooting scene can be determined by a high-precision map.
Alternatively, the distance between two parallel straight lines in the photographed scene may be measured by other sensors. The method for determining the distance between the straight lines in the shooting scene is not limited.
For example, at least two straight lines in a shooting scene are two straight lines parallel to each other. That is, the reconstructed straight line is constrained by the positional relationship between two straight lines parallel to each other in the photographed scene. The reconstruction error may include a parallel error and a distance error between the two straight lines after reconstruction.
Adjusting the external parameters according to the parallel error and the distance error ensures the accuracy of the external parameters. Meanwhile, the solution of the embodiment of the application can calibrate the external parameters of the binocular camera with only two parallel lines of known distance in the shooting scene, which reduces the number of error terms in the reconstruction error, reduces the amount of calculation, and speeds up the adjustment of the external parameters, that is, improves the calibration efficiency. In addition, because only two parallel lines of known distance in the shooting scene are required, the positions of the straight lines in three-dimensional space do not need to be accurately surveyed and no target needs to be preset, which lowers the requirement on the shooting scene and reduces the calibration cost.
Further, the at least two straight lines in the shooting scene include at least two straight lines perpendicular to each other.
In this case, the angle error includes a vertical error.
The vertical error is used to constrain the vertical relationship between the reconstructed lines. I.e. the vertical error can be taken as a vertical constraint.
Specifically, the vertical error is determined according to the difference between the angle between the two reconstructed straight lines and the angle between the two mutually perpendicular straight lines in the shooting scene.
For example, if the angle between two straight lines perpendicular to each other is 90 degrees, the term of the perpendicular error between the two straight lines after reconstruction may be the difference between the angle between the two straight lines after reconstruction and 90 degrees.
As described above, step S240 may adjust the external parameters of the binocular camera according to the reconstruction errors of the multi-frame binocular image.
Illustratively, step S240 includes: and adjusting the external parameters of the binocular camera according to the sum of the reconstruction errors of the multi-frame binocular images.
Alternatively, step S240 includes: and adjusting the external parameters of the binocular camera according to the average value of the reconstruction errors of the multi-frame binocular images.
In this case, step S240 may be understood as constructing an equation of the reconstruction error of the multi-frame binocular image using the external parameters of the binocular camera as variables, calculating the external parameters of the binocular camera that minimizes the reconstruction error of the multi-frame binocular image, and using the external parameters as external reference values of the binocular camera.
The reconstruction error of each frame of binocular image can be calculated according to the foregoing description, and will not be repeated here.
Because certain errors may exist in the straight line detection, the external parameters of the binocular camera are adjusted through the accumulated result of the reconstruction errors of the multi-frame binocular images in the embodiment of the application, the influence caused by the errors of the straight line detection can be reduced, and the accuracy of the external parameter calibration of the binocular camera is improved.
Optionally, the method 200 further comprises: and controlling a display to display the calibration condition of the external parameters of the binocular camera.
The display may be an in-vehicle display, for example.
That is, the vehicle-mounted display can display the calibration condition of the external parameters of the binocular camera in real time.
Therefore, the method is beneficial to enabling the user to know the calibration condition of the external parameters in real time, and improving the user experience.
Optionally, the calibration of the binocular camera external parameters includes at least one of: current calibration progress, current reconstruction error condition or p straight lines after reconstruction.
The p reconstructed straight lines are obtained by reconstructing p of the m straight lines of the first image and p of the m straight lines of the second image into three-dimensional space based on the current external parameters of the binocular camera, where p is greater than 1 and less than or equal to m, and p is an integer.
The p straight lines in the first image and the p straight lines in the second image are matched straight lines, and they are projected into the three-dimensional space.
For example, the external parameters of the current binocular camera may be external parameters of the adjusted binocular camera.
Alternatively, the external parameters of the current binocular camera may be the external parameters of the binocular camera that are optimal in the adjustment process. The external parameters of the binocular camera that are optimal in the adjustment process may be external parameters that minimize the reconstruction error in the adjustment process.
That is, the calibration result is visualized by reconstructing the spatial position of the straight line.
It should be noted that, the straight line reconstructed in the process of visualizing the calibration result and the straight line reconstructed in the process of adjusting the external parameter may correspond to the same straight line in the shooting scene, or may correspond to different straight lines in the shooting scene, which is not limited in the embodiment of the present application.
In the existing calibration scheme, the re-projection error is usually given after the calibration is finished, and the current calibration condition cannot be given in real time.
In the embodiment of the application, the calibration result is visualized, and the three-dimensional space position of the reconstructed straight line is displayed, so that the user can intuitively feel the current calibration condition.
Optionally, the current calibration schedule includes at least one of: the external parameters of the current binocular camera or the current calibration completion degree.
For example, the external parameters of the current binocular camera may be expressed in the form of yaw, pitch, and roll. Alternatively, the external parameters of the current binocular camera may be represented in the form of a rotation matrix. The embodiment of the present application is not limited thereto.
For example, the current calibration completion may be determined based on the current reconstruction error.
The current reconstruction error refers to a value of a reconstruction error corresponding to the external parameters of the current binocular camera, i.e., a value of a reconstruction error obtained based on the external parameters of the current binocular camera. For example, the external parameter of the current binocular camera may be the external parameter of the optimal binocular camera in the adjustment process, and the current reconstruction error is the minimum reconstruction error in the adjustment process.
For example, the current calibration completion may be determined based on the difference between the current reconstruction error and the error threshold. The current calibration completion may be the difference, or a percentage determined from the difference. That is, the smaller the difference between the current reconstruction error and the error threshold, the higher the current calibration completion.
For example, the current calibration completion may be determined based on the number of searches of the current external parameters.
As described previously, in step S240, the external parameters of the binocular camera that minimize the reconstruction error may be searched in the external parameter pose space of the binocular camera. The current calibration completion may be determined based on the current number of searches and a search number threshold. The closer the current search times are to the search times threshold, the higher the current calibration completion.
For example, the current calibration completion may be determined based on the number of frames of the currently processed binocular image.
As described above, in step S240, the external parameters of the binocular camera may be adjusted according to the reconstruction errors of the multi-frame binocular image. The current calibration completion may be determined according to the number of frames of the currently processed binocular image and the total number of frames of the binocular image to be processed. The more the number of frames of the currently processed binocular image is close to the total number of frames of the binocular image to be processed, the higher the current calibration completion degree. For example, the total frame number of the binocular image to be processed is 50 frames, and 30 frames in the 50 frames of the image are currently processed, the current calibration completion may be 60%.
For another example, the current calibration completion may be the current reconstruction error.
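For illustration only, two of the completion-degree definitions mentioned above might be computed as follows; the function and parameter names are assumptions, not part of the embodiment.

```python
def calibration_completion(frames_done=None, total_frames=None,
                           search_count=None, max_searches=None):
    """Two illustrative ways to derive the current calibration completion;
    the concrete definition is left open in the text above."""
    if frames_done is not None and total_frames:
        # e.g. 30 of 50 frames processed -> 60 % completion
        return 100.0 * frames_done / total_frames
    if search_count is not None and max_searches:
        # the closer the search count is to its threshold, the higher the completion
        return 100.0 * min(search_count / max_searches, 1.0)
    raise ValueError("no progress information supplied")
```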
Optionally, the current reconstruction error condition may include at least one of: current reconstruction error, current distance error, or current angle error.
That is, the current reconstruction error condition may show the constraint terms currently used for calibration.
For example, if the reconstruction error is determined according to the angle error and the distance error, the current reconstruction error condition may include the current reconstruction error, the current angle error, and the current distance error, where the current reconstruction error is a value determined according to the current angle error and the current distance error.
The current calibration progress and the current reconstruction error can be quantitatively displayed.
According to the scheme provided by the embodiment of the application, the current calibration condition can be displayed in real time, so that a user can know the current calibration progress, and the user experience is improved.
It should be understood that the foregoing is merely an example, and other display items may be set during the calibration process according to need, which is not limited in this embodiment of the present application.
Projections of a plurality of straight lines in the photographed scene in the first image and the second image can be acquired through step S221. Alternatively, the correspondence between the straight line in the shooting scene and the straight line in the first image and the straight line in the second image can be obtained in step S221.
Alternatively, step S221 includes steps S2211 to S2213 (not shown in the figure). Steps S2211 to S2213 are described below. For brevity and clarity of description, only one frame of binocular image is taken as an example in step S2211 to step S2213, and straight lines may be extracted in the same manner in other binocular images, which is not described herein.
S2211, performing instance segmentation on the first image and the second image respectively to obtain an instance in the first image and an instance in the second image.
Instance segmentation of an image yields the different instances in the image; in other words, it determines the instance to which each pixel in the image belongs.
Alternatively, step S2211 may be implemented by step 11) and step 12).
And 11) respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image.
The semantic segmentation result of the image includes semantic information corresponding to pixels in the image. The semantic information corresponding to a pixel can also be understood as the category to which the pixel belongs.
Illustratively, each image may be processed through a semantic segmentation network to obtain a semantic segmentation result.
The semantic segmentation network may employ existing neural network models, such as deeplabv3, and the like.
The semantic segmentation network may be trained using public data sets. The specific training process is the prior art and will not be described in detail here.
Inputting the image into a semantic segmentation network, and obtaining semantic information corresponding to pixels in the image.
Illustratively, the categories output by the semantic segmentation network may include horizontal objects and vertical objects. That is, the semantic segmentation network can distinguish whether a pixel in an image belongs to a horizontal object or to a vertical object.
Optionally, the semantic segmentation result of the first image includes the horizontal objects or vertical objects in the first image, and the semantic segmentation result of the second image includes the horizontal objects or vertical objects in the second image.
In other words, semantic segmentation of the first image yields the pixels belonging to horizontal objects and the pixels belonging to vertical objects in the first image, and semantic segmentation of the second image yields the pixels belonging to horizontal objects and the pixels belonging to vertical objects in the second image.
As described above, the binocular camera in the embodiment of the present application may be a vehicle-mounted camera. In this case, the horizontal object may include a roadway line, for example a solid roadway line or a dashed roadway line, and the vertical object may include a rod-shaped object or column, for example a light pole.
In the embodiment of the application, semantic segmentation distinguishes the horizontal objects and vertical objects in the image, so that geometric constraints between straight lines on horizontal or vertical objects in the shooting scene, such as vertical constraints, can be used to adjust the external parameters of the binocular camera. If the binocular camera is a vehicle-mounted camera, roadway lines, rod-shaped objects, and the like are common on open roads, so calibration objects found on an open road can be used to adjust the external parameters of the binocular camera; no calibration site needs to be arranged in advance, which reduces cost.
It should be understood that the above semantic segmentation results are only examples, and the semantic information may be set according to the category of the calibration object. If the calibration field includes other types of calibration objects, the semantic segmentation network may be trained to output other types of semantic information, for example, the semantic segmentation network may also be trained to distinguish triangular objects or square objects, which is not limited in the embodiment of the present application.
Step 12), performing instance segmentation on the first image according to the semantic segmentation result of the first image to obtain an instance in the first image; and carrying out instance segmentation on the second image according to the semantic segmentation result of the second image to obtain an instance in the second image.
Performing instance segmentation on the image according to the semantic segmentation result means distinguishing different individuals among the pixels sharing the same semantics, that is, distinguishing the instances to which the pixels in the image belong. One instance represents one individual.
That is, in step 12), the input may be coordinates of all pixels having the same semantics, and the output may be an instance to which the pixels belong.
Illustratively, different individuals may be distinguished among pixels of the same semantics by a clustering method, for example by density-based spatial clustering of applications with noise (DBSCAN).
Specifically, in the case where the distance between two pixel points having the same semantics is less than or equal to the interval threshold value, the two pixel points belong to the same instance.
It should be understood that other example segmentation methods in the prior art may be used to segment the image, and the embodiment of the present application does not limit the specific implementation manner of the example segmentation.
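As an illustrative sketch of the clustering-based instance segmentation described above (assuming scikit-learn's DBSCAN; the numeric thresholds are placeholders, not values from the embodiment):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_instances(pixel_coords, interval_threshold=3.0, min_pixels=20):
    """Split pixels that share the same semantics into instances.

    `pixel_coords`: (N, 2) array of (row, col) coordinates of all pixels with
    the same semantic label (e.g. all "vertical object" pixels).  Pixels whose
    distance is at most `interval_threshold` end up in the same instance; the
    threshold and `min_pixels` values are illustrative.
    """
    pixel_coords = np.asarray(pixel_coords)
    labels = DBSCAN(eps=interval_threshold,
                    min_samples=min_pixels).fit_predict(pixel_coords)
    instances = {}
    for inst_id in set(labels):
        if inst_id == -1:          # noise pixels, not assigned to any instance
            continue
        instances[inst_id] = pixel_coords[labels == inst_id]
    return instances
```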
The correspondence between the instance in the first image and the instance in the second image may be determined from the location of the instance in the first image and the location of the instance in the second image.
The instance with the corresponding relation corresponds to the same calibration object in the shooting scene. Or, the example with the corresponding relation in the image is the projection of the same calibration object in the shooting scene.
The location of the instance in the image may be an absolute location of the instance in the image, e.g., coordinates of the instance in the image. Alternatively, the position of an instance in an image may also be the relative position between multiple instances in the image.
The first image and the second image are images shot by the same shooting scene by two cameras of the binocular camera, and the difference between the two images is small. That is, the projections of the same calibration object in the two images are closely located. Accordingly, the correspondence between the instances in the two images can be determined by the position.
For example, there is a correspondence between the leftmost instance of the plurality of verticals in the first image and the leftmost instance of the plurality of verticals in the second image.
Further, the first image and the second image can be respectively subjected to instance annotation, so that annotation information of the instance in the first image and annotation information of the instance in the second image are obtained.
Instance annotation of an image refers to annotating an instance in the image. Different annotation information in an image is used to indicate different instances in the image.
For example, the annotation information may be an instance number. Different instance numbers in an image are used to indicate different instances in the image.
Specifically, the instances in the image are annotated according to their locations.
For example, in the first image and the second image, the same instance numbers are given to instances whose relative positions are the same.
In this way, the correspondence between the instance in the first image and the instance in the second image may be indicated by the annotation information of the instance. For example, there is a correspondence between instances in the first image and the second image where instance numbers are the same.
And matching the instance in the first image with the instance in the second image to obtain the corresponding relation between the instance in the first image and the instance in the second image.
That is, matching the instance in the first image with the instance in the second image may be accomplished by instance annotation of the first image and the second image, respectively.
It should be understood that the correspondence between the instances in the first image and the instances in the second image may be a correspondence between all of the instances in the first image and all of the instances in the second image, or a correspondence between a portion of the instances in the first image and a portion of the instances in the second image.
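A minimal sketch of the position-based instance matching described above, assuming instances are represented as pixel-coordinate arrays per semantic class; the ordering-by-column heuristic is an assumption for illustration, not the embodiment's mandated rule.

```python
def match_instances(instances_left, instances_right):
    """Match instances between the two images by relative position.

    `instances_*`: dict {instance_id: (N, 2) pixel array} for one semantic
    class (e.g. vertical objects).  Instances are numbered from left to right
    by the mean column of their pixels; instances that receive the same
    number in both images are taken as corresponding instances.
    """
    def order(insts):
        # Sort instance ids by the mean horizontal position of their pixels.
        return sorted(insts, key=lambda k: insts[k][:, 1].mean())

    left_ids, right_ids = order(instances_left), order(instances_right)
    # Annotate corresponding instances with the same instance number.
    return {n: (l, r) for n, (l, r) in enumerate(zip(left_ids, right_ids))}
```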
S2212, m straight lines are extracted from the instance in the first image and the instance in the second image, respectively.
The correspondence between the m straight lines in the first image and the m straight lines in the second image is determined from the correspondence between the instances in the first image and the instances in the second image.
Alternatively, step S2212 may be implemented by steps 21) to 23).
21) A plurality of original straight lines are extracted from the instances of the first image and the second image, respectively.
Illustratively, a plurality of original straight lines may be extracted from an instance by a machine vision method.
For example, the original straight lines may be extracted from an instance by the Hough transform.
Specifically, a small region of interest (ROI) is set at the edges of each instance in the image, for example at the left and right sides of each instance; the instance edge pixels inside the ROI are projected into the straight-line parameter space, and by setting a threshold on the parameter-space points, the straight lines in the instance, that is, the original straight lines of the instance, are extracted.
Instance information of an extracted original straight line is used to indicate the position of the original straight line in the instance, for example, on the left or right side of the instance.
It should be understood that the position of the original straight line in the instance is related to the manner in which the ROI is set, and the embodiment of the present application is not limited to this.
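For illustration, the ROI-plus-Hough extraction of original straight lines might be sketched as follows using OpenCV; the Canny edge step and all numeric thresholds are assumptions added for the sketch.

```python
import cv2
import numpy as np

def extract_original_lines(instance_mask, side="left", roi_width=10):
    """Extract original straight lines from one side edge of a single instance.

    A narrow ROI is placed on the requested side of the instance, the edge
    pixels inside the ROI are voted into the Hough parameter space, and lines
    above the vote threshold are returned.  All numeric values are illustrative.
    """
    mask = (np.asarray(instance_mask) > 0).astype(np.uint8) * 255
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return []
    edges = cv2.Canny(mask, 50, 150)
    if side == "left":
        lo, hi = cols.min(), cols.min() + roi_width
    else:
        lo, hi = max(cols.max() - roi_width, 0), cols.max()
    roi = np.zeros_like(edges)
    roi[:, lo:hi + 1] = edges[:, lo:hi + 1]
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=40, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)
```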
22) A plurality of original straight lines on the same side edge of an instance in the first image are fitted into one target straight line of that side edge of the instance in the first image; a plurality of original straight lines on the same side edge of an instance in the second image are fitted into one target straight line of that side edge of the instance in the second image.
A plurality of original straight lines may be extracted from one side edge of an instance, for example the left side of the instance. In order to further improve the accuracy of the straight lines required for calibration, the plurality of original straight lines extracted from one side edge of the instance may be fitted into one straight line; the straight line obtained after fitting is the target straight line of that side edge of the instance.
Step 22) is an optional step. In the case where step S2212 does not include step 22), one original straight line may also be selected from the plurality of original straight lines extracted from one side edge of the instance as the target straight line of that side edge of the instance.
If one side edge of an instance includes only one original straight line, that original straight line may be taken as the target straight line of the side edge of the instance.
Illustratively, the fitting may be performed in a random sample consensus (random sample consensus, RANSAC) manner.
For example, two points are randomly selected from the boundary points of the plurality of original straight lines on the same side of an instance, and a straight line is determined from these two points; the straight line can be represented by a slope k and an intercept b, for example as (k, b). The number of points of the plurality of original straight lines that lie on this straight line is then determined, that is, the number of points of the plurality of original straight lines through which this straight line passes. The straight line passing through the largest number of such points may be taken as the target straight line.
In this way, the original straight lines on the same side of the example are fitted, a more accurate target straight line can be obtained, and the target straight line is utilized to calibrate the external parameters of the binocular camera, so that the accuracy of the calibration result of the external parameters of the binocular camera is improved.
Further, the straight line passing through the largest number of points of the plurality of original straight lines may be further processed to obtain the target straight line.
For convenience of description, the straight line passing through the largest number of points of the plurality of original straight lines is referred to as the intermediate straight line.
Each row covered by the intermediate straight line is traversed; taking the point of the intermediate straight line in each row as the center point, the pixel with the largest pixel gradient among the pixels around that center point in the row is determined, and the largest-gradient pixels of the rows are used as original points to fit again in a RANSAC manner, so as to obtain the target straight line.
For example, taking the point of the intermediate straight line in each row as the center point, the pixel with the largest pixel gradient is determined among the 5 pixels to the left and the 5 pixels to the right of that center point in the row; the largest-gradient pixel in each row is used as an original point, two points are arbitrarily selected from the original points, a straight line is determined from the two points, the number of original points through which the straight line passes is determined, and the straight line passing through the largest number of original points may be taken as the target straight line.
The pixel gradient at the boundary is usually larger, and the pixel point with the largest pixel gradient is searched, so that more accurate boundary can be found, and the accuracy of straight line extraction is improved.
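A minimal sketch of the RANSAC-style fitting of a target straight line from the boundary points described above; the iteration count and inlier tolerance are illustrative assumptions.

```python
import numpy as np

def fit_target_line(points, iterations=200, inlier_tol=1.0):
    """RANSAC-style fit of one target line from the boundary points of the
    original lines on one side of an instance (illustrative sketch).

    Two points are drawn at random, the line (slope k, intercept b) through
    them is hypothesized, and the hypothesis passing closest to the largest
    number of points is kept.
    """
    points = np.asarray(points, dtype=float)       # (N, 2) as (x, y)
    best_kb, best_inliers = None, -1
    rng = np.random.default_rng()
    for _ in range(iterations):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:                               # skip degenerate vertical sample
            continue
        k = (y2 - y1) / (x2 - x1)
        b = y1 - k * x1
        # Count points lying (approximately) on the hypothesized line.
        inliers = np.sum(np.abs(points[:, 1] - (k * points[:, 0] + b)) < inlier_tol)
        if inliers > best_inliers:
            best_kb, best_inliers = (k, b), inliers
    return best_kb
```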
Instance information of a target straight line of the extracted instance is used to indicate a position of the target straight line in the instance, for example, the target straight line is located on the left or right side of the instance.
It should be understood that the above is illustrated with one instance only; straight lines may be extracted from other instances in the same manner.
S2213, matching the target straight line in the first image with the target straight line in the second image to obtain a corresponding relation between m straight lines in the first image and m straight lines in the second image.
I.e. determining the correspondence of the target line in the first image and the target line in the second image. The two straight lines with the corresponding relation are the projections of the same straight line in the shooting scene in the first image and the second image.
Or, the projections of the same straight line in the photographed scene in the first image and the second image are matched.
It should be understood that in step S2213, all the target straight lines in the first image and all the target straight lines in the second image may be matched. Alternatively, a portion of the target straight line in the first image and the target straight line in the second image may be matched, which is not limited in the embodiment of the present application.
That is, in the embodiment of the present application, it is not necessary to determine the correspondence between every target straight line in the first image and every target straight line in the second image; it is sufficient to determine the correspondence between the m target straight lines in the first image and the m target straight lines in the second image.
The m straight lines in the first image and the second image are target straight lines with corresponding relation in the first image and the second image.
The correspondence between the target straight line in the first image and the target straight line in the second image is determined from the instance information of the target straight line.
Specifically, the correspondence between the target straight line in the first image and the target straight line in the second image is determined according to the correspondence of the examples in the first image and the second image and the position of the target straight line in the examples.
In the example with the corresponding relation in the first image and the second image, the target straight lines with the same positions are straight lines with the corresponding relation in the first image and the second image.
For example, instance 1# in the first image corresponds to instance 1# in the second image. The instance information of straight line a in the first image indicates that straight line a is located on the left side of instance 1# in the first image, and the instance information of straight line b in the second image indicates that straight line b is located on the left side of instance 1# in the second image; then straight line a in the first image and straight line b in the second image are straight lines having a correspondence relationship.
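For illustration, the instance-plus-side matching of target straight lines might be sketched as follows; the data structures and key names are assumptions for the sketch, not part of the embodiment.

```python
def match_target_lines(lines_left, lines_right, instance_pairs):
    """Match target lines between the two images using the instance
    correspondence and the side of the instance each line lies on.

    `lines_*`: dict keyed by (instance_number, side) -> target line,
    e.g. ("column_1", "left") -> (k, b).  `instance_pairs` maps an instance
    number in the left image to the corresponding number in the right image.
    Only lines whose instance and side both match are paired.
    """
    matches = []
    for (inst_l, side), line_l in lines_left.items():
        inst_r = instance_pairs.get(inst_l)
        line_r = lines_right.get((inst_r, side))
        if line_r is not None:
            matches.append((line_l, line_r))
    return matches
```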
It should be understood that, steps S2211 to S2213 are only one possible straight line matching manner, and the correspondence between straight lines in the two images may be determined by other manners in the prior art, which is not limited by the embodiment of the present application.
In the embodiment of the application, the corresponding relation between the straight lines is determined by the corresponding relation between the examples in the two images, so that the accuracy of straight line matching can be improved, the accuracy of external parameter calibration is improved, the calculation complexity is reduced, and the efficiency of external parameter calibration is improved.
Similarly, the correspondence between the straight lines in the image and the straight lines in the shooting scene may be determined by the relative positions between the straight lines in the shooting scene and the relative positions between the straight lines in the image, which will not be described here.
Fig. 4 illustrates another calibration method 400 for binocular camera external parameters provided by an embodiment of the present application. Method 400 may be viewed as one specific implementation of method 200. Therefore, reference may be made to the method 200 above for details of the method 400, and repeated descriptions are omitted herein for brevity.
As described above, the method for calibrating external parameters of a binocular camera provided by the embodiment of the application can be applied to a vehicle-mounted camera system. For example, in the embodiment of the present application, the binocular camera is a vehicle-mounted camera, and the vehicle in which the binocular camera is located may be in a stationary state or a moving state. In method 400, the calibration method is described by taking the binocular camera as a vehicle-mounted camera as an example, which does not constitute a limitation on the application scenario of the embodiment of the present application.
After the vehicle enters the calibration site, method 400 is executed to complete the calibration of the external parameters of the binocular camera, the external parameters of the binocular camera are updated with the calibration values, and high-precision external parameters are provided for upper-layer services, so that the accuracy of the upper-layer services is improved and the automatic driving performance is further improved. Fig. 5 shows a schematic diagram of a calibration site. The calibration site of fig. 5 is provided with roadway lines and vertical rods; the roadway lines or the vertical rods can be used as calibration objects for the binocular camera.
The method 400 includes steps S410 to S440. Step S410 to step S440 are explained below.
S410, acquiring binocular images.
Specifically, at least one frame of binocular image captured by a binocular camera is acquired. The left eye image in the binocular image is captured by the left camera in the binocular camera and the right eye image in the binocular image is captured by the right camera in the binocular camera.
The at least one frame of binocular image may be, for example, a plurality of frames of binocular images taken by the vehicle while traveling through the calibration site.
The vehicle can obtain multiple frames of binocular images after traveling only a short distance, which is enough to complete the calibration of the external parameters of the binocular camera.
Step S410 corresponds to step S210, and the detailed description refers to step S210, and is not repeated here.
S420, extracting m straight lines from the left eye image and the right eye image respectively, wherein m is an integer greater than 1.
There is a correspondence between the m straight lines of the left eye image and the m straight lines of the right eye image.
Illustratively, step S420 includes steps S421 through S425.
S421, carrying out semantic segmentation on the binocular image to obtain a semantic segmentation result of the binocular image.
Specifically, semantic segmentation is performed on the left-eye image and the right-eye image respectively, so that a semantic segmentation result of the left-eye image and a semantic segmentation result of the right-eye image are obtained.
Illustratively, the semantic segmentation network is utilized to process the left-eye image and the right-eye image respectively, so as to obtain a semantic segmentation result of the left-eye image and a semantic segmentation result of the right-eye image.
For example, the output of the semantic segmentation network includes two semantic categories: roadway line and vertical object. That is, the semantic segmentation network can distinguish whether the pixels of an object in the image belong to a roadway line or to a vertical object. Fig. 6 (a) shows the semantic segmentation result of the left-eye image, and fig. 6 (b) shows the semantic segmentation result of the right-eye image. As shown in fig. 6, the vertical objects and the roadway lines in the left-eye image and the right-eye image are distinguished by semantic segmentation.
Step S421 corresponds to step 11) in step S2211, and the detailed description may refer to the previous description, which is not repeated here.
S422, performing instance labeling according to the semantic segmentation result.
Specifically, the example labeling is carried out on the left-eye image according to the semantic segmentation result of the left-eye image, so that labeling information of the example in the left-eye image is obtained. And carrying out instance annotation on the right-eye image according to the semantic segmentation result of the right-eye image to obtain annotation information of the instance in the right-eye image.
Specifically, performing instance segmentation on the first image according to a semantic segmentation result of the first image to obtain an instance in the first image; and labeling the instance in the first image according to the position of the instance in the first image, and obtaining labeling information of the instance.
Performing instance segmentation on the second image according to the semantic segmentation result of the second image to obtain an instance in the second image; and labeling the examples in the second image according to the positions of the examples in the second image, and obtaining labeling information of the examples.
Fig. 7 (a) shows labeling information of an example in a left-eye image, and fig. 7 (b) shows labeling information of an example in a right-eye image. As shown in fig. 7, the labeling information of the example includes: left 1 column, left 2 column, right 1 column, right 2 column, roadway line 1, roadway line 2, roadway line 3, roadway line 4, and roadway line 5. The labeling information of the instance having the correspondence relationship in the left-eye image and the right-eye image is the same. That is, the labeling information of the instance in the left-eye image and the labeling information of the instance in the right-eye image are used to indicate the correspondence between the instance in the left-eye image and the instance in the right-eye image.
As shown in fig. 7, only some instances are labeled in step S422; in practical applications, more or fewer instances may be labeled as needed.
Step S422 corresponds to step 12) in step S2211, and the detailed description may refer to the previous description, which is not repeated here.
S423, extracting a plurality of original straight lines in the example of the binocular image.
Specifically, a plurality of original straight lines are extracted in the examples of the left-eye image and the right-eye image, respectively.
In step S423, straight line extraction may be performed on the instances labeled in step S422; in other words, straight lines are extracted from the instances that have labeling information.
Step S423 corresponds to step 21) in step S2212, and the detailed description may refer to the previous description, which is not repeated here.
S424, performing straight line fitting on the plurality of original straight lines to obtain a target straight line.
Specifically, a plurality of original straight lines on the same side edge of an instance in the left-eye image are fitted into one target straight line of that side edge of the instance in the left-eye image, and a plurality of original straight lines on the same side edge of an instance in the right-eye image are fitted into one target straight line of that side edge of the instance in the right-eye image.
Fig. 8 (a) shows a target straight line in the left-eye image, and fig. 8 (b) shows a target straight line in the right-eye image.
As shown in fig. 8, for an instance, the straight lines of both side edges of the instance may be extracted, or only the straight line of one side edge of the instance may be extracted, which is not limited in the present application.
Step S424 corresponds to step 22) in step S2212, and the detailed description may refer to the previous description, which is not repeated here.
S425, matching the target straight line in the left eye image and the target straight line in the right eye image to obtain the corresponding relation between the m straight lines in the left eye image and the m straight lines in the right eye image.
Or, the projection of m straight lines in the shooting scene in the left-eye image and the right-eye image is obtained.
For example, as shown in fig. 8, m may be 12. Namely, the correspondence between 12 straight lines in (a) of fig. 8 and 12 straight lines in (b) of fig. 8 is obtained.
Step S425 corresponds to step S2213, and the detailed description may refer to the previous description, which is not repeated here.
S430, reconstructing n straight lines in the left eye image and n straight lines in the right eye image into a three-dimensional space based on external parameters of the binocular camera, and obtaining n reconstructed straight lines. The n straight lines of the left eye image and the n straight lines of the right eye image are projections of the n straight lines in the photographed scene.
For example, as shown in fig. 9, n is 12. The straight lines in the left-eye image and the right-eye image in fig. 8 are reconstructed into space, and the spatial positions of the 12 reconstructed straight lines are obtained.
S440, adjusting external parameters of the binocular camera according to the reconstruction error.
The reconstruction error is determined from the positional relationship between the n lines after reconstruction and the positional relationship between the n lines in the shooting scene.
The reconstruction error is illustratively determined from the positional relationship between two of the n straight lines after reconstruction and the positional relationship between two of the n straight lines in the photographing scene.
As previously described, the reconstruction error includes at least one of: angle errors or distance errors.
The reconstruction error is described below by taking the reconstructed horizontal lines l1 and l4 as an example.
The reconstruction error satisfies the following formula:
f_i(yaw, pitch, roll) = angle(l1, l4) + |dis(l1, l4) − d1|
where f_i(yaw, pitch, roll) represents the reconstruction error of the i-th frame of binocular image based on the external parameters of the binocular camera, and (yaw, pitch, roll) is the Euler-angle representation of the external parameters of the binocular camera. angle(l1, l4) is the angle between the reconstructed horizontal line l1 and the reconstructed horizontal line l4 and is used to calculate the angle error between them. |dis(l1, l4) − d1| is used to calculate the distance error between the two reconstructed straight lines, where dis( ) computes the distance between two reconstructed straight lines, e.g., dis(l1, l4) is the distance between the reconstructed l1 and l4, and d1 represents the distance between the horizontal line l1 and the horizontal line l4 in the shooting scene, i.e., their actual distance.
It should be understood that the above formula merely takes the horizontal lines l1 and l4 in the shooting scene as an example; the reconstruction error may also be calculated from other parallel lines in the shooting scene, for example from the vertical line L1 and the vertical line L5. The reconstruction error between other parallel lines can follow the same formula, as long as the straight lines in the formula are replaced by the corresponding straight lines.
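For illustration, this per-frame error for two parallel horizontal lines could be evaluated with the helpers sketched earlier (angle_between and line_distance); representing a reconstructed line as a (point, direction) pair is an assumption of the sketch.

```python
def frame_error_two_parallel_lines(recon_l1, recon_l4, d1):
    """Per-frame reconstruction error built from two parallel horizontal lines,
    following the formula above: the angle between the reconstructed l1 and l4
    (parallel error) plus |dis(l1, l4) - d1| (distance error).
    Each line is a (point, direction) pair."""
    parallel_err = angle_between(recon_l1[1], recon_l4[1])
    distance_err = abs(line_distance(recon_l1, recon_l4) - d1)
    return parallel_err + distance_err
```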
Further, the reconstruction error may be determined from a positional relationship between q straight lines out of the n straight lines after reconstruction and a positional relationship between q straight lines out of the n straight lines in the shooting scene. q is an integer of 2 or more and n or less.
The reconstruction error will be described below using 12 straight lines in fig. 9 as an example, i.e., q is 12.
The reconstruction error satisfies the following formula:
f_i(yaw, pitch, roll) = [angle(l2, l1) + angle(l3, l1) + angle(l4, l1)] + [angle(L2, L1) + angle(L3, L1) + ... + angle(L8, L1)] + |angle(l1, L1) − 90°| + |dis(l1, l4) − d1| + |dis(L1, L3) − d2|
The first term is used to calculate the angle error between the reconstructed horizontal lines. Specifically, in the above formula, the angle error between the 4 reconstructed horizontal lines is the sum of the angle errors between each of the other reconstructed horizontal lines, i.e. the reconstructed l2, l3 and l4, and the reconstructed l1. It should be understood that the first term in the above formula is only an example, and the angle error between the reconstructed horizontal lines may also be calculated in other ways; for example, it may be the average of the angle errors between each reconstructed horizontal line and the reconstructed l1, or the sum of the angle errors between each reconstructed horizontal line and the reconstructed l2. The specific calculation method of the angle error between the reconstructed horizontal lines is not limited, as long as it can constrain the parallel relationship between the reconstructed horizontal lines.
The second term is used to calculate the angle error between the reconstructed vertical lines. Specifically, in the above formula, the angle error between the 8 reconstructed vertical lines is the sum of the angle errors between each of the other reconstructed vertical lines and the reconstructed L1. It should be understood that the second term in the above formula is only an example, and the angle error between the 8 reconstructed vertical lines may also be calculated in other ways; for example, it may be the average of the angle errors between each reconstructed vertical line and the reconstructed L1, or the sum of the angle errors between each reconstructed vertical line and the reconstructed L2. The specific calculation method of the angle error between the reconstructed vertical lines is not limited, as long as it can constrain the parallel relationship between the reconstructed vertical lines.
The third term is used to calculate the angle error between a reconstructed vertical line and a reconstructed horizontal line. Specifically, in the above formula, this angle error is the angle error between the reconstructed horizontal line l1 and the reconstructed vertical line L1. It should be understood that the third term in the above formula is only an example, and the angle error between the reconstructed vertical and horizontal lines may also be calculated in other ways; for example, it may be the angle error between another reconstructed vertical line and horizontal line, the sum of the angle errors between each reconstructed vertical line and each horizontal line, or the average value of those angle errors. The specific calculation method is not limited, as long as the angle error between the reconstructed vertical and horizontal lines can constrain the perpendicular relationship between them.
The fourth term is used to calculate the distance error between the reconstructed horizontal lines. Specifically, in the above formula, this distance error is the distance error between the reconstructed horizontal line l1 and the reconstructed l4. It should be understood that the fourth term in the above formula is only an example, and the distance error between the reconstructed horizontal lines may also be calculated in other ways; for example, it may be the distance error between other reconstructed horizontal lines, the sum of the distance errors between the reconstructed horizontal lines, or the average value of those distance errors. The specific calculation method is not limited, as long as the distance error between the reconstructed horizontal lines can constrain the distance between them.
The fifth term is used to calculate the distance error between the reconstructed vertical lines. Specifically, in the above formula, this distance error is the distance error between the reconstructed vertical line L1 and the reconstructed L3, where d2 represents the distance between L1 and L3 in the shooting scene. It should be understood that the fifth term in the above formula is only an example, and the distance error between the reconstructed vertical lines may also be calculated in other ways; for example, it may be the distance error between other reconstructed vertical lines, the sum of the distance errors between the reconstructed vertical lines, or the average value of those distance errors. The specific calculation method is not limited, as long as the distance error between the reconstructed vertical lines can constrain the distance between them.
In step S440, the external parameters of the binocular camera may be adjusted according to the reconstruction error of the one or more binocular images.
In the embodiment of the application, the reconstruction errors of the multi-frame binocular images are accumulated, and the external parameters of the binocular camera are adjusted according to the accumulated reconstruction errors, so that the influence of the accuracy of straight line extraction on the calibration result can be reduced, and the accuracy of the calibration result is improved.
That is, the external parameters of the binocular camera are adjusted with the goal of reducing the reconstruction errors of the multi-frame binocular image.
For example, the external parameters of the binocular camera that minimize the reconstruction error of the multi-frame binocular images are taken as the calibration values of the external parameters of the binocular camera.
For example, minimizing the reconstruction error of the multi-frame binocular images may mean minimizing the sum of the reconstruction errors of the individual binocular images.
For example, the reconstruction error of a multi-frame binocular image satisfies the following equation:
F(yaw, pitch, roll) = ∑_i f_i(yaw, pitch, roll);
where F(yaw, pitch, roll) represents the reconstruction error of the multi-frame binocular images, and f_i(yaw, pitch, roll) represents the reconstruction error of the i-th binocular image.
Alternatively, minimizing the reconstruction error of the multi-frame binocular images may mean minimizing the average of the reconstruction errors of the individual binocular images.
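A minimal sketch of this multi-frame adjustment is given below, assuming the per-frame reconstruction error f_i is available as a callable and using a generic numerical optimizer; the data structures and the choice of optimizer are assumptions made for illustration and are not prescribed by the embodiment.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_extrinsics(frames, frame_error, initial_ypr):
    """Adjust the binocular-camera extrinsic angles (yaw, pitch, roll) by
    minimizing the accumulated reconstruction error F = sum_i f_i.

    frames      : per-frame data (e.g. the matched straight lines of each binocular image)
    frame_error : callable f_i(yaw, pitch, roll, frame) returning that frame's reconstruction error
    initial_ypr : initial external parameters, e.g. obtained from manual measurement
    """
    def total_error(ypr):
        yaw, pitch, roll = ypr
        return sum(frame_error(yaw, pitch, roll, frame) for frame in frames)

    result = minimize(total_error, np.asarray(initial_ypr, dtype=float), method="Nelder-Mead")
    return result.x  # calibrated (yaw, pitch, roll)
```

Replacing the sum in total_error with an average corresponds to the alternative mentioned above.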
After the calibration values of the external parameters of the binocular camera are obtained, the external parameters of the binocular camera in the system can be updated, for example, provided to other functional modules of the autopilot system.
S450, controlling the vehicle-mounted display to display the calibration condition of the external parameters of the binocular camera.
Illustratively, the calibration condition of the binocular camera external parameters includes the current calibration progress, the current reconstruction error condition or p straight lines after reconstruction.
A schematic diagram of calibration of a binocular camera external parameter is shown in fig. 10. As shown in fig. 10, the calibration condition of the external parameters of the binocular camera includes the current calibration progress, the current reconstruction error condition or 12 straight lines after reconstruction.
The current calibration progress comprises the current calibration completion degree and the current external parameters of the binocular camera.
It should be understood that the external parameters in fig. 10 are shown in the form of Euler angles by way of example only, and the external parameters of the binocular camera may also be shown in other forms.
The current reconstruction error condition includes the current distance error and the current angle error. In fig. 10, the four reconstructed straight lines L1, L5, l1 and l4 are selected during calibration to calculate the reconstruction error. For example, as shown in fig. 10, the angle errors include: the angle error between L1 and L5, the angle error between l1 and l4, and the angle error between L1 and l1. The distance error includes: the distance error between L1 and L5.
It should be understood that the reconstruction error condition in fig. 10 is only an example, and the current reconstruction error condition may also include the current reconstruction error, i.e. the reconstruction error calculated from the current distance error and the current angle error, e.g. f_i(yaw, pitch, roll). Alternatively, other straight lines may also be used to calculate the reconstruction error. Alternatively, other angle errors or distance errors may be used to calculate the reconstruction error, which is not limited in this embodiment of the present application.
Fig. 10 (a) and (b) show two calibration cases. In fig. 10 (a), the calibration completion degree is 25%; at this time the reconstruction error is relatively large, and the 12 straight lines obtained by reconstruction with the current external parameters are relatively distorted and do not conform to the shooting scene in the real world. In fig. 10 (b), the calibration completion degree is 80%; the reconstruction error is relatively small, and the 12 straight lines obtained by reconstruction with the current external parameters are more consistent with the shooting scene in the real world.
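The calibration condition shown on the vehicle-mounted display can be thought of as a simple status structure; the sketch below is only one assumed way of packaging this information, and the field names are illustrative rather than part of the embodiment.

```python
def calibration_status(completion, yaw, pitch, roll, angle_errors, distance_errors, lines_3d):
    """Assemble the calibration condition shown on the vehicle-mounted display:
    the current calibration completion degree, the current external parameters
    (here as Euler angles), the current error terms, and the reconstructed lines."""
    return {
        "completion": completion,                        # e.g. 0.25 or 0.80 as in fig. 10
        "extrinsics": {"yaw": yaw, "pitch": pitch, "roll": roll},
        "angle_errors": angle_errors,                    # e.g. {("L1", "L5"): value, ...}
        "distance_errors": distance_errors,              # e.g. {("L1", "L5"): value, ...}
        "reconstructed_lines": lines_3d,                 # the reconstructed straight lines to draw
    }
```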
The embodiment of the application can be suitable for dynamic calibration of the camera and also suitable for static calibration of the camera.
For example, in the embodiment of the application, the camera is a vehicle-mounted camera, and the vehicle carrying the camera is in a moving state.
In the scheme provided by the application, the calibration object can be any type of road feature, and is not strictly limited to lane lines. For example, the calibration reference in the scheme provided by the application can be any one of the following road features: lane lines, signboards, rod-shaped objects, road signs and traffic lights. For example, the signboard may be a traffic sign or a pole-mounted sign, and the rod-shaped object may be a street lamp pole.
In addition, the scheme provided by the application is suitable for both dynamic calibration and static calibration of the camera. Moreover, the embodiment of the application can use elements on an open road as calibration objects. Therefore, the scheme provided by the application has good universality.
It should be understood that the scheme provided by the application can be applied to the camera parameter calibration stage when an automatic driving vehicle comes off the assembly line, and is not necessarily limited to a fixed calibration workshop.
It should also be understood that the scheme provided by the application can also be applied to an initial calibration scenario after the vehicle leaves the factory, as well as to scenarios in which the external parameters change during use and require real-time online correction or periodic calibration.
For example, the initial calibration values of the external parameters of the binocular camera may be obtained by manual measurement during the assembly process. After the vehicle leaves the factory, the external parameters of the binocular camera can be adjusted by using the scheme of the embodiment of the application, and the external parameters of the binocular camera in the system are updated, so that high-precision external parameters are provided for upper-layer services, the accuracy of the upper-layer services is improved, and the driving performance is further improved.
It should be further understood that the scheme provided by the application can greatly reduce the dependence on a specific calibration site, and realize high-precision calibration of the external parameters of the vehicle-mounted camera at any time and in any place (that is, online and in real time).
The various embodiments described herein may be separate solutions or may be combined according to inherent logic, which fall within the scope of the present application.
The method embodiments provided by the present application are described above, and the device embodiments provided by the present application will be described below. It should be understood that the descriptions of the apparatus embodiments and the descriptions of the method embodiments correspond to each other; therefore, for details that are not described, reference may be made to the above method embodiments, which are not repeated herein for brevity.
Fig. 11 shows an apparatus 600 for binocular camera extrinsic calibration according to an embodiment of the present application. The apparatus 600 includes an acquisition unit 610 and a processing unit 620.
An acquiring unit 610, configured to acquire a first image and a second image, where the first image is obtained by photographing a photographing scene by a first camera in the binocular camera, and the second image is obtained by photographing the photographing scene by a second camera in the binocular camera.
A processing unit 620, configured to extract m straight lines from the first image and the second image, where m is an integer greater than 1, and there is a correspondence between the m straight lines of the first image and the m straight lines of the second image; reconstructing n straight lines of the m straight lines of the first image and n straight lines of the m straight lines of the second image into a three-dimensional space based on external parameters of the binocular camera to obtain n reconstructed straight lines, wherein the n straight lines of the first image and the n straight lines of the second image are projections of the n straight lines in a shooting scene, n is more than 1 and less than or equal to m, and n is an integer; and adjusting external parameters of the binocular camera according to a reconstruction error, wherein the reconstruction error is determined according to the position relationship among the n lines after reconstruction and the position relationship among the n lines in the shooting scene.
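As a sketch of the reconstruction step performed by the processing unit, the following assumes that each matched straight line is available as homogeneous line coefficients (a, b, c) in each image, and reconstructs the 3D line as the intersection of the two planes back-projected through the respective camera centres; the intrinsic matrices K1 and K2 and the representation of the extrinsics as a rotation R and translation t are assumptions made for the example, not the only possible implementation.

```python
import numpy as np

def reconstruct_line(line1, line2, K1, K2, R, t):
    """Reconstruct one matched image-line pair into a 3D line (point, unit direction).

    line1, line2 : homogeneous line coefficients (a, b, c) in the first/second image
    K1, K2       : 3x3 intrinsic matrices of the two cameras
    R, t         : extrinsics of the second camera relative to the first
    """
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])                # first camera projection
    P2 = K2 @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    # back-project each image line to the plane through its camera centre
    pi1 = P1.T @ np.asarray(line1, dtype=float)
    pi2 = P2.T @ np.asarray(line2, dtype=float)
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # any point on the intersection of the two planes (minimum-norm solution)
    A = np.vstack([n1, n2])
    point = np.linalg.lstsq(A, -np.array([d1, d2]), rcond=None)[0]
    return point, direction
```

When the external parameters are inaccurate, the two back-projected planes intersect in a line that deviates from the real scene geometry, which is exactly what the reconstruction error defined above penalizes.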
Optionally, as an embodiment, the reconstruction error includes at least one of: angle errors between the n straight lines after reconstruction or distance errors between the n straight lines after reconstruction,
The angle error between the n lines after reconstruction is determined according to the difference between the angle between at least two lines of the n lines after reconstruction and the angle between at least two lines of the n lines in the shooting scene;
the distance error between the reconstructed n lines is determined according to a difference between a distance between at least two lines of the reconstructed n lines and a distance between at least two lines of the n lines in the photographed scene.
Optionally, as an embodiment, the at least two straight lines in the shooting scene include at least two straight lines parallel to each other.
Optionally, as an embodiment, the processing unit 620 is specifically configured to:
respectively carrying out instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image;
m straight lines are extracted from the instance in the first image and the instance in the second image respectively, and the correspondence between the m straight lines in the first image and the m straight lines in the second image is determined according to the correspondence between the instance in the first image and the instance in the second image.
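A minimal sketch of this extraction and matching step is given below, assuming each instance is available as a binary mask and that the instance correspondence between the two images is given as pairs of instance identifiers; OpenCV's line fitting is used here only as one possible way of obtaining a straight line per instance.

```python
import cv2
import numpy as np

def lines_from_instances(instance_masks):
    """Fit one straight line to each instance mask.

    instance_masks : dict mapping instance id -> binary mask (H x W, uint8)
    Returns a dict mapping instance id -> line parameters (vx, vy, x0, y0).
    """
    lines = {}
    for inst_id, mask in instance_masks.items():
        ys, xs = np.nonzero(mask)
        points = np.column_stack([xs, ys]).astype(np.float32)
        vx, vy, x0, y0 = cv2.fitLine(points, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
        lines[inst_id] = (vx, vy, x0, y0)
    return lines

def match_lines(lines_first, lines_second, instance_pairs):
    """Pair the extracted lines according to the correspondence between
    instances in the first image and instances in the second image."""
    return [(lines_first[i1], lines_second[i2])
            for i1, i2 in instance_pairs
            if i1 in lines_first and i2 in lines_second]
```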
Optionally, as an embodiment, the processing unit 620 is specifically configured to:
Respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image, wherein the semantic segmentation result of the first image comprises a horizontal object or a vertical object in the first image, and the semantic segmentation result of the second image comprises a horizontal object or a vertical object in the second image;
performing instance segmentation on the first image based on the semantic segmentation result of the first image to obtain an instance in the first image, and performing instance segmentation on the second image based on the semantic segmentation result of the second image to obtain an instance in the second image.
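As an illustration, instances can for example be obtained from the semantic segmentation result by separating each horizontal or vertical class into connected components; this is only one simple assumed strategy, and a dedicated instance segmentation network can equally be used.

```python
import cv2
import numpy as np

def instances_from_semantics(semantic_mask, class_ids):
    """Split a semantic segmentation result into instances per class.

    semantic_mask : H x W array of class labels (e.g. lane line, pole, signboard)
    class_ids     : labels of the horizontal or vertical objects of interest
    Returns a dict mapping (class_id, component_id) -> binary instance mask.
    """
    instances = {}
    for cls in class_ids:
        binary = (semantic_mask == cls).astype(np.uint8)
        num_labels, labels = cv2.connectedComponents(binary)
        for comp in range(1, num_labels):           # label 0 is the background
            instances[(cls, comp)] = (labels == comp).astype(np.uint8)
    return instances
```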
Optionally, as an embodiment, the apparatus further includes a display unit, configured to display the calibration condition of the external parameters of the binocular camera.
Optionally, as an embodiment, the calibration condition of the external parameters of the binocular camera includes at least one of the following: the current calibration progress, the current reconstruction error condition or the reconstructed p straight lines are obtained by reconstructing p straight lines in m straight lines of the first image and p straight lines in m straight lines of the second image into a three-dimensional space based on the external parameters of the current binocular camera, wherein p is more than 1 and less than or equal to m, and p is an integer.
Optionally, as an embodiment, the current calibration progress includes at least one of:
the current external parameters of the binocular camera or the current calibration completion degree.
Optionally, as an embodiment, the current reconstruction error condition includes at least one of:
current reconstruction error, current distance error, or current angle error.
Optionally, the binocular camera is a vehicle-mounted camera, and the vehicle carrying the binocular camera can be in a static state or a moving state.
As shown in fig. 12, the apparatus 3000 may include at least one processor 3002 and a communication interface 3003.
Optionally, the apparatus 3000 may further include at least one of a memory 3001 and a bus 3004. Any two or all three of the memory 3001, the processor 3002, and the communication interface 3003 may be communicatively connected to each other via the bus 3004.
Alternatively, the memory 3001 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 3001 may store a program; when the program stored in the memory 3001 is executed by the processor 3002, the processor 3002 and the communication interface 3003 are configured to perform the steps of the method for binocular camera extrinsic calibration according to the embodiments of the present application. That is, the processor 3002 may obtain stored instructions from the memory 3001 through the communication interface 3003 to perform the steps of the method for binocular camera extrinsic calibration according to the embodiments of the present application.
Alternatively, the memory 3001 may implement the above-described function of storing the program. Alternatively, the processor 3002 may employ a general-purpose CPU, a microprocessor, an ASIC, a graphics processing unit (GPU), or one or more integrated circuits for executing related programs, to implement the functions required by the processing units in the apparatus of the present application or to perform the steps of the method of binocular camera extrinsic calibration of the present application.
Alternatively, the processor 3002 may implement the functions described above for executing the relevant programs.
Alternatively, the processor 3002 may also be an integrated circuit chip with signal processing capabilities. In implementation, each step of the method of the embodiment of the present application may be implemented by an integrated logic circuit of hardware in the processor or by instructions in the form of software.
Optionally, the processor 3002 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed with reference to the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes, in combination with its hardware, the functions required to be performed by the units included in the calibration apparatus of the embodiment of the application, or performs the steps of the method for binocular camera external parameter calibration of the embodiment of the application.
Optionally, the communication interface 3003 may enable communication between the device and other equipment or a communication network using a transceiver device such as, but not limited to, a transceiver, for example, the communication interface 3003 may be used to obtain a binocular image. The communication interface 3003 may also be, for example, an interface circuit.
Bus 3004 may include a path to transfer information between various components of the device (e.g., memory, processor, communication interface).
Embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to implement the method of the above-described method embodiments.
The embodiment of the application also provides a terminal which comprises any one of the calibration devices, such as the device shown in fig. 11 or fig. 12.
The terminal may be, for example, a vehicle, a drone, a robot, or the like.
The calibration device may be mounted on the terminal or may be independent of the terminal.
The present application also provides a computer-readable medium storing program code for execution by a device, the program code comprising instructions for performing the methods of the above-described embodiments.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the above embodiments.
The embodiment of the application also provides a chip, which comprises a processor and a data interface, wherein the processor reads, through the data interface, instructions stored in a memory to perform the methods of the above embodiments.
Optionally, as an implementation manner, the chip may further include a memory, where the memory stores instructions, and the processor is configured to execute the instructions stored in the memory; when the instructions are executed, the processor is configured to perform the methods in the foregoing embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. A method for calibrating external parameters of a binocular camera, comprising the steps of:
    acquiring a first image and a second image, wherein the first image is obtained by shooting a shooting scene by a first camera in a binocular camera, and the second image is obtained by shooting the shooting scene by a second camera in the binocular camera;
    extracting m straight lines from the first image and the second image respectively, wherein m is an integer greater than 1, and the m straight lines of the first image and the m straight lines of the second image have a corresponding relation;
    reconstructing n straight lines of the m straight lines of the first image and n straight lines of the m straight lines of the second image into a three-dimensional space based on external parameters of the binocular camera to obtain n reconstructed straight lines, wherein the n straight lines of the first image and the n straight lines of the second image are projections of the n straight lines in the shooting scene, n is more than 1 and less than or equal to m, and n is an integer;
    and adjusting external parameters of the binocular camera according to a reconstruction error, wherein the reconstruction error is determined according to the position relationship among the n lines after reconstruction and the position relationship among the n lines in the shooting scene.
  2. The method of claim 1, wherein the reconstruction error comprises at least one of: an angle error between the n straight lines after reconstruction or a distance error between the n straight lines after reconstruction,
    The angle error between the n reconstructed straight lines is determined according to the difference between the angle between at least two straight lines in the n reconstructed straight lines and the angle between at least two straight lines in the n straight lines in the shooting scene;
    the distance error between the reconstructed n straight lines is determined according to a difference between a distance between at least two straight lines of the reconstructed n straight lines and a distance between at least two straight lines of the n straight lines in the shooting scene.
  3. The method of claim 2, wherein the at least two straight lines in the captured scene comprise at least two straight lines that are parallel to each other.
  4. A method according to any one of claims 1 to 3, wherein the extracting m straight lines in the first image and the second image, respectively, comprises:
    respectively carrying out instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image;
    and extracting m straight lines from the instance in the first image and the instance in the second image respectively, wherein the corresponding relation between the m straight lines in the first image and the m straight lines in the second image is determined according to the corresponding relation between the instance in the first image and the instance in the second image.
  5. The method of claim 4, wherein performing instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image, respectively, comprises:
    respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image, wherein the semantic segmentation result of the first image comprises a horizontal object or a vertical object in the first image, and the semantic segmentation result of the second image comprises a horizontal object or a vertical object in the second image;
    performing instance segmentation on the first image based on the semantic segmentation result of the first image to obtain an instance in the first image, and performing instance segmentation on the second image based on the semantic segmentation result of the second image to obtain an instance in the second image.
  6. The method according to any one of claims 1 to 5, further comprising: controlling a display to display the calibration condition of the external parameters of the binocular camera.
  7. The method of claim 6, wherein the calibration condition of the binocular camera external parameters comprises at least one of the following: a current calibration progress, a current reconstruction error condition, or p reconstructed straight lines, wherein the p reconstructed straight lines are obtained by reconstructing p straight lines in the m straight lines of the first image and p straight lines in the m straight lines of the second image into a three-dimensional space based on the current external parameters of the binocular camera, p is greater than or equal to 1 and less than or equal to m, and p is an integer.
  8. The method of claim 7, wherein the current calibration progress comprises at least one of:
    the current external parameters of the binocular camera or the current calibration completion degree.
  9. The method of claim 8, wherein the current reconstruction error condition comprises at least one of:
    current reconstruction error, current distance error, or current angle error.
  10. A binocular camera extrinsic parameter calibration device, comprising:
    an acquisition unit, configured to acquire a first image and a second image, wherein the first image is obtained by shooting a shooting scene by a first camera in the binocular camera, and the second image is obtained by shooting the shooting scene by a second camera in the binocular camera;
    a processing unit for:
    extracting m straight lines from the first image and the second image respectively, wherein m is an integer greater than 1, and the m straight lines of the first image and the m straight lines of the second image have a corresponding relation;
    reconstructing n straight lines of the m straight lines of the first image and n straight lines of the m straight lines of the second image into a three-dimensional space based on external parameters of the binocular camera to obtain n reconstructed straight lines, wherein the n straight lines of the first image and the n straight lines of the second image are projections of the n straight lines in the shooting scene, n is more than 1 and less than or equal to m, and n is an integer;
    And adjusting external parameters of the binocular camera according to a reconstruction error, wherein the reconstruction error is determined according to the position relationship among the n lines after reconstruction and the position relationship among the n lines in the shooting scene.
  11. The apparatus of claim 10, wherein the reconstruction error comprises at least one of: an angle error between the n straight lines after reconstruction or a distance error between the n straight lines after reconstruction,
    the angle error between the n reconstructed straight lines is determined according to the difference between the angle between at least two straight lines in the n reconstructed straight lines and the angle between at least two straight lines in the n straight lines in the shooting scene;
    the distance error between the reconstructed n straight lines is determined according to a difference between a distance between at least two straight lines of the reconstructed n straight lines and a distance between at least two straight lines of the n straight lines in the shooting scene.
  12. The apparatus of claim 11, wherein the at least two straight lines in the photographed scene comprise at least two straight lines that are parallel to each other.
  13. The apparatus according to any one of claims 10 to 12, wherein the processing unit is specifically configured to:
    Respectively carrying out instance segmentation on the first image and the second image to obtain an instance in the first image and an instance in the second image;
    and extracting m straight lines from the instance in the first image and the instance in the second image respectively, wherein the corresponding relation between the m straight lines in the first image and the m straight lines in the second image is determined according to the corresponding relation between the instance in the first image and the instance in the second image.
  14. The apparatus according to claim 13, wherein the processing unit is specifically configured to:
    respectively carrying out semantic segmentation on the first image and the second image to obtain a semantic segmentation result of the first image and a semantic segmentation result of the second image, wherein the semantic segmentation result of the first image comprises a horizontal object or a vertical object in the first image, and the semantic segmentation result of the second image comprises a horizontal object or a vertical object in the second image;
    performing instance segmentation on the first image based on the semantic segmentation result of the first image to obtain an instance in the first image, and performing instance segmentation on the second image based on the semantic segmentation result of the second image to obtain an instance in the second image.
  15. The apparatus according to any one of claims 10 to 14, further comprising: a display unit, configured to display the calibration condition of the external parameters of the binocular camera.
  16. The apparatus of claim 15, wherein the calibration condition of the binocular camera external parameters comprises at least one of the following: a current calibration progress, a current reconstruction error condition, or p reconstructed straight lines, wherein the p reconstructed straight lines are obtained by reconstructing p straight lines in the m straight lines of the first image and p straight lines in the m straight lines of the second image into a three-dimensional space based on the current external parameters of the binocular camera, p is greater than or equal to 1 and less than or equal to m, and p is an integer.
  17. The apparatus of claim 16, wherein the current calibration progress comprises at least one of:
    the current external parameters of the binocular camera or the current calibration completion degree.
  18. The apparatus of claim 17, wherein the current reconstruction error condition comprises at least one of:
    current reconstruction error, current distance error, or current angle error.
  19. A chip comprising at least one processor and interface circuitry, the at least one processor retrieving instructions stored on a memory through the interface circuitry to perform the method of any of claims 1 to 9.
  20. A computer readable storage medium storing program code for execution by a device, the program code comprising instructions for performing the method of any one of claims 1 to 9.
  21. A terminal, characterized in that it comprises an apparatus according to any of claims 10 to 18.
  22. The terminal of claim 21, wherein the terminal further comprises a binocular camera.
CN202180094173.2A 2021-07-16 2021-07-16 External parameter calibration method and device for binocular camera Pending CN116917936A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/106747 WO2023283929A1 (en) 2021-07-16 2021-07-16 Method and apparatus for calibrating external parameters of binocular camera

Publications (1)

Publication Number Publication Date
CN116917936A true CN116917936A (en) 2023-10-20

Family

ID=84919002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180094173.2A Pending CN116917936A (en) 2021-07-16 2021-07-16 External parameter calibration method and device for binocular camera

Country Status (2)

Country Link
CN (1) CN116917936A (en)
WO (1) WO2023283929A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173257B (en) * 2023-11-02 2024-05-24 安徽蔚来智驾科技有限公司 3D target detection and calibration parameter enhancement method, electronic equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201515433A (en) * 2013-10-14 2015-04-16 Etron Technology Inc Image calibration system and calibration method of a stereo camera
CN103745452B (en) * 2013-11-26 2014-11-26 理光软件研究所(北京)有限公司 Camera external parameter assessment method and device, and camera external parameter calibration method and device
JP7002007B2 (en) * 2017-05-01 2022-01-20 パナソニックIpマネジメント株式会社 Camera parameter set calculation device, camera parameter set calculation method and program
CN111462249B (en) * 2020-04-02 2023-04-18 北京迈格威科技有限公司 Traffic camera calibration method and device
CN112184830B (en) * 2020-09-22 2021-07-09 深研人工智能技术(深圳)有限公司 Camera internal parameter and external parameter calibration method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2023283929A1 (en) 2023-01-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination