CN111369632A - Method and device for acquiring internal parameters in camera calibration - Google Patents


Info

Publication number
CN111369632A
Authority
CN
China
Prior art keywords
camera
observation point
coefficient
coordinates
imaging height
Prior art date
Legal status
Pending
Application number
CN202010151835.2A
Other languages
Chinese (zh)
Inventor
韩承志
唐逸之
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010151835.2A
Publication of CN111369632A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide a method and a device for acquiring internal parameters in camera calibration, relating to the technical field of artificial intelligence and in particular to autonomous driving. The method comprises the following steps: acquiring a contour mapping relation and a camera model of a camera; acquiring coordinates of a plurality of observation points in a camera coordinate system; for each observation point, calculating its camera incident angle from its coordinates; matching the imaging height of each observation point using its camera incident angle and the contour mapping relation; and modifying the internal reference coefficients in the camera model until the imaging heights calculated from the modified coefficients and the matched imaging heights of the plurality of observation points satisfy a preset loss function, thereby obtaining the internal reference coefficients of the camera calibration. No external equipment is needed, the method is simple to implement, no spatial distribution of a checkerboard is required, and the obtained calibrated internal parameters are accurate.

Description

Method and device for acquiring internal parameters in camera calibration
Technical Field
The present application relates to artificial intelligence in the field of computer technologies, and in particular, to a method and an apparatus for obtaining internal parameters in camera calibration.
Background
Cameras are widely used; for example, the fields of security, unmanned aerial vehicles and autonomous driving all rely on cameras to capture images. Camera internal parameter calibration is one of the important steps in using a camera, and plays an important role in image distortion removal, image stitching, monocular/binocular ranging, visual positioning and the like.
In the prior art, a commonly used camera calibration method is based on checkerboard calibration: checkerboard images are captured, grid corners are detected, and a mapping relation between two-dimensional (2D) image points and three-dimensional (3D) world points is established, from which the internal parameters of the camera are computed.
However, this prior-art calibration method requires external checkerboard equipment, is cumbersome to implement, needs checkerboard data collected at different positions, and its calibration accuracy is affected by the spatial distribution of the checkerboard.
Disclosure of Invention
The embodiments of the present application provide a method and a device for acquiring internal parameters in camera calibration, aiming to solve the technical problems that prior-art calibration requires external checkerboard equipment, is cumbersome to implement, and has its accuracy constrained by the spatial distribution of the checkerboard.
A first aspect of an embodiment of the present application provides a method for obtaining internal parameters in camera calibration, including:
acquiring a contour mapping relation and a camera model of a camera, wherein the contour mapping relation represents the relation between a camera incident angle and an imaging height, and the camera model comprises the association relation between the imaging height and the internal reference coefficients; acquiring coordinates of a plurality of observation points in a camera coordinate system; for the coordinates of any observation point, calculating the camera incident angle of that observation point from its coordinates; matching the imaging height of that observation point using its camera incident angle and the contour mapping relation; and modifying the internal reference coefficients in the camera model until the imaging heights calculated from the modified coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, thereby obtaining the internal reference coefficients of the camera calibration. In this way, the calibrated internal reference coefficients of the camera can be obtained automatically from the camera's contour mapping relation, camera model and loss function, without depending on external equipment; the method is simple to implement, does not depend on the spatial distribution of a checkerboard, and the obtained calibrated internal parameters are accurate.
Optionally, the method further includes:
and carrying out distortion removal on the image shot by the camera by utilizing the internal reference coefficient. This allows an accurate image to be obtained.
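As an illustration of how the calibrated coefficients feed into distortion removal: the radial model used throughout this document maps an undistorted normalized radius r to a distorted radius r_d = r(1 + k₁r² + k₂r⁴ + k₃r⁶), so undistorting requires inverting that map. Below is a minimal sketch (not the patent's implementation) using fixed-point iteration; the function name and values are illustrative.

```python
def undistort_radius(r_d, k1, k2, k3, iters=20):
    """Invert r_d = r * (1 + k1*r**2 + k2*r**4 + k3*r**6) by fixed-point
    iteration. This is one building block of image undistortion with the
    calibrated coefficients (a sketch, not a full pixel remap)."""
    r = r_d  # start from the distorted radius as the initial guess
    for _ in range(iters):
        r = r_d / (1 + k1 * r**2 + k2 * r**4 + k3 * r**6)
    return r
```

A full undistortion would apply this inversion per pixel when remapping the captured image onto an undistorted grid.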
Optionally, the method further includes:
acquiring a first image and a second image shot by the camera; calculating feature points of the first image and the second image; and calculating the pose transformation of the camera using the internal reference coefficients, the positions of the feature points in the first image and the positions of the feature points in the second image. In this way, an accurate camera pose transformation can be obtained.
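In practice, the feature-based pose computation described above would typically be delegated to a library routine (for example OpenCV's findEssentialMat and recoverPose). The role the internal reference coefficients play in that pipeline is to convert pixel feature positions into normalized camera-plane coordinates first. A sketch of that conversion, assuming a principal point (cx, cy), which the text does not list among the coefficients:

```python
import numpy as np

def normalize_points(pix, f, cx, cy):
    """Map pixel feature positions to normalized camera-plane coordinates
    using the focal length f and an assumed principal point (cx, cy).
    Matched normalized points from two images are the usual input to
    essential-matrix / pose estimation."""
    p = np.asarray(pix, dtype=float)
    return (p - [cx, cy]) / f
```

For example, a feature at the principal point normalizes to (0, 0), and a feature f pixels to its right normalizes to (1, 0).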
Optionally, the internal reference coefficients include a focal length f and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

h′ = r · (1 + k₁r² + k₂r⁴ + k₃r⁶) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ. In this way, accurate internal reference coefficients can be obtained for a camera corresponding to the pinhole model.
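Because the pinhole relation h′ = r(1 + k₁r² + k₂r⁴ + k₃r⁶)f is linear in f, f·k₁, f·k₂ and f·k₃, the arg min above can be solved in closed form by ordinary least squares. The following is a hedged sketch with synthetic observation points and illustrative ground-truth values; none of the numbers come from the patent, and in a real run the heights h would be matched from the vendor's contour mapping table rather than synthesized.

```python
import numpy as np

# Illustrative ground-truth intrinsics (not from the patent).
f_true, k1, k2, k3 = 400.0, -0.2, 0.05, -0.01

# Observation points (x, y, z) in the camera coordinate system (step S102).
rng = np.random.default_rng(0)
pts = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 2.0], size=(30, 3))

# Step S103: camera incident angle from coordinates.
a = pts[:, 0] / pts[:, 2]
b = pts[:, 1] / pts[:, 2]
r = np.hypot(a, b)
alpha = np.arctan(r)  # would index into the contour mapping relation

# Step S104: imaging heights; here synthesized directly from the pinhole
# model h' = r * (1 + k1*r^2 + k2*r^4 + k3*r^6) * f in place of a table.
h = r * (1 + k1 * r**2 + k2 * r**4 + k3 * r**6) * f_true

# Step S105: h' is linear in (f, f*k1, f*k2, f*k3), so minimizing
# sum (h - h')^2 reduces to least squares on the design matrix below.
A = np.stack([r, r**3, r**5, r**7], axis=1)
c, *_ = np.linalg.lstsq(A, h, rcond=None)
f_est = c[0]
k_est = c[1:] / f_est
```

With noise-free synthetic heights the fit recovers the ground-truth coefficients essentially exactly; with table-matched heights it returns the loss-minimizing coefficients of step S105.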
Optionally, the internal reference coefficients include a focal length f and refractive distortion coefficients k₁, k₂, k₃ and k₄.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

h′ = α · (1 + k₁r² + k₂r⁴ + k₃r⁶ + k₄r⁸) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃, k₄) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ. In this way, accurate internal reference coefficients can be obtained for a camera corresponding to the fisheye equidistant model.
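A sketch of the fisheye equidistant imaging-height computation as it appears in the text (note the polynomial is written in powers of r as printed; the function name and all values are illustrative):

```python
import math

def fisheye_height(x, y, z, f, k1, k2, k3, k4):
    """Imaging height under the fisheye equidistant relation of the text:
    h' = alpha * (1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8) * f."""
    # Step S103: incident angle from camera-frame coordinates.
    a, b = x / z, y / z
    r = math.hypot(a, b)
    alpha = math.atan(r)
    return alpha * (1 + k1 * r**2 + k2 * r**4 + k3 * r**6 + k4 * r**8) * f
```

With all distortion coefficients zero this reduces to h′ = α·f, the ideal equidistant projection.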
Optionally, the internal reference coefficients include a focal length f, a refractive distortion coefficient ζ, and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

l = sin α / (cos α + ζ)
h′ = l · (1 + k₁l² + k₂l⁴ + k₃l⁶) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, ζ, k₁, k₂, k₃) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ. In this way, accurate internal reference coefficients can be obtained for a camera corresponding to the fisheye unified sphere model.
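A sketch of the unified-sphere imaging-height computation. The radius relation l = sin α / (cos α + ζ) used here is the standard unified sphere projection and is an assumption of this sketch, as is the l⁶ exponent in the last polynomial term; the function name and all numeric values are illustrative.

```python
import math

def unified_sphere_height(x, y, z, f, zeta, k1, k2, k3):
    """Imaging height under a unified-sphere relation:
    l = sin(a)/(cos(a)+zeta), h' = l*(1 + k1*l^2 + k2*l^4 + k3*l^6)*f."""
    a, b = x / z, y / z
    r = math.hypot(a, b)
    alpha = math.atan(r)                       # incident angle (step S103)
    l = math.sin(alpha) / (math.cos(alpha) + zeta)
    return l * (1 + k1 * l**2 + k2 * l**4 + k3 * l**6) * f
```

Note that with ζ = 0 the relation degenerates to l = tan α = r, i.e. the pinhole projection, which is a useful sanity check.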
A second aspect of the embodiments of the present application provides a device for obtaining internal parameters in camera calibration, including:
the acquisition module is used for acquiring a camera model and a contour mapping relation of a camera, wherein the contour mapping relation represents the relation between a camera incident angle and an imaging height, and the camera model comprises the association relation between the imaging height and the internal reference coefficients; and for acquiring coordinates of a plurality of observation points in a camera coordinate system;
the calculation module is used for calculating, for the coordinates of any observation point, the camera incident angle of that observation point from its coordinates;
the matching module is used for matching the imaging height of any observation point by utilizing the camera incidence angle of any observation point and the contour mapping relation;
and the training module is used for modifying the internal parameter coefficient in the camera model until the imaging height calculated based on the modified internal parameter coefficient and the imaging heights of the plurality of observation points meet a preset loss function, so as to obtain the internal parameter coefficient in the camera calibration.
Optionally, the apparatus further comprises:
and the distortion removing module is used for removing distortion of the image shot by the camera by utilizing the internal reference coefficient.
Optionally, the apparatus further comprises:
the pose transformation calculation module is used for acquiring a first image and a second image shot by the camera; calculating feature points of the first image and the second image; and calculating the pose transformation of the camera using the internal reference coefficients, the positions of the feature points in the first image and the positions of the feature points in the second image.
Optionally, the internal reference coefficients include a focal length f and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

h′ = r · (1 + k₁r² + k₂r⁴ + k₃r⁶) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ.
Optionally, the internal reference coefficients include a focal length f and refractive distortion coefficients k₁, k₂, k₃ and k₄.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

h′ = α · (1 + k₁r² + k₂r⁴ + k₃r⁶ + k₄r⁸) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃, k₄) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ.
Optionally, the internal reference coefficients include a focal length f, a refractive distortion coefficient ζ, and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to:

a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The association between the imaging height h′ in the camera model and the internal reference coefficients is:

l = sin α / (cos α + ζ)
h′ = l · (1 + k₁l² + k₂l⁴ + k₃l⁶) · f
Optionally, the preset loss function is:

loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, ζ, k₁, k₂, k₃) = arg min(loss)

where n is the number of mappings in the contour mapping relation (a natural number) and hᵢ is the imaging height matched from the contour mapping relation using αᵢ.
A third aspect of the embodiments of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding first aspects.
A fourth aspect of embodiments of the present application provides a non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of the preceding first aspects.
In summary, the embodiment of the present application has the following beneficial effects with respect to the prior art:
the embodiment of the application provides a method and a device for obtaining internal parameters in camera calibration, which can automatically obtain the labeled internal parameter coefficient of a camera according to the equal-height mapping relation, the camera model and the loss function of the camera. The method has the advantages that the dependence on external equipment is not needed, the implementation method is simple, the distribution condition of the checkerboard in the space is not needed, and the obtained internal reference of the label is accurate. Specifically, the contour mapping relation and the camera model of the camera can be obtained; the contour mapping relation is used for identifying the relation between the camera incident angle and the imaging height, and the camera model comprises the incidence relation between the imaging height and the internal parameter coefficient; acquiring coordinates of a plurality of observation points in a camera coordinate system; calculating a camera incident angle of any one observation point by using the coordinates of any one observation point for the coordinates of any one observation point; matching the imaging height of any observation point by using the camera incidence angle and contour mapping relation of any observation point; and modifying the internal reference coefficients in the camera model until the imaging height calculated based on the modified internal reference coefficients in the camera model and the imaging heights of the plurality of observation points meet a preset loss function to obtain the internal reference coefficients in the camera calibration, so that no external equipment is needed, the implementation method is simple, no checkerboard distribution condition in space is needed, and the obtained labeled internal reference is accurate.
Drawings
Fig. 1 is a schematic diagram of a system architecture applicable to a method for obtaining internal parameters in camera calibration according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for obtaining internal parameters in camera calibration according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a relationship between an incident angle and an imaging height of a camera according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a contour mapping curve according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an internal reference acquisition device in camera calibration according to an embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a method for internal parameter acquisition in camera calibration according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness. The embodiments described below and the features of the embodiments can be combined with each other where no conflict arises.
The method for obtaining the internal parameters in the camera calibration of the embodiment of the application can be applied to computing equipment, such as a terminal or a server, and the terminal can include: a mobile phone, a tablet computer, a notebook computer, or a desktop computer, etc. The embodiment of the present application does not specifically limit the specific device used.
For example, a graphical user interface (GUI) may be provided on the terminal or the server, with controls, switches and the like for receiving user operations, so that a user may input the contour mapping relation and the camera model through the GUI. It can be understood that the specific content of the GUI may be determined according to the actual application scenario, which is not specifically limited in the embodiments of the present application.
The camera described in the embodiment of the application can be applied to application scenes such as security protection, unmanned aerial vehicles or automatic driving and the like which need to be subjected to image shooting.
The internal reference coefficients of the camera described in the embodiments of the present application may be, for example, the parameters required for converting between the image coordinate system and/or the pixel coordinate system and the camera coordinate system, such as the focal length, the principal point, and the distortion coefficients.
The coordinates of the observation point described in the embodiment of the present application may be coordinates of the observation point in a camera coordinate system. The origin of the camera coordinate system may be located at the optical center of the camera, the vertical axis (z-axis) may coincide with the optical axis of the camera, and the horizontal axis (x-axis) and the vertical axis (y-axis) may be parallel to the imaging plane. It is understood that in other embodiments, the camera coordinate system may be defined in other reasonable ways accepted in the art, and the embodiments of the present application are not limited thereto.
Exemplarily, as shown in fig. 1, fig. 1 is a schematic view of an application scenario architecture of the method in automatic driving according to the embodiment of the present application.
Some typical objects are schematically shown in the example environment 100 of the application scenario architecture, including a road 102, and a vehicle 110 traveling on the road 102. As shown in fig. 1, the road 102 includes, for example, a stop sign line 115-1 and a lane sign line 115-2 (individually or collectively referred to as a sign line 115), and the environment 100 further includes therein a camera 105 for sensing environmental information of the road 102. It should be understood that these illustrated facilities and objects are examples only, and that the presence of objects that may be present in different traffic environments will vary depending on the actual situation. The scope of the embodiments of the present application is not limited in this respect.
Vehicle 110 may be any type of vehicle that may carry people and/or things and be moved by a powered system such as an engine, including but not limited to a car, truck, bus, electric vehicle, motorcycle, recreational vehicle, train, and the like. One or more vehicles 110 in environment 100 may be vehicles with some autonomous driving capabilities, such vehicles also referred to as unmanned vehicles. Of course, another vehicle or vehicles 110 in environment 100 may also be vehicles without autopilot capabilities.
In some embodiments, the camera 105 may be arranged above the roadway 102. In some embodiments, cameras 105 may also be arranged on both sides of the roadway 102, for example. As shown in fig. 1, the camera 105 may be communicatively coupled to a computing device 120. Although shown as a separate entity, the computing device 120 may be embedded in the camera 105. The computing device 120 may also be an entity external to the camera 105 and may communicate with the camera 105 via a wireless network. Computing device 120 may be implemented as one or more computing devices containing at least a processor, memory, and other components typically found in a general purpose computer to implement the functions of computing, storage, communication, control, and the like.
In some possible embodiments, the camera 105 may acquire environmental information (e.g., lane line information, road boundary information, or obstacle information) related to the road 102 and send the environmental information to the vehicle 110 for use in driving decisions of the vehicle 110.
In some possible embodiments, the camera 105 may also determine the position of the vehicle 110 based on the camera's internal parameters and captured images of the vehicle 110, and send the position to the vehicle 110 to enable positioning of the vehicle 110.
In some possible embodiments, the computing device 120 may also de-distort the image captured by the camera 105 based on the internal reference coefficients of the camera 105 to obtain an accurate image.
In some possible embodiments, after obtaining two images captured by the camera 105, the computing device 120 may also calculate the pose transformation of the camera 105 based on the positions of the feature points in the two images and the internal reference coefficients of the camera 105.
It will be appreciated that whether for obtaining accurate image information or for calculating the camera's pose transformation, determining accurate internal reference coefficients of the camera is critical.
In the embodiments of the present application, the calibrated internal reference coefficients of the camera can be obtained automatically from the camera's contour mapping relation, camera model and loss function: the camera incident angle of each observation point is calculated from its coordinates in the camera coordinate system, the corresponding imaging height is matched through the contour mapping relation, and the internal reference coefficients in the camera model are modified until the imaging heights calculated from the modified coefficients and the matched imaging heights of the observation points satisfy a preset loss function. No external equipment is required, the method is simple to implement, no spatial checkerboard distribution is needed, and the obtained calibrated internal parameters are accurate.
It can be understood that the method provided by the embodiment of the application can also be applied to application scenes such as unmanned aerial vehicles or security and the like, and the method provided by the embodiment of the application can be adopted to conveniently and accurately obtain the internal reference coefficient of the camera in each application scene, and the specific application scene of the embodiment of the application is not limited.
As shown in fig. 2, fig. 2 is a schematic flowchart of a method for obtaining internal parameters in camera calibration according to an embodiment of the present application. The method specifically comprises the following steps:
step S101: acquiring a contour mapping relation and a camera model of a camera; the contour mapping relation is used for representing the relation between the camera incidence angle and the imaging height, and the camera model comprises the incidence relation between the imaging height and the internal reference coefficient.
In the embodiment of the present application, the contour mapping represents a relationship between a camera incident angle and an imaging height, and the contour mapping can be represented in the form of a graph or a table, for example, as shown in fig. 3, after a light ray 30 enters the camera at a certain camera incident angle α, an image is generated on an image plane, and the imaging height h corresponds to the camera incident angle α.
According to the camera incident angle and the imaging height of the rays, a contour mapping curve as shown in fig. 4 can be obtained by plotting. Or a contour mapping table of camera incident angle to imaging height as shown in table 1 may be obtained.
TABLE 1

Camera incident angle    Imaging height
α1                       h1
α2                       h2
α3                       h3
α4                       h4
α5                       h5
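The contour mapping lookup of step S104 can be sketched as linear interpolation over such a table. The numeric values below are placeholders standing in for a vendor-supplied table like Table 1.

```python
import numpy as np

# Hypothetical contour mapping table: incident angle (rad) -> imaging height.
# Real values would come from the camera module manufacturer's data.
alpha_table = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
h_table = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0])

def match_height(alpha):
    """Step S104: match the imaging height for a camera incident angle by
    linear interpolation over the contour mapping relation."""
    return np.interp(alpha, alpha_table, h_table)
```

For an incident angle between two table entries, the matched height falls proportionally between the corresponding table heights (e.g. 0.25 rad maps midway between 0.8 and 1.2).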
In this embodiment of the application, the camera model may also be referred to as a camera projection model. The camera projection model may include a pinhole model, a fisheye equidistant model, or a fisheye unified sphere model, and the association relation between the imaging height and the internal reference coefficients may differ between camera models. The specific association relation in each camera model is described in detail in the following embodiments and is not repeated here.
In practice, each type of camera is designed and manufactured according to its contour mapping relation, and the contour mapping relation of a camera can be obtained from data provided by the camera module manufacturer. For example, the electronic device executing the internal parameter obtaining method of the embodiments of the present application may obtain the contour mapping relation of the camera from the database of the camera module manufacturer, or receive the contour mapping relation of the camera input by a user; the embodiments of the present application do not specifically limit this.
The camera model may be obtained from any device or database storing the camera model, and the embodiment of the present application does not limit the specific manner of obtaining the contour mapping relationship of the camera and the camera model.
Step S102: and acquiring the coordinates of a plurality of observation points in a camera coordinate system.
In the embodiments of the present application, the observation points can be selected arbitrarily, and for any observation point, its coordinates in the camera coordinate system can be acquired. The number of observation points can be set according to the actual application scenario; it can be understood that the more observation points there are, the more accurate the result of the subsequent loss-function optimization is likely to be.
Step S103: for the coordinates of any one observation point, calculating the camera incident angle of the any one observation point by using the coordinates of the any one observation point.
In this application embodiment, after obtaining the coordinates of the observation point, the camera incident angle of the observation point may be calculated based on the specific values of the observation point in the x-axis, the y-axis, and the z-axis, and this application embodiment does not limit the specific calculation manner.
Step S104: matching the imaging height of any observation point by using the camera incident angle of that observation point and the contour mapping relation.
In the embodiment of the present application, after the camera incident angle of an observation point is obtained, the imaging height corresponding to that incident angle can be matched in the contour mapping relation, thereby obtaining the imaging height of the observation point.
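In practice the contour mapping relation is a discrete table of (incident angle, imaging height) pairs, so matching the height for a computed incident angle may require interpolating between entries. A minimal sketch, assuming linear interpolation over a hypothetical table (the function name and data are illustrative, not from the patent):

```python
def match_imaging_height(alpha, mapping):
    """Look up the imaging height for incident angle `alpha` in a
    contour mapping relation given as (angle, height) pairs sorted
    by angle, linearly interpolating between adjacent entries."""
    for (a0, h0), (a1, h1) in zip(mapping, mapping[1:]):
        if a0 <= alpha <= a1:
            t = (alpha - a0) / (a1 - a0)
            return h0 + t * (h1 - h0)
    raise ValueError("incident angle outside the mapping range")

# Hypothetical three-entry contour mapping (angles in radians).
table = [(0.0, 0.0), (0.1, 10.0), (0.2, 19.5)]
```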
Step S105: modifying the internal reference coefficients in the camera model until the imaging heights calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
In the embodiment of the present application, the preset loss function may require, for example, that the difference or variance between the imaging height obtained from the camera incident angle of an observation point and the imaging height calculated from the internal reference coefficients in the camera model be smaller than a certain value; when this is satisfied, the internal reference coefficients in the camera model are accurate, and the internal reference coefficients in the camera calibration are thereby obtained.
By way of example, the embodiment of the present application provides a method for obtaining the internal reference coefficients when the camera model is a pinhole model.
In this method, the internal reference coefficients include: the focal length f and the radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
Based on the camera incident angle α, the imaging height h can be matched in the contour mapping relation.
The correlation between the imaging height h′ and the internal reference coefficients included in the camera model is:
h′ = r * (1 + k₁r² + k₂r⁴ + k₃r⁶) * f.
The preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point, and h′ᵢ represents the corresponding imaging height calculated by the camera model.
Thus, the internal reference coefficients satisfying the preset loss function can be obtained: the focal length f and the radial distortion coefficients k₁, k₂ and k₃, which may be used as the internal reference coefficients in the camera calibration.
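The pinhole-model fitting above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it only evaluates the loss on synthetic data generated from known coefficients (assuming r = tan α, which inverts α = tan⁻¹(r)); the arg min step would be handled by any least-squares optimizer.

```python
import math

def h_model(r, f, k1, k2, k3):
    # Pinhole-model imaging height: h' = r * (1 + k1*r^2 + k2*r^4 + k3*r^6) * f
    return r * (1 + k1*r**2 + k2*r**4 + k3*r**6) * f

def loss(params, samples):
    # Sum of squared differences between the height h matched in the
    # contour mapping relation and the model prediction h'.
    f, k1, k2, k3 = params
    return sum((h - h_model(math.tan(alpha), f, k1, k2, k3)) ** 2
               for alpha, h in samples)

# Synthetic contour mapping generated from known (hypothetical) coefficients.
true = (1000.0, -0.1, 0.01, -0.001)
samples = [(a, h_model(math.tan(a), *true)) for a in (0.1, 0.3, 0.5, 0.7)]
```

At the true coefficients the loss vanishes, and perturbing any coefficient increases it, which is the property the arg min search exploits.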
By way of example, the embodiment of the present application provides a method for obtaining the internal reference coefficients when the camera model is a fish-eye equidistant model.
In this method, the internal reference coefficients include: the focal length f and the refractive distortion coefficients k₁, k₂, k₃ and k₄.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
Based on the camera incident angle α, the imaging height h can be matched in the contour mapping relation.
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
h′ = α * (1 + k₁r² + k₂r⁴ + k₃r⁶ + k₄r⁸) * f.
The preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃, k₄) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point, and h′ᵢ represents the corresponding imaging height calculated by the camera model.
Thus, the internal reference coefficients satisfying the preset loss function can be obtained: the focal length f and the refractive distortion coefficients k₁, k₂, k₃ and k₄, which may be used as the internal reference coefficients in the camera calibration.
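A sketch of the fish-eye equidistant imaging-height formula given above (assuming, as before, r = tan α; illustrative only):

```python
import math

def h_fisheye(alpha, f, k1, k2, k3, k4):
    # Fish-eye equidistant model from this document:
    # h' = alpha * (1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8) * f,
    # with r = tan(alpha) inverting alpha = tan^-1(r).
    r = math.tan(alpha)
    return alpha * (1 + k1*r**2 + k2*r**4 + k3*r**6 + k4*r**8) * f

# With all distortion coefficients zero the model reduces to the
# ideal equidistant projection h' = f * alpha.
```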
By way of example, the embodiment of the present application provides a method for obtaining the internal reference coefficients when the camera model is a fish-eye unified sphere model.
In this method, the internal reference coefficients include: the focal length f, the refractive distortion coefficient ζ, and the radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
Based on the camera incident angle α, the imaging height h can be matched in the contour mapping relation.
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
l = r / (1 + ζ * √(1 + r²))
h′ = l * (1 + k₁l² + k₂l⁴ + k₃l⁶) * f.
The preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, ζ, k₁, k₂, k₃) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point, and h′ᵢ represents the corresponding imaging height calculated by the camera model.
Thus, the internal reference coefficients satisfying the preset loss function can be obtained: the focal length f, the refractive distortion coefficient ζ, and the radial distortion coefficients k₁, k₂ and k₃, which may be used as the internal reference coefficients in the camera calibration.
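A sketch of the unified sphere model. Note: the patent's definition of l is not reproduced legibly in this text, so the form l = sin α / (cos α + ζ) — equivalently l = r / (1 + ζ√(1 + r²)) with r = tan α, the standard unified-sphere projection — is an assumption:

```python
import math

def h_unified_sphere(alpha, f, zeta, k1, k2, k3):
    # Unified sphere model: the ray is projected through a unit sphere
    # whose projection center is offset by zeta (assumed form:
    # l = sin(alpha) / (cos(alpha) + zeta)), then scaled by the
    # radial distortion polynomial and the focal length.
    l = math.sin(alpha) / (math.cos(alpha) + zeta)
    return l * (1 + k1*l**2 + k2*l**4 + k3*l**6) * f

# With zeta = 0 and zero distortion coefficients the model reduces
# to the pinhole case h' = f * tan(alpha).
```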
In practice, sub-pixel calibration precision can be obtained by adopting the internal reference coefficients acquired in this manner.
In a specific application, after the internal reference coefficients in the camera calibration are obtained, the image shot by the camera can be undistorted by using the internal reference coefficients; for example, a straight line that appears as a curve in the distorted image can be restored to a straight line by using the distortion coefficients, so that an accurate image is obtained.
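Undistortion requires inverting the radial distortion polynomial, which has no closed form; a common approach is fixed-point iteration. A minimal sketch for the pinhole model's radial term (illustrative; the patent does not specify the inversion method):

```python
def undistort_radius(r_d, k1, k2, k3, iterations=20):
    """Invert r_d = r * (1 + k1*r^2 + k2*r^4 + k3*r^6) by fixed-point
    iteration, recovering the undistorted radius r."""
    r = r_d  # initial guess: the distorted radius itself
    for _ in range(iterations):
        factor = 1 + k1*r**2 + k2*r**4 + k3*r**6
        r = r_d / factor  # refine using the current distortion factor
    return r

# Distort a known radius, then check that undistortion recovers it.
k = (0.1, 0.01, 0.001)
r_true = 0.5
r_d = r_true * (1 + k[0]*r_true**2 + k[1]*r_true**4 + k[2]*r_true**6)
```

For moderate distortion the iteration converges quickly because the correction factor varies slowly with r.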
Alternatively, the position conversion of the camera may also be calculated using the internal reference coefficients. For example, a first image and a second image taken by the camera are acquired; feature points of the first image and the second image are calculated; and the position conversion of the camera is calculated using the internal reference coefficients, the positions of the feature points in the first image, and the positions of the feature points in the second image.
The feature points may be features present in both images; the embodiment of the present application does not limit the feature points. The rotation or translation of the feature points between the two images can be calculated through the internal reference coefficients, so as to obtain the position conversion of the camera.
It can be understood that, in a specific application, image stitching, monocular/binocular distance measurement, visual positioning, or the like may also be performed according to the internal reference coefficients, which is not specifically limited in the embodiment of the present application.
In summary, the embodiments of the present application provide a method and an apparatus for obtaining internal parameters in camera calibration, which can automatically obtain the internal reference coefficients of camera calibration according to the contour mapping relation of the camera, the camera model, and a loss function. No external equipment is required, the implementation is simple, no checkerboard needs to be distributed in space, and the obtained calibration internal parameters are accurate. Specifically, the contour mapping relation of the camera and the camera model can be obtained, where the contour mapping relation represents the relation between the camera incident angle and the imaging height, and the camera model includes the association between the imaging height and the internal reference coefficients; the coordinates of a plurality of observation points in the camera coordinate system are acquired; for the coordinates of any observation point, the camera incident angle of that observation point is calculated using its coordinates; the imaging height of that observation point is matched using its camera incident angle and the contour mapping relation; and the internal reference coefficients in the camera model are modified until the imaging heights calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
Fig. 5 is a schematic structural diagram of an embodiment of an apparatus for obtaining internal parameters in camera calibration provided in the present application. As shown in fig. 5, the apparatus for obtaining internal parameters in camera calibration provided in this embodiment includes:
an obtaining module 51, configured to obtain the contour mapping relation of a camera and a camera model, where the contour mapping relation is used for representing the relation between a camera incident angle and an imaging height, and the camera model includes the association between the imaging height and the internal reference coefficients; and to acquire the coordinates of a plurality of observation points in a camera coordinate system;
a calculating module 52, configured to calculate, for the coordinates of any observation point, the camera incident angle of that observation point by using its coordinates;
a matching module 53, configured to match the imaging height of any observation point by using the camera incident angle of that observation point and the contour mapping relation;
a training module 54, configured to modify the internal reference coefficients in the camera model until the imaging heights calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
Optionally, the apparatus further comprises:
and the distortion removing module is used for removing distortion of the image shot by the camera by utilizing the internal reference coefficient.
Optionally, the apparatus further comprises:
the position conversion calculation module is used for acquiring a first image and a second image shot by the camera; calculating characteristic points of the first image and the second image; and calculating the position conversion of the camera by using the internal reference coefficient, the position of the characteristic point in the first image and the position of the characteristic point in the second image.
Optionally, the internal reference coefficients include: the focal length f and the radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
h′ = r * (1 + k₁r² + k₂r⁴ + k₃r⁶) * f.
Optionally, the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
Optionally, the internal reference coefficients include: the focal length f and the refractive distortion coefficients k₁, k₂, k₃ and k₄.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
h′ = α * (1 + k₁r² + k₂r⁴ + k₃r⁶ + k₄r⁸) * f.
Optionally, the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃, k₄) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
Optionally, the internal reference coefficients include: the focal length f, the refractive distortion coefficient ζ, and the radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
l = r / (1 + ζ * √(1 + r²))
h′ = l * (1 + k₁l² + k₂l⁴ + k₃l⁶) * f.
Optionally, the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, ζ, k₁, k₂, k₃) = arg min(loss)
where n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
The embodiment of the application provides a method and an apparatus for obtaining internal parameters in camera calibration, which can automatically obtain the internal reference coefficients of camera calibration according to the contour mapping relation of the camera, the camera model, and a loss function. No external equipment is required, the implementation is simple, no checkerboard needs to be distributed in space, and the obtained calibration internal parameters are accurate. Specifically, the contour mapping relation of the camera and the camera model can be obtained, where the contour mapping relation represents the relation between the camera incident angle and the imaging height, and the camera model includes the association between the imaging height and the internal reference coefficients; the coordinates of a plurality of observation points in the camera coordinate system are acquired; for the coordinates of any observation point, the camera incident angle of that observation point is calculated using its coordinates; the imaging height of that observation point is matched using its camera incident angle and the contour mapping relation; and the internal reference coefficients in the camera model are modified until the imaging heights calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
The apparatus for obtaining internal parameters in camera calibration provided in the embodiments of the present application can be used to perform the methods shown in the corresponding embodiments, and the implementation manner and principle thereof are the same and will not be described again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 is a block diagram of an electronic device for the method of internal reference acquisition in camera calibration according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform a method of internal reference acquisition in camera calibration provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of internal reference acquisition in camera calibration provided herein.
The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of internal parameter acquisition in camera calibration in the embodiments of the present application (for example, the acquisition module 51, the calculation module 52, the matching module 53, and the training module 54 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, namely, implements the method of internal reference acquisition in camera calibration in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created through use of the electronic device for internal reference acquisition in camera calibration, and the like. Further, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 optionally includes memory remotely located from the processor 601, and these remote memories may be connected over a network to the electronic device for internal reference acquisition in camera calibration. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for acquiring internal parameters in camera calibration may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for internal reference acquisition in camera calibration; examples include a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, joystick, and the like. The output device 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiment of the application, the internal reference coefficients of camera calibration can be automatically acquired according to the contour mapping relation of the camera, the camera model, and a loss function. No external equipment is required, the implementation is simple, no checkerboard needs to be distributed in space, and the obtained calibration internal parameters are accurate. Specifically, the contour mapping relation of the camera and the camera model can be obtained, where the contour mapping relation represents the relation between the camera incident angle and the imaging height, and the camera model includes the association between the imaging height and the internal reference coefficients; the coordinates of a plurality of observation points in the camera coordinate system are acquired; for the coordinates of any observation point, the camera incident angle of that observation point is calculated using its coordinates; the imaging height of that observation point is matched using its camera incident angle and the contour mapping relation; and the internal reference coefficients in the camera model are modified until the imaging heights calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points satisfy a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; the present application is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A method for obtaining internal parameters in camera calibration, the method comprising:
acquiring a contour mapping relation and a camera model of a camera; the contour mapping relation is used for representing the relation between a camera incident angle and an imaging height, and the camera model comprises the incidence relation between the imaging height and an internal parameter coefficient;
acquiring coordinates of a plurality of observation points in a camera coordinate system;
calculating a camera incident angle of any one observation point by using the coordinate of the any one observation point for the coordinate of the any one observation point;
matching the imaging height of any observation point by using the camera incidence angle of any observation point and the contour mapping relation;
and modifying the internal reference coefficients in the camera model until the imaging height calculated based on the modified internal reference coefficients and the imaging heights of the plurality of observation points meet a preset loss function, so as to obtain the internal reference coefficients in the camera calibration.
2. The method of claim 1, further comprising:
and carrying out distortion removal on the image shot by the camera by utilizing the internal reference coefficient.
3. The method of claim 1, further comprising:
acquiring a first image and a second image shot by the camera;
calculating characteristic points of the first image and the second image;
and calculating the position conversion of the camera by using the internal reference coefficient, the position of the characteristic point in the first image and the position of the characteristic point in the second image.
4. The method according to any of claims 1-3, wherein the internal reference coefficients comprise: a focal length f and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
h′ = r * (1 + k₁r² + k₂r⁴ + k₃r⁶) * f.
5. The method of claim 4, wherein the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃) = arg min(loss)
wherein n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
6. The method according to any of claims 1-3, wherein the internal reference coefficients comprise: a focal length f and refractive distortion coefficients k₁, k₂, k₃ and k₄.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
h′ = α * (1 + k₁r² + k₂r⁴ + k₃r⁶ + k₄r⁸) * f.
7. The method of claim 6, wherein the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, k₁, k₂, k₃, k₄) = arg min(loss)
wherein n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
8. The method according to any of claims 1-3, wherein the internal reference coefficients comprise: a focal length f, a refractive distortion coefficient ζ, and radial distortion coefficients k₁, k₂ and k₃.
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
a = x / z
b = y / z
r = √(a² + b²)
α = tan⁻¹(r)
The correlation between the imaging height h′ included in the camera model and the internal reference coefficients is:
l = r / (1 + ζ * √(1 + r²))
h′ = l * (1 + k₁l² + k₂l⁴ + k₃l⁶) * f.
9. The method of claim 8, wherein the preset loss function is:
loss = Σᵢ₌₁ⁿ (hᵢ − h′ᵢ)²
(f, ζ, k₁, k₂, k₃) = arg min(loss)
wherein n represents the number of mapping relations in the contour mapping relation, n is a natural number, and hᵢ represents the imaging height matched in the contour mapping relation for the camera incident angle αᵢ of the i-th observation point.
10. An apparatus for obtaining internal parameters in camera calibration, comprising:
the acquisition module is used for acquiring a camera model and a contour mapping relation of a camera, the contour mapping relation representing the relation between a camera incident angle and an imaging height, and the camera model comprising the relation between the imaging height and the internal parameter coefficients; and for acquiring coordinates of a plurality of observation points in a camera coordinate system;
the calculation module is used for calculating, for the coordinates of any observation point, the camera incident angle of that observation point from its coordinates;
the matching module is used for matching the imaging height of any observation point by using the camera incident angle of that observation point and the contour mapping relation;
and the training module is used for modifying the internal parameter coefficients in the camera model until the imaging heights calculated based on the modified internal parameter coefficients and the imaging heights of the plurality of observation points meet a preset loss function, so as to obtain the internal parameter coefficients in the camera calibration.
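A minimal sketch of the training module's loop, under stated assumptions: it uses the radial model of claim 13, h′ = r*(1 + k1r² + k2r⁴ + k3r⁶)*f, and plain gradient descent on the reparameterized coefficients c = (f, f·k1, f·k2, f·k3), in which the model is linear and the squared loss is convex. All names, the learning rate, and the stopping threshold are illustrative, not from the patent:

```python
def train_intrinsics(r_list, h_list, lr=0.2, iters=20000, tol=1e-6):
    """Iteratively modify c = (f, f*k1, f*k2, f*k3) until the mean
    squared error between model heights h' and matched heights h
    meets the preset loss criterion (or the iteration budget ends)."""
    c = [0.0, 0.0, 0.0, 0.0]
    n = len(r_list)
    loss = float("inf")
    for _ in range(iters):
        grad = [0.0, 0.0, 0.0, 0.0]
        loss = 0.0
        for r, h in zip(r_list, h_list):
            phi = (r, r**3, r**5, r**7)                  # h' = c . phi
            e = sum(cj * pj for cj, pj in zip(c, phi)) - h
            loss += e * e / n
            for j in range(4):
                grad[j] += 2.0 * e * phi[j] / n
        if loss < tol:                                   # loss criterion met
            break
        c = [cj - lr * gj for cj, gj in zip(c, grad)]
    f = c[0]
    return f, [cj / f for cj in c[1:]], loss
```

Because the loss is a convex quadratic in c, this descent is guaranteed to reduce the loss for a sufficiently small learning rate; a practical implementation would use a linear solver or a Levenberg-Marquardt step instead.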
11. The apparatus of claim 10, further comprising:
and the distortion removal module is used for removing distortion from images shot by the camera by using the internal parameter coefficients.
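The distortion removal step can be sketched by inverting the radial model of claim 13: given an observed imaging height h, recover the undistorted normalized radius r by Newton's method on g(r) = r*(1 + k1r² + k2r⁴ + k3r⁶)*f − h. Function and parameter names are illustrative, not from the patent:

```python
def undistort_radius(h_obs, f, k1, k2, k3, iters=20):
    """Solve r*(1 + k1*r^2 + k2*r^4 + k3*r^6)*f = h_obs for r
    with Newton's method, starting from the distortion-free guess."""
    r = h_obs / f                       # initial guess: ignore distortion
    for _ in range(iters):
        p = 1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6
        g = r * p * f - h_obs           # residual of the radial model
        dp = 2.0 * k1 * r + 4.0 * k2 * r**3 + 6.0 * k3 * r**5
        dg = (p + r * dp) * f           # derivative dg/dr
        r -= g / dg
    return r
```

Applied per pixel (after converting pixel coordinates to an imaging height), this recovers the undistorted ray direction via α = tan⁻¹(r).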
12. The apparatus of claim 10, further comprising:
the position conversion calculation module is used for acquiring a first image and a second image shot by the camera; detecting feature points in the first image and the second image; and calculating the position conversion of the camera by using the internal parameter coefficients, the positions of the feature points in the first image, and the positions of the feature points in the second image.
13. The apparatus according to any one of claims 10-12, wherein the internal parameter coefficients comprise: a focal length f and radial distortion coefficients k1, k2 and k3;
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
x′ = x/z
y′ = y/z
r = √(x′² + y′²)
α = tan⁻¹(r)
the relation between the imaging height h′ included in the camera model and the internal parameter coefficients is as follows:
h′ = r*(1 + k1r² + k2r⁴ + k3r⁶)*f.
14. the apparatus of claim 13, wherein the predetermined loss function is:
loss = Σᵢ₌₁ⁿ (h′ᵢ − hᵢ)²
(f, k1, k2, k3) = arg min(loss)
where n is the number of mapping pairs in the contour mapping relation, n being a natural number, and h is the imaging height matched to α in the contour mapping relation.
15. The apparatus according to any one of claims 10-12, wherein the internal parameter coefficients comprise: a focal length f and refractive distortion coefficients k1, k2, k3 and k4;
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
x′ = x/z
y′ = y/z
r = √(x′² + y′²)
α = tan⁻¹(r)
the relation between the imaging height h′ included in the camera model and the internal parameter coefficients is as follows:
h′ = α*(1 + k1r² + k2r⁴ + k3r⁶ + k4r⁸)*f.
16. the apparatus of claim 15, wherein the predetermined loss function is:
loss = Σᵢ₌₁ⁿ (h′ᵢ − hᵢ)²
(f, k1, k2, k3, k4) = arg min(loss)
where n is the number of mapping pairs in the contour mapping relation, n being a natural number, and h is the imaging height matched to α in the contour mapping relation.
17. The apparatus according to any one of claims 10-12, wherein the internal parameter coefficients comprise: a focal length f, a refractive distortion coefficient ζ, and radial distortion coefficients k1, k2 and k3;
For the coordinates (x, y, z) of any observation point, the camera incident angle α of that observation point is calculated from its coordinates according to the following formulas:
x′ = x/z
y′ = y/z
r = √(x′² + y′²)
α = tan⁻¹(r)
the relation between the imaging height h′ included in the camera model and the internal parameter coefficients is as follows:
[equation image defining the intermediate variable l in terms of r and ζ; not recoverable from the text]
h′ = l*(1 + k1l² + k2l⁴ + k3l⁶)*f.
18. the apparatus of claim 17, wherein the predetermined loss function is:
loss = Σᵢ₌₁ⁿ (h′ᵢ − hᵢ)²
(f, ζ, k1, k2, k3) = arg min(loss)
where n is the number of mapping pairs in the contour mapping relation, n being a natural number, and h is the imaging height matched to α in the contour mapping relation.
19. An electronic device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-9.
CN202010151835.2A 2020-03-06 2020-03-06 Method and device for acquiring internal parameters in camera calibration Pending CN111369632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010151835.2A CN111369632A (en) 2020-03-06 2020-03-06 Method and device for acquiring internal parameters in camera calibration

Publications (1)

Publication Number Publication Date
CN111369632A true CN111369632A (en) 2020-07-03

Family

ID=71211188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010151835.2A Pending CN111369632A (en) 2020-03-06 2020-03-06 Method and device for acquiring internal parameters in camera calibration

Country Status (1)

Country Link
CN (1) CN111369632A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509261A (en) * 2011-10-10 2012-06-20 宁波大学 Distortion correction method for fisheye lens
CN107248178A (en) * 2017-06-08 2017-10-13 上海赫千电子科技有限公司 A kind of fisheye camera scaling method based on distortion parameter
CN109461126A (en) * 2018-10-16 2019-03-12 重庆金山医疗器械有限公司 A kind of image distortion correction method and system
CN110378963A (en) * 2018-12-04 2019-10-25 北京京东振世信息技术有限公司 Camera parameter scaling method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988215A (en) * 2020-08-11 2020-11-24 上海连尚网络科技有限公司 Method and equipment for pushing user
CN113115017A (en) * 2021-03-05 2021-07-13 上海炬佑智能科技有限公司 3D imaging module parameter inspection method and 3D imaging device
CN113115017B (en) * 2021-03-05 2022-03-18 上海炬佑智能科技有限公司 3D imaging module parameter inspection method and 3D imaging device
CN113587895A (en) * 2021-07-30 2021-11-02 杭州三坛医疗科技有限公司 Binocular distance measuring method and device

Similar Documents

Publication Publication Date Title
CN112652016B (en) Point cloud prediction model generation method, pose estimation method and pose estimation device
CN111274343B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN112288825B (en) Camera calibration method, camera calibration device, electronic equipment, storage medium and road side equipment
WO2018119889A1 (en) Three-dimensional scene positioning method and device
CN112132829A (en) Vehicle information detection method and device, electronic equipment and storage medium
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
WO2018128674A1 (en) Systems and methods for mapping based on multi-journey data
CN111369632A (en) Method and device for acquiring internal parameters in camera calibration
JP2018535402A (en) System and method for fusing outputs of sensors having different resolutions
CN111612852B (en) Method and apparatus for verifying camera parameters
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN108604379A (en) System and method for determining the region in image
CN112101209B (en) Method and apparatus for determining world coordinate point cloud for roadside computing device
CN110794844B (en) Automatic driving method, device, electronic equipment and readable storage medium
CN111739005B (en) Image detection method, device, electronic equipment and storage medium
CN111612753B (en) Three-dimensional object detection method and device, electronic equipment and readable storage medium
CN111721281B (en) Position identification method and device and electronic equipment
CN112344855B (en) Obstacle detection method and device, storage medium and drive test equipment
CN112509057A (en) Camera external parameter calibration method and device, electronic equipment and computer readable medium
CN112561978A (en) Training method of depth estimation network, depth estimation method of image and equipment
CN111666876A (en) Method and device for detecting obstacle, electronic equipment and road side equipment
CN112967344A (en) Method, apparatus, storage medium, and program product for camera external reference calibration
CN111753739A (en) Object detection method, device, equipment and storage medium
CN112668428A (en) Vehicle lane change detection method, roadside device, cloud control platform and program product
CN112487979A (en) Target detection method, model training method, device, electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200703)