CN112233185A - Camera calibration method, image registration method, camera device and storage device - Google Patents


Info

Publication number
CN112233185A
CN112233185A (application CN202011019202.2A)
Authority
CN
China
Prior art keywords
camera
preset
imaging
parameters
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011019202.2A
Other languages
Chinese (zh)
Other versions
CN112233185B (en)
Inventor
王子彤
王廷鸟
刘晓沐
王松
张东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202011019202.2A priority Critical patent/CN112233185B/en
Publication of CN112233185A publication Critical patent/CN112233185A/en
Application granted granted Critical
Publication of CN112233185B publication Critical patent/CN112233185B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a camera calibration method, an image registration method, a camera device and a storage device. The camera calibration method includes the following steps: acquiring the spatial positions of a plurality of preset points within a preset view field range, where the preset view field range is the common view field range of a first camera and a second camera; obtaining first imaging positions at which the plurality of preset points project onto the first camera by using the camera parameters of the first camera and the spatial positions; obtaining second imaging positions at which the plurality of preset points project onto the second camera by using the camera parameters of the second camera and the spatial positions; and determining imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the plurality of preset points. According to this scheme, the image registration effect can be improved.

Description

Camera calibration method, image registration method, camera device and storage device
Technical Field
The present application relates to the field of optical technologies, and in particular, to a camera calibration method, an image registration method, a camera device and a storage device.
Background
Image registration, one of the most important techniques in image stitching, image fusion, stereo vision, three-dimensional reconstruction, depth estimation and image measurement, has been widely used in image processing. Image registration generally refers to superimposing images captured by different cameras, using the imaging transformation parameters between the cameras, so that the images reach a completely overlapped state.
Accurate image registration is an important precondition for applications such as image stitching and image fusion. Existing registration methods usually depend on the scene containing sufficiently prominent features, which serve as the objects to be extracted or selected. For scenes with less prominent features, existing registration methods therefore generally register poorly. In view of this, how to improve the image registration effect has become an urgent problem to be solved.
Disclosure of Invention
The present application mainly solves this technical problem by providing a camera calibration method, an image registration method, a camera device and a storage device, with which the image registration effect can be improved.
In order to solve the above problem, a first aspect of the present application provides a camera calibration method, including: acquiring the spatial positions of a plurality of preset points within a preset view field range, where the preset view field range is the common view field range of a first camera and a second camera; obtaining first imaging positions at which the plurality of preset points project onto the first camera by using the camera parameters of the first camera and the spatial positions; obtaining second imaging positions at which the plurality of preset points project onto the second camera by using the camera parameters of the second camera and the spatial positions; and determining imaging conversion parameters between the first camera and the second camera based on the first imaging positions and the second imaging positions of the plurality of preset points.
In order to solve the above problem, a second aspect of the present application provides an image registration method, including: acquiring imaging conversion parameters between a first camera and a second camera; wherein the imaging conversion parameters are obtained by the camera calibration method in the first aspect; and registering a first image shot by the first camera with a second image shot by the second camera by using the imaging conversion parameters.
In order to solve the above problem, a third aspect of the present application provides a camera device, which includes a memory and a processor coupled to each other, where the memory stores program instructions, and the processor is configured to execute the program instructions to implement the camera calibration method in the first aspect.
In order to solve the above problem, a fourth aspect of the present application provides a storage device storing program instructions executable by a processor, the program instructions being used in the camera calibration method in the first aspect.
According to the above scheme, the spatial positions of a plurality of preset points within a preset view field range are acquired, where the preset view field range is the common view field range of the first camera and the second camera. The first imaging positions at which the preset points project onto the first camera are obtained using the camera parameters of the first camera and the spatial positions, and the second imaging positions at which they project onto the second camera are obtained using the camera parameters of the second camera and the spatial positions. The imaging conversion parameters between the first camera and the second camera are then determined based on the first and second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained using only the camera parameters and the spatial positions of the preset points, without extracting any scene features, so the accuracy of the imaging conversion parameters can be improved, which in turn benefits subsequent image registration using these parameters.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a camera calibration method according to the present application;
FIG. 2 is a diagram illustrating an embodiment of the spatial positions of the preset points;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of a camera calibration method according to the present application;
FIG. 4 is a schematic flow chart diagram illustrating an embodiment of an image registration method of the present application;
FIG. 5 is a schematic diagram of a frame of an embodiment of a camera calibration apparatus according to the present application;
FIG. 6 is a block diagram of an embodiment of an image registration apparatus according to the present application;
FIG. 7 is a schematic diagram of a frame of an embodiment of the imaging device of the present application;
FIG. 8 is a block diagram of an embodiment of a memory device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a camera calibration method according to an embodiment of the present application. Specifically, the method may include the steps of:
step S11: and acquiring the spatial positions of a plurality of preset points within a preset view field range.
In the embodiment of the present disclosure, the preset view field range is the common view field range of the first camera and the second camera. That is, a preset point located within the view field range of the first camera, and thus able to be shot by the first camera, is also located within the view field range of the second camera and can be shot by the second camera.
In one implementation scenario, the first camera and the second camera may be integrated into the same imaging device. For example, the first camera and the second camera may be arranged up and down in the image pickup device, or the first camera and the second camera may be arranged left and right in the image pickup device, which may be specifically set according to the actual application requirement, and is not limited herein.
In another implementation scenario, the types of the first camera and the second camera may be set according to actual application needs. For example, the first camera may be a wide-angle camera and the second camera may be a tele-camera; or the first camera and the second camera can both be wide-angle cameras; alternatively, the first camera and the second camera may both be telephoto cameras, and are not limited herein.
In yet another implementation scenario, the number of the preset points may be set according to the actual application requirements. Specifically, the number of preset points may be greater than or equal to 4, for example 4, 5, 6 or 7, which are not enumerated one by one herein.
The spatial position of a preset point is its three-dimensional coordinate; for convenience of description, the i-th preset point Ki among the plurality of preset points can be expressed as (xi, yi, zi). In an implementation scenario, please refer to fig. 2, which is a schematic diagram of an embodiment of the spatial positions of the preset points. As shown in fig. 2, the preset points all lie on a preset plane within the preset view field range, and the preset plane is perpendicular to the optical axis of the camera device. The optical axis of the camera device may be the optical axis of the first camera (dotted line A in the drawing) or the optical axis of the second camera (dotted line B in the drawing); in addition, when the optical axis of the first camera is not parallel to that of the second camera, the optical axis of the camera device may also be taken as the bisector of the included angle formed by the two optical axes, which may be set according to actual application requirements and is not limited herein. In a specific implementation scenario, the distance from the camera device to the preset plane may be denoted as d, so that the spatial position of the i-th preset point may be written as (xi, yi, d). By arranging the preset points on a preset plane perpendicular to the optical axis of the camera device, the accuracy of the subsequently calculated imaging conversion parameters can be improved.
In a specific implementation scenario, when the view field ranges of the first camera and the second camera are known, their common view field range can be determined from the positional relationship between the two cameras, and several preset planes perpendicular to the optical axis of the camera device can be determined within that common range. A plurality of preset points can then be selected on one of these preset planes and their spatial positions acquired, so that the imaging conversion parameters of the two cameras can be calculated subsequently. Therefore no scene features need to be extracted, no registration image needs to be provided, and no complicated preparatory work is required: only several virtual points within the preset view field range need to be selected as preset points. The method is convenient to use and can suit scenes with less prominent features.
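Since the preset points are virtual and merely need to lie on a plane perpendicular to the optical axis inside the common view field, selecting them can be as simple as the following sketch (Python with NumPy; the function name and the numeric dimensions are illustrative assumptions, not part of the patent):

```python
import numpy as np

def make_preset_points(half_width, half_height, d):
    """Choose four virtual preset points - the corners of a rectangle on
    the plane z = d, which is perpendicular to the optical axis (taken as
    the z axis here). Their spatial positions have the form (xi, yi, d)."""
    return np.array([
        [-half_width, -half_height, d],
        [ half_width, -half_height, d],
        [ half_width,  half_height, d],
        [-half_width,  half_height, d],
    ], dtype=float)

pts = make_preset_points(1.0, 0.5, 5.0)  # 4 points on the plane z = 5
```

Any four or more non-collinear points on the plane would serve; a rectangle is simply convenient.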
Step S12: and obtaining a first imaging position of the plurality of preset points projected on the first camera by using the camera parameters and the spatial position of the first camera.
In one implementation scenario, the camera parameters may specifically include internal element parameters, which may include, for example, the lens x-axis focal length fx and y-axis focal length fy, and may also include the x-axis optical center position cx and the y-axis optical center position cy. In a specific implementation scenario, the internal element parameters may be represented by the following matrix K:

    K = [ fx/s   0      cx
          0      fy/s   cy
          0      0      1  ] …… (1)

In the above formula (1), s represents the pixel pitch of the camera photosensitive device (sensor).
In an implementation scenario, the first camera may be used as a reference camera, so that the first imaging positions of the plurality of preset points can be obtained by using the internal element parameters and the spatial position of the first camera.
In a specific implementation scenario, the spatial positions of the preset points may be multiplied by the internal element parameters of the first camera, respectively, to obtain the corresponding first imaging positions. Specifically, this can be represented by the following formula:

    X1i = K1 * Pi …… (2)

In the above formula (2), K1 denotes the internal element parameters of the first camera, Pi denotes the spatial position of the i-th preset point among the plurality of preset points, and X1i denotes the first imaging position at which the i-th preset point projects onto the first camera.
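The projection of formula (2) can be sketched as follows (NumPy; the intrinsic values are hypothetical and are assumed to be expressed in pixels, i.e. the focal lengths are already divided by the pixel pitch s):

```python
import numpy as np

# Hypothetical internal element parameters of the first camera.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K1 = np.array([[fx, 0.0, cx],
               [0.0, fy, cy],
               [0.0, 0.0, 1.0]])

def project(K, P):
    """Project a spatial position P = (x, y, z) through the internal
    element parameters K, then normalise by depth to obtain the imaging
    position (u, v) in pixels."""
    x = K @ P
    return x[:2] / x[2]

# A preset point on the optical axis lands on the optical centre (cx, cy).
u, v = project(K1, np.array([0.0, 0.0, 5.0]))
```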
In another specific implementation scenario, the second camera may also be used as the reference camera according to actual application needs, and steps similar to those described above may be adopted to obtain the second imaging positions at which the plurality of preset points project onto the second camera, which is not described in detail herein again.
Step S13: and obtaining a second imaging position of the plurality of preset points projected on the second camera by using the camera parameters and the spatial position of the second camera.
In one implementation scenario, the camera parameters may also include external position parameters, for example a rotation parameter R and a translation parameter T from the first camera to the second camera. Specifically, the installation information of the camera device can be obtained after (or before) the camera device is installed, so that the external position parameters of the camera device can be derived from this installation information. Then, on the premise that the first camera serves as the reference camera, the internal element parameters and external position parameters of the second camera can be used to obtain the second imaging positions of the preset points.
In a specific implementation scenario, the spatial positions of the preset points may be multiplied by the internal element parameters and the external position parameters of the second camera, respectively, to obtain the corresponding second imaging positions. Specifically, this can be represented by the following formula:

    X2i = K2 * (R * Pi + T) …… (3)

In the above formula (3), K2 denotes the internal element parameters of the second camera, R denotes the rotation parameter, T denotes the translation parameter, Pi denotes the spatial position of the i-th preset point among the plurality of preset points, and X2i denotes the second imaging position at which the i-th preset point projects onto the second camera.
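Formula (3) adds the external position parameters. A minimal sketch (NumPy; the intrinsics, R and T are illustrative values, here a pure 5 cm horizontal baseline between two parallel cameras):

```python
import numpy as np

K2 = np.array([[1600.0, 0.0, 320.0],   # hypothetical tele-camera intrinsics
               [0.0, 1600.0, 240.0],
               [0.0, 0.0, 1.0]])
R = np.eye(3)                    # optical axes assumed parallel here
T = np.array([-0.05, 0.0, 0.0])  # 5 cm baseline, first camera -> second

def project_second(K, R, T, P):
    """Second imaging position of a preset point P: rotate/translate the
    point into the second camera's frame, then apply its intrinsics and
    normalise by depth."""
    x = K @ (R @ P + T)
    return x[:2] / x[2]

u2, v2 = project_second(K2, R, T, np.array([0.0, 0.0, 5.0]))
```

For the on-axis point at depth 5 m, the 5 cm baseline shifts the projection horizontally by 1600 * 0.05 / 5 = 16 pixels.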
In another specific implementation scenario, the installation information of the camera device may specifically include its installation angle. The installation angle may be used to determine a first included angle α between the optical axis of the camera device and a first preset plane, a second included angle β between the optical axis and a second preset plane, and a third included angle γ between the optical axis and a third preset plane, where any two of the first, second and third preset planes are perpendicular to each other. The first included angle α, the second included angle β and the third included angle γ may then be converted by a preset conversion method to obtain the rotation parameter. Specifically, this can be represented by the following formula:
    R = Rz(γ) * Ry(β) * Rx(α) …… (4)

In the above formula (4), Rx(α) is the rotation by the roll angle α, taken in the right-hand-screw direction (i.e., counterclockwise in the yz plane); Ry(β) is the rotation by the pitch angle β, taken in the right-hand-screw direction (i.e., counterclockwise in the zx plane); and Rz(γ) is the rotation by the yaw angle γ, taken in the right-hand-screw direction (i.e., counterclockwise in the xy plane). The first preset plane may be the ground, and the installation angle may be the included angle between the optical axis of the camera device and the ground.
In another specific implementation scenario, when the installation angle is not 0, that is, when the optical axis of the camera device is not parallel to the ground, the spatial positions may be updated using the rotation parameter, and the first and second imaging positions of the preset points are then obtained from the updated spatial positions by the steps described above. Specifically, the rotation parameter may be multiplied with the spatial position, thereby updating it. In a specific implementation scenario, see the following equation:

    Pi = R * Ki …… (5)

In the above formula (5), R denotes the rotation parameter, Ki denotes the original spatial position, and Pi denotes the updated spatial position.
In another specific implementation scenario, the second camera may also be used as the reference camera according to actual application needs, with steps similar to those described above: the second imaging positions of the preset points are obtained using the internal element parameters and spatial positions of the second camera, and the first imaging positions are obtained using the internal element parameters and external position parameters of the first camera. Reference may be made to the foregoing description for details, which are not repeated herein.
In an implementation scenario, the steps S12 and S13 may be executed in a sequential order, for example, step S12 is executed first, and then step S13 is executed, or step S13 is executed first, and then step S12 is executed. In another implementation scenario, the above steps S12 and S13 may also be performed simultaneously. The setting can be specifically performed according to the actual application requirements, and is not limited herein.
Step S14: and determining imaging conversion parameters between the first camera and the second camera based on the first imaging position and the second imaging position of the plurality of preset points.
In an implementation scenario, an objective function related to the imaging conversion parameter may be constructed using the first and second imaging positions of the preset points, and then solved by a preset solving method to obtain the imaging conversion parameter. Specifically, for each preset point, the relation between its first and second imaging positions through the imaging conversion parameter may be expressed as follows:

    X1i = H * X2i …… (6)

In the above formula (6), H denotes the imaging conversion parameter, X1i denotes the first imaging position of the i-th preset point, and X2i denotes the second imaging position of the i-th preset point. The plurality of preset points yields a system of equations in the imaging conversion parameter; solving this system gives the value of every element of the imaging conversion parameter. For example, when the imaging conversion parameter is a matrix, the values of the elements in the matrix can be obtained. The size of the matrix may be set according to the actual application requirements, for example 3 × 3, which is not limited herein.
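With at least four preset points, the elements of a 3 × 3 H in formula (6) can be recovered by stacking the point equations into a homogeneous system and solving it, for example via SVD. The sketch below uses the standard direct-linear-transform construction (NumPy); the patent leaves the concrete "preset solving manner" open, so this is one possible choice, not the claimed method:

```python
import numpy as np

def solve_conversion_parameter(second_pts, first_pts):
    """Solve X1 ~ H * X2 (formula (6)) for the 3x3 matrix H, given >= 4
    corresponding imaging positions (u, v) in the two cameras."""
    A = []
    for (x, y), (u, v) in zip(second_pts, first_pts):
        # Each correspondence contributes two homogeneous linear equations.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so the bottom-right element is 1

# Sanity check: a pure shift by (4, -2) pixels should be recovered exactly.
x2 = [(0.0, 0.0), (100.0, 0.0), (100.0, 80.0), (0.0, 80.0)]
x1 = [(u + 4.0, v - 2.0) for u, v in x2]
H = solve_conversion_parameter(x2, x1)
```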
In a specific implementation scenario, when the scene includes more than two cameras, the above steps may also be used to obtain the imaging conversion parameters between any two of them. For example, when a scene includes three cameras, the imaging conversion parameters between the first and second cameras, between the second and third cameras, and between the first and third cameras may all be obtained; the cases of four, five, six or more cameras are analogous and need not be enumerated here, reference being made to the foregoing step description. In addition, the more than two cameras may also be integrated in the same camera device, for example three cameras in one device, or four cameras in one device, which may be set according to actual application needs and is not limited herein.
According to the above scheme, the spatial positions of a plurality of preset points within a preset view field range are acquired, where the preset view field range is the common view field range of the first camera and the second camera. The first imaging positions at which the preset points project onto the first camera are obtained using the camera parameters of the first camera and the spatial positions, and the second imaging positions at which they project onto the second camera are obtained using the camera parameters of the second camera and the spatial positions. The imaging conversion parameters between the first camera and the second camera are then determined based on the first and second imaging positions of the preset points. In this way, the imaging conversion parameters between the cameras can be obtained using only the camera parameters and the spatial positions of the preset points, without extracting any scene features, so the accuracy of the imaging conversion parameters can be improved, which in turn benefits subsequent image registration using these parameters.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating another embodiment of a camera calibration method according to the present application. The method specifically comprises the following steps:
step S31: based on the imaging device mounting information, an external position parameter of the imaging device is obtained.
In the embodiment of the present disclosure, the image pickup device is integrated with the first camera and the second camera. The mounting information may specifically include a mounting angle of the image pickup device.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
Step S32: and acquiring the spatial positions of a plurality of preset points within a preset view field range.
In the embodiment of the present disclosure, the preset field range is a common field range of the first camera and the second camera.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
In an implementation scenario, the steps S31 and S32 may be executed in a sequential order, for example, step S31 is executed first, and then step S32 is executed, or step S32 is executed first, and then step S31 is executed. In another implementation scenario, the above steps S31 and S32 may be performed simultaneously. The setting can be specifically performed according to the actual application requirements, and is not limited herein.
Step S33: and judging whether the installation angle is 0, if not, executing the step S34, otherwise, executing the step S35.
Step S34: the spatial position is updated with the rotation parameters.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
Step S35: and taking the first camera as a reference camera, and obtaining first imaging positions of a plurality of preset points by using the internal element parameters and the spatial position of the first camera.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
Step S36: and obtaining second imaging positions of a plurality of preset points by using the internal element parameters and the external position parameters of the second camera.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
In an implementation scenario, the second camera may also be used as a reference camera, so that the internal element parameters and the spatial position of the second camera may be used to obtain second imaging positions of a plurality of preset points, and the internal element parameters and the external position parameters of the first camera may be used to obtain first imaging positions of a plurality of preset points. The setting can be specifically performed according to the actual application requirements, and is not limited herein.
Step S37: and determining imaging conversion parameters between the first camera and the second camera based on the first imaging position and the second imaging position of the plurality of preset points.
Reference may be made to the related description in the foregoing embodiments, and details are not repeated herein.
Different from the foregoing embodiment, the external position parameters of the camera device are obtained from its installation information, and the spatial positions of the preset points within the preset view field range are acquired. Whether the installation angle is 0 is then judged: when it is not 0, the spatial positions are updated using the rotation parameter; when it is 0, the first camera is directly used as the reference camera. The first imaging positions of the preset points are obtained using the internal element parameters and spatial positions of the first camera, the second imaging positions are obtained using the internal element parameters and external position parameters of the second camera, and the imaging conversion parameters between the first camera and the second camera are determined from these first and second imaging positions. In this way, only the camera parameters and the spatial positions of the preset points are needed, and the imaging conversion parameters between the cameras can be obtained without extracting any scene features, so the accuracy of the imaging conversion parameters can be improved, which in turn benefits subsequent image registration using them.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of an image registration method according to the present application. Specifically, the method may include the steps of:
step S41: and acquiring imaging conversion parameters between the first camera and the second camera.
In the embodiment of the present disclosure, the imaging conversion parameter is obtained through the steps in any of the above embodiments of the camera calibration method. Specifically, reference may be made to relevant steps in the foregoing embodiments, which are not described herein again.
In one implementation scenario, the first camera and the second camera may be integrated into the same imaging device. For example, the first camera and the second camera may be arranged up and down in the image pickup device, or the first camera and the second camera may be arranged left and right in the image pickup device, which may be specifically set according to the actual application requirement, and is not limited herein.
Step S42: and registering a first image shot by the first camera with a second image shot by the second camera by using the imaging conversion parameters.
In an implementation scenario, when the first camera is used as the reference camera in the process of obtaining the imaging conversion parameter, the pixel coordinates of the pixel points in the first image captured by the first camera may be multiplied by the imaging conversion parameter to obtain the pixel coordinates of the corresponding pixel points in the second image captured by the second camera. In this way, the pixel points in the first image are aligned with the corresponding pixel points in the second image, so that the first image captured by the first camera and the second image captured by the second camera can be registered.
In another implementation scenario, when the second camera is used as the reference camera in the process of obtaining the imaging conversion parameter, the pixel coordinates of the pixel points in the second image captured by the second camera may be multiplied by the imaging conversion parameter to obtain the pixel coordinates of the corresponding pixel points in the first image captured by the first camera. In this way, the pixel points in the second image are aligned with the corresponding pixel points in the first image, so that the second image captured by the second camera can be registered with the first image captured by the first camera.
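As an illustrative sketch only (the disclosure does not specify a data format), assuming the imaging conversion parameter is a 3x3 homography matrix applied to homogeneous pixel coordinates, the multiplication described above could be carried out as follows; the function name `warp_points` and the NumPy-based implementation are hypothetical:

```python
import numpy as np

def warp_points(H, pts):
    """Map pixel coordinates through a 3x3 homography H.

    pts: (N, 2) array of (x, y) pixel coordinates.
    Returns the (N, 2) array of mapped pixel coordinates.
    """
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])          # lift to homogeneous coordinates
    mapped = homog @ H.T                    # multiply by the conversion parameter
    return mapped[:, :2] / mapped[:, 2:3]   # perspective division back to pixels

# The identity homography leaves every pixel coordinate unchanged.
H = np.eye(3)
print(warp_points(H, [[10.0, 20.0]]))  # [[10. 20.]]
```

With a pure translation homography, each pixel coordinate is simply shifted, which matches the alignment behavior described above.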
In yet another implementation scenario, after the first image and the second image are registered, the first image and the second image may be subjected to stitching processing, fusion processing, and the like based on the registration result, which may be specifically set according to the actual application requirement, and is not limited herein.
According to the scheme, the imaging conversion parameters are obtained through the steps in the camera calibration method embodiment, and the accuracy of the imaging conversion parameters can be improved, so that the registration effect can be improved when the first image shot by the first camera is registered with the second image shot by the second camera based on the imaging conversion parameters.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of a camera calibration device 50 according to the present application. The camera calibration device 50 comprises a position acquisition module 51, a first projection module 52, a second projection module 53 and a parameter determination module 54, wherein the position acquisition module 51 is used for acquiring the spatial positions of a plurality of preset points located in a preset field range; the preset view field range is a common view field range of the first camera and the second camera; the first projection module 52 is configured to obtain a first imaging position where a plurality of preset points are projected on the first camera by using the camera parameters and the spatial position of the first camera; the second projection module 53 is configured to obtain a second imaging position where a plurality of preset points are projected on the second camera by using the camera parameters and the spatial position of the second camera; the parameter determining module 54 is configured to determine an imaging conversion parameter between the first camera and the second camera based on the first imaging position and the second imaging position of the plurality of preset points.
According to the above scheme, the spatial positions of the plurality of preset points within the preset field-of-view range are obtained, the preset field-of-view range being the common field of view of the first camera and the second camera. The first imaging positions of the plurality of preset points projected on the first camera are obtained by using the camera parameters of the first camera and the spatial positions, the second imaging positions of the plurality of preset points projected on the second camera are obtained by using the camera parameters of the second camera and the spatial positions, and the imaging conversion parameter between the first camera and the second camera is determined based on the first imaging positions and the second imaging positions of the plurality of preset points. Therefore, the imaging conversion parameter between the cameras can be obtained by using only the camera parameters of the cameras and the spatial positions of the plurality of preset points, without extracting any scene features, so that the accuracy of the imaging conversion parameter can be improved, which in turn is beneficial to improving the effect of subsequent image registration using the imaging conversion parameter.
In some embodiments, the first camera and the second camera are integrated in the same camera device, the camera parameters include internal element parameters, and the first projection module 52 is specifically configured to use the first camera as a reference camera, and obtain the first imaging positions of the plurality of preset points by using the internal element parameters and the spatial position of the first camera.
Different from the foregoing embodiment, the first camera and the second camera are integrated in the same camera device, and the camera parameters include internal element parameters, so that the first camera is used as a reference camera, and the first imaging positions of the plurality of preset points are obtained by using the internal element parameters and the spatial position of the first camera, which is beneficial to reducing the calculation amount of calibration and reducing the calibration complexity.
In some embodiments, the camera parameters further include external position parameters, the camera calibration apparatus 50 further includes an external parameter obtaining module, configured to obtain the external position parameters of the image pickup device based on the installation information of the image pickup device, and the second projection module 53 is specifically configured to obtain the second imaging positions of the plurality of preset points by using the internal element parameters and the external position parameters of the second camera.
Different from the embodiment, the camera parameters further include external position parameters, and the external position parameters of the camera device are obtained based on the installation information of the camera device, so that the second imaging positions of the plurality of preset points are obtained by using the internal element parameters and the external position parameters of the second camera, the calculation amount of calibration can be favorably reduced, and the calibration complexity is reduced.
In some embodiments, the installation information includes a mounting angle of the image pickup device, and the external position parameter includes a rotation parameter of the image pickup device. The external parameter obtaining module includes an included angle obtaining sub-module configured to determine, by using the mounting angle of the image pickup device, a first included angle between an optical axis of the image pickup device and a first preset plane, a second included angle between the optical axis and a second preset plane, and a third included angle between the optical axis and a third preset plane, and an included angle converting sub-module configured to convert the first included angle, the second included angle and the third included angle in a preset conversion mode to obtain the rotation parameter, wherein any two of the first preset plane, the second preset plane and the third preset plane are perpendicular to each other.
Different from the foregoing embodiment, the installation information includes the mounting angle of the image pickup device, and the external position parameter includes the rotation parameter of the image pickup device. By using the mounting angle of the image pickup device, the first included angle between the optical axis of the image pickup device and the first preset plane, the second included angle between the optical axis and the second preset plane, and the third included angle between the optical axis and the third preset plane are determined, and the first included angle, the second included angle and the third included angle are converted in a preset conversion mode to obtain the rotation parameter, any two of the first preset plane, the second preset plane and the third preset plane being perpendicular to each other, which can be beneficial to improving the accuracy of the rotation parameter.
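The conversion of the three included angles into a rotation parameter can be sketched as composing three elementary rotations. The composition order and the function name `rotation_from_angles` are assumptions, since the patent leaves the preset conversion mode unspecified:

```python
import numpy as np

def rotation_from_angles(a1, a2, a3):
    """Compose a rotation matrix from three angles (radians).

    ASSUMPTION: the 'preset conversion mode' composes elementary
    rotations about the x, y and z axes in the order Rz @ Ry @ Rx;
    the actual convention is not fixed by the disclosure.
    """
    cx, sx = np.cos(a1), np.sin(a1)
    cy, sy = np.cos(a2), np.sin(a2)
    cz, sz = np.cos(a3), np.sin(a3)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

# A mounting angle of 0 in all three planes yields the identity rotation,
# consistent with the case where no spatial-position update is needed.
R = rotation_from_angles(0.0, 0.0, 0.0)
print(np.allclose(R, np.eye(3)))  # True
```

Whatever the convention, the result is an orthonormal matrix with determinant 1, which is what "updating the spatial position by using the rotation parameter" requires.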
In some embodiments, the first predetermined plane is the ground, and/or the camera calibration device 50 further includes a position updating module for updating the spatial position by using the rotation parameter when the installation angle is not 0.
Different from the previous embodiment, the first preset plane is set as the ground, so that the calibration complexity can be reduced; when the installation angle is not 0, the spatial position is updated by utilizing the rotation parameters, so that the calibration accuracy can be favorably improved.
In some embodiments, the external position parameters further include a translation parameter of the first camera to the second camera; the first projection module 52 is specifically configured to multiply the spatial positions of the preset points with the internal element parameters of the first camera, respectively, to obtain corresponding first imaging positions, and the second projection module 53 is specifically configured to multiply the spatial positions of the preset points with the internal element parameters and the external position parameters of the second camera, respectively, to obtain corresponding second imaging positions.
Unlike the foregoing embodiment, the above arrangement can be advantageous to reduce the computational complexity of camera calibration.
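The multiplications described above can be sketched under the usual pinhole model; the intrinsic matrix `K`, rotation `R` and translation `t` names are illustrative, not from the disclosure. The reference camera applies only its internal element parameters, while the second camera first applies the external position parameters:

```python
import numpy as np

def project(K, X, R=None, t=None):
    """Project a 3-D point X onto a camera's image plane.

    For the reference camera, only the intrinsic matrix K is applied;
    for the other camera, the point is first transformed by the
    external position parameters (rotation R, translation t).
    """
    X = np.asarray(X, dtype=float)
    if R is not None:
        X = R @ X + t                # apply external position parameters
    x = K @ X                        # apply internal element parameters
    return x[:2] / x[2]              # perspective division to pixel coordinates

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# A point on the optical axis projects to the principal point.
print(project(K, [0.0, 0.0, 2.0]))  # [320. 240.]
```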
In some embodiments, the plurality of preset points are all on a fourth preset plane within the preset field of view, and the fourth preset plane is perpendicular to the optical axis of the image pickup device.
Different from the foregoing embodiment, the plurality of preset points are all arranged on the fourth preset plane within the preset view field range, and the fourth preset plane is perpendicular to the optical axis of the camera device, which can be beneficial to improving the calibration accuracy.
In some embodiments, the parameter determining module 54 includes a function constructing sub-module configured to construct an objective function related to the imaging conversion parameter by using the first imaging positions and the second imaging positions of the preset points, and a function solving sub-module configured to solve the objective function in a preset mode to obtain the imaging conversion parameter.
Different from the foregoing embodiment, the objective function related to the imaging conversion parameter is constructed by using the first imaging positions and the second imaging positions of the preset points, and the objective function is then solved in a preset mode to obtain the imaging conversion parameter, which is beneficial to simplifying the calibration process and reducing the calibration complexity.
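One common choice for such an objective function, offered here only as a hedged example (the patent does not name a specific method), is the linear least-squares DLT formulation: each pair of imaging positions contributes two rows to a system A h = 0, which is solved by SVD:

```python
import numpy as np

def estimate_homography(src, dst):
    """Least-squares estimate of a 3x3 homography from point pairs.

    src, dst: lists of corresponding (x, y) imaging positions.
    Each correspondence contributes two rows to A h = 0; h is the
    right singular vector of A with the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]               # fix the projective scale ambiguity

src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (2.0, 3.0)]
dst = [(x + 5.0, y + 2.0) for (x, y) in src]   # second positions: pure shift
H = estimate_homography(src, dst)
# H recovers the translation homography [[1,0,5],[0,1,2],[0,0,1]]
# up to numerical precision.
print(np.round(H, 6))
```

At least four non-collinear correspondences are needed; using more preset points over-determines the system, which is consistent with solving an objective function rather than a direct equation.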
Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an image registration apparatus 60 according to the present application. The image registration device 60 comprises a parameter obtaining module 61 and an image registration module 62, wherein the parameter obtaining module 61 is used for obtaining an imaging conversion parameter between the first camera and the second camera; the imaging conversion parameters are obtained by the camera calibration device in any one of the above embodiments of the camera calibration device; the image registration module 62 is configured to register a first image captured by the first camera with a second image captured by the second camera using the imaging conversion parameter.
According to the scheme, the imaging conversion parameters are obtained through the camera calibration device in any one of the camera calibration device embodiments, and the accuracy of the imaging conversion parameters can be improved, so that the registration effect can be improved when the first image shot by the first camera is registered with the second image shot by the second camera based on the imaging conversion parameters.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an embodiment of an image pickup device 70 according to the present application. The camera device 70 comprises a memory 71 and a processor 72 coupled to each other, the memory 71 stores program instructions, and the processor 72 is configured to execute the program instructions to implement the steps in any of the above-mentioned embodiments of the camera calibration method. In addition, the image pickup device 70 may be provided with a photosensitive element, an optical lens, and the like according to actual application requirements, and is not limited herein.
In particular, the processor 72 is configured to control itself and the memory 71 to implement the steps in any of the above-described embodiments of the camera calibration method. The processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip having signal processing capabilities. The Processor 72 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Additionally, processor 72 may be commonly implemented by a plurality of integrated circuit chips.
According to the above scheme, the spatial positions of the plurality of preset points within the preset field-of-view range are obtained, the preset field-of-view range being the common field of view of the first camera and the second camera. The first imaging positions of the plurality of preset points projected on the first camera are obtained by using the camera parameters of the first camera and the spatial positions, the second imaging positions of the plurality of preset points projected on the second camera are obtained by using the camera parameters of the second camera and the spatial positions, and the imaging conversion parameter between the first camera and the second camera is determined based on the first imaging positions and the second imaging positions of the plurality of preset points. Therefore, the imaging conversion parameter between the cameras can be obtained by using only the camera parameters of the cameras and the spatial positions of the plurality of preset points, without extracting any scene features, so that the accuracy of the imaging conversion parameter can be improved, which in turn is beneficial to improving the effect of subsequent image registration using the imaging conversion parameter.
Referring to fig. 8, fig. 8 is a schematic diagram of a memory device 80 according to an embodiment of the present disclosure. The memory device 80 stores program instructions 801 that can be executed by the processor, the program instructions 801 being for implementing the steps in any of the camera calibration method embodiments described above.
According to the scheme, the accuracy of the imaging conversion parameters can be improved, and the effect of subsequently utilizing the imaging conversion parameters to conduct image registration can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (12)

1. A camera calibration method is characterized by comprising the following steps:
acquiring the spatial positions of a plurality of preset points within a preset view field range; the preset view field range is a common view field range of the first camera and the second camera;
obtaining a first imaging position of the plurality of preset points projected on the first camera by using the camera parameters of the first camera and the spatial position; and,
obtaining a second imaging position of the plurality of preset points projected on the second camera by using the camera parameters of the second camera and the spatial position;
and determining imaging conversion parameters between the first camera and the second camera based on the first imaging position and the second imaging position of the preset points.
2. The method of claim 1, wherein the first camera and the second camera are integrated in the same imaging device, the camera parameters comprising internal component parameters;
the obtaining of the first imaging positions of the plurality of preset points projected on the first camera by using the camera parameters of the first camera and the spatial position includes:
and taking the first camera as a reference camera, and obtaining the first imaging positions of the plurality of preset points by using the internal element parameters and the spatial position of the first camera.
3. The method of claim 2, wherein the camera parameters further include an external location parameter; before the obtaining, by using the camera parameters and the spatial position of the second camera, a second imaging position where the preset points are projected on the second camera, the method further includes:
obtaining an external position parameter of the camera device based on the camera device installation information;
the obtaining of the second imaging positions of the plurality of preset points projected on the second camera by using the camera parameters of the second camera and the spatial position includes:
and obtaining second imaging positions of the plurality of preset points by using the internal element parameters and the external position parameters of the second camera.
4. The method according to claim 3, wherein the mounting information includes a mounting angle of the image pickup device, and the external position parameter includes a rotation parameter of the image pickup device;
the obtaining of the external position parameter of the image pickup device based on the installation information of the image pickup device includes:
determining a first included angle between an optical axis of the camera device and a first preset plane, a second included angle between the optical axis of the camera device and a second preset plane, and a third included angle between the optical axis of the camera device and a third preset plane by using the installation angle of the camera device;
converting the first included angle, the second included angle and the third included angle by using a preset conversion mode to obtain the rotation parameters;
and any two of the first preset plane, the second preset plane and the third preset plane are perpendicular to each other.
5. The method according to claim 4, characterized in that said first predetermined plane is the ground;
and/or before the first imaging position where the plurality of preset points are projected on the first camera is obtained by using the camera parameters and the spatial position of the first camera, the method further includes:
and when the installation angle is not 0, updating the spatial position by using the rotation parameter.
6. The method of claim 3, wherein the external position parameters further include translation parameters of the first camera to the second camera;
and/or obtaining the first imaging positions of the plurality of preset points by using the internal element parameters of the first camera and the spatial position, wherein the obtaining of the first imaging positions of the plurality of preset points comprises:
multiplying the spatial positions of the preset points with the parameters of the internal elements of the first camera respectively to obtain the corresponding first imaging positions;
and/or obtaining second imaging positions of the plurality of preset points by using the internal element parameters and the external position parameters of the second camera, including:
and multiplying the spatial positions of the preset points with the internal element parameters and the external position parameters of the second camera respectively to obtain the corresponding second imaging positions.
7. The method according to claim 2, wherein the preset points are all on a fourth preset plane within the preset field of view, and the fourth preset plane is perpendicular to the optical axis of the image pickup device.
8. The method according to claim 1, wherein the determining the imaging conversion parameter between the first camera and the second camera based on the first imaging position and the second imaging position of the preset points comprises:
constructing an objective function related to the imaging conversion parameter by using the first imaging position and the second imaging position of the preset point;
and solving the objective function by using a preset mode to obtain the imaging conversion parameter.
9. An image registration method, comprising:
acquiring imaging conversion parameters between a first camera and a second camera; wherein the imaging conversion parameter is obtained by the camera calibration method of any one of claims 1 to 8;
and registering a first image shot by the first camera with a second image shot by the second camera by using the imaging conversion parameters.
10. The method of claim 9, wherein the first camera and the second camera are integrated in the same imaging device.
11. An image capture device comprising a memory and a processor coupled to each other, the memory storing program instructions, the processor being configured to execute the program instructions to implement the camera calibration method according to any one of claims 1 to 8, or to implement the image registration method according to any one of claims 9 to 10.
12. A storage device storing program instructions executable by a processor to implement the camera calibration method of any one of claims 1 to 8 or to implement the image registration method of any one of claims 9 to 10.
CN202011019202.2A 2020-09-24 2020-09-24 Camera calibration method, image registration method, image pickup device and storage device Active CN112233185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011019202.2A CN112233185B (en) 2020-09-24 2020-09-24 Camera calibration method, image registration method, image pickup device and storage device

Publications (2)

Publication Number Publication Date
CN112233185A true CN112233185A (en) 2021-01-15
CN112233185B CN112233185B (en) 2024-06-11

Family

ID=74108025


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770057A (en) * 2021-01-20 2021-05-07 北京地平线机器人技术研发有限公司 Camera parameter adjusting method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805910A (en) * 2018-06-01 2018-11-13 海信集团有限公司 More mesh Train-borne recorders, object detection method, intelligent driving system and automobile
WO2018233373A1 (en) * 2017-06-23 2018-12-27 华为技术有限公司 Image processing method and apparatus, and device
CN109118545A (en) * 2018-07-26 2019-01-01 深圳市易尚展示股份有限公司 3-D imaging system scaling method and system based on rotary shaft and binocular camera





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant