CN112258582B - Camera attitude calibration method and device based on road scene recognition - Google Patents

Camera attitude calibration method and device based on road scene recognition

Info

Publication number
CN112258582B
CN112258582B CN202011086402.XA
Authority
CN
China
Prior art keywords
camera
straight line
rotation angle
edges
attitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011086402.XA
Other languages
Chinese (zh)
Other versions
CN112258582A (en)
Inventor
付垚
吴凯
刘江
贾腾龙
刘奋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heading Data Intelligence Co Ltd
Original Assignee
Heading Data Intelligence Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heading Data Intelligence Co Ltd filed Critical Heading Data Intelligence Co Ltd
Priority to CN202011086402.XA priority Critical patent/CN112258582B/en
Publication of CN112258582A publication Critical patent/CN112258582A/en
Application granted granted Critical
Publication of CN112258582B publication Critical patent/CN112258582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a camera attitude calibration method and device based on road scene recognition. Semantic segmentation is performed on pictures acquired by a camera, and line bundles parallel to the camera optical axis in the road scene are extracted as target straight-line edges; straight-line equations are fitted from the target straight-line edges, and the vanishing point coordinates corresponding to different target straight-line edges are solved according to the perspective principle; the camera attitude is then back-derived from the fitted straight-line equations and the horizon line determined by the vanishing point coordinates to complete attitude calibration. By semantically segmenting the acquired image data and extracting effective environment edge information for the camera attitude calculation, parameter calibration is completed adaptively while the acquisition equipment is in use. This improves the acquisition precision of map elements, reduces the errors introduced during perception and three-dimensional scene reconstruction in prior-art crowdsourced map construction, and removes a large amount of manual calibration cost in equipment deployment.

Description

Camera attitude calibration method and device based on road scene recognition
Technical Field
The invention relates to the technical field of machine vision, and in particular to a method for achieving attitude self-calibration of a vehicle-mounted camera through road scene recognition during driving.
Background
High-precision maps play an important role in automatic driving systems and are an indispensable part of them. However, high-precision maps are expensive to produce, have long acquisition cycles, and are slow to update. Crowdsourced collection by vehicle-mounted equipment is therefore currently used to keep high-precision maps fresh; crowdsourcing lowers the cost of information collection, but the installation precision of the acquisition equipment is difficult to guarantee.
Three-dimensional scene reconstruction performed from camera data without strict parameter calibration suffers relative imaging errors. The camera cannot be mounted in exactly the same way every time, calibration requires a dedicated procedure for each vehicle, and purely manual calibration demands a large amount of labor time.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a camera attitude calibration method and device based on road scene recognition: the acquired image data is semantically segmented, effective environment edge information is extracted for the camera attitude calculation, and parameter calibration is completed adaptively while the acquisition equipment is in use, thereby improving the acquisition precision of map elements, reducing the errors introduced during perception and three-dimensional scene reconstruction in prior-art crowdsourced maps, and removing a large amount of manual calibration cost in equipment deployment.
The technical scheme for solving the technical problems is as follows:
in a first aspect, the present invention provides a camera pose calibration method based on road scene recognition, including the following steps:
performing semantic segmentation based on a picture acquired by a camera, and extracting a line beam parallel to the optical axis of the camera in a road scene as a target line edge;
fitting a linear equation according to the target linear edge, and solving vanishing point coordinates corresponding to different target linear edges according to a perspective principle;
and reversely deriving the camera attitude according to the straight-line equation obtained by fitting and the horizon line determined by the vanishing point coordinates to complete attitude calibration.
Preferably, the fitting a linear equation according to the target straight line edge and solving vanishing point coordinates corresponding to different target straight line edges according to a perspective principle includes:
extracting an ROI (region of interest) where a target straight line edge is located by adopting an artificial neural network, and then obtaining a straight line edge point set by utilizing a Canny segmentation algorithm;
for any line object, deriving a parameter equation of a straight line by using at least two points in a point set, and solving a slope k and an intercept b of the straight line;
aiming at a group of mutually parallel straight line edges, solving the intersection point, namely the vanishing point, of the group of straight line edges according to the parameter equation corresponding to the group of straight line edges.
Preferably, the reversely deriving the camera attitude according to the fitted straight-line equation and the horizon line determined by the plurality of vanishing point coordinates to complete the attitude calibration includes:
the X-axis rotation angle alpha and the Y-axis rotation angle beta of the camera are calculated using the following equations,
tan β = (u_p − u_0) / f_x
tan α = −(v_p − v_0)·cos β / f_y
where (u_p, v_p) is the vanishing point corresponding to a group of mutually parallel straight-line edges in the image, (u_0, v_0) is the offset of the camera optical-axis origin in picture pixel coordinates, and f_x, f_y denote the number of pixels per unit length on the imaging plane;
selecting two groups of non-parallel straight-line edge bundles in a plane and extracting the different edges to obtain different vanishing points, wherein the line connecting the two vanishing points is the horizon line, and the Z-axis rotation angle gamma, namely the angle with the u direction in the pixel coordinate system, is obtained by calculating the slope of the horizon line in the image;
and solving a rotation matrix R according to the X-axis rotation angle alpha, the Y-axis rotation angle beta and the Z-axis rotation angle gamma.
Preferably, the method further comprises taking the camera attitude R_0 obtained by the first calibration as an initial value and introducing time-domain first-order lag filtering to suppress error noise:
optR_0 = R_0
optR_k = (1 − a)·optR_{k−1} + a·R_k
and setting an iterative convergence condition, and obtaining an optimized rotation matrix R through multiple observation iterations and combining a known translation vector t to derive a complete camera external reference matrix [ R | t ] for camera attitude calibration, wherein the translation vector t is a space translation vector of a camera mounting bracket to a vehicle body.
In a second aspect, the present invention further provides a camera pose calibration apparatus based on road scene recognition, including:
the segmentation extraction module is used for performing semantic segmentation on the basis of pictures acquired by the camera and extracting linear beams parallel to the optical axis of the camera in a road scene as target linear edges;
the linear equation fitting module is used for fitting a linear equation according to the target linear edge and solving vanishing point coordinates corresponding to different target linear edges according to a perspective principle;
and the attitude calculation module is used for reversely deriving the camera attitude according to the fitted straight-line equation and the horizon line determined by the plurality of vanishing point coordinates to complete attitude calibration.
In a third aspect, the present invention also provides an electronic device, including:
a memory for storing a computer software program;
and the processor is used for reading and executing the computer software program stored in the memory, so as to realize the camera attitude calibration method based on road scene recognition in the first aspect of the invention.
In a fourth aspect, the present invention further provides a non-transitory computer readable storage medium, in which a computer software program for implementing the method for calibrating a camera pose based on road scene recognition according to the first aspect of the present invention is stored.
The invention has the beneficial effects that:
1. for map data collected in a crowdsourcing mode, the map precision problem caused by insufficient perception reconstruction precision is solved, and the camera posture calibration method provided by the invention can effectively reduce perception errors, so that the crowdsourcing map construction precision is improved.
2. In the traditional map acquisition method, camera calibration needs manual calculation after equipment is installed and a pattern in a specific mode is shot, and the camera posture calibration method provided by the invention can be automatically completed through semantic information of a road scene identified by a road without manual intervention, so that the deployment cost is reduced.
3. The invention provides a camera posture calibration method which can automatically complete calibration in the running process of equipment and reduce maintenance cost.
Drawings
Fig. 1 is a flowchart of a camera pose calibration method based on road scene recognition according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the conversion between different coordinate systems involved in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a positional relationship between camera coordinates and vehicle body coordinates according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a perspective view of a camera contrasting real parallel lines in accordance with an embodiment of the present invention;
FIG. 5 is a schematic view of the horizontal line derivation involved in the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a camera pose calibration device based on road scene recognition according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth to illustrate, but are not to be construed to limit the scope of the invention.
Example 1
The embodiment of the invention provides a camera attitude calibration method based on road scene recognition, which comprises the following steps as shown in figure 1,
step 1, semantic segmentation edge extraction
After stable semantic information has been extracted from the image, edge extraction is performed on the characteristic elements. The characteristic elements in this embodiment include road markers with an identifying function in the road scene, such as curbs, guardrails, light poles and building edges.
And selecting a linear beam parallel to the optical axis of the camera in the space for calibrating the camera, and taking the target edge conforming to the characteristics as an effective edge. It should be understood that the line beams parallel to each other or the line beams parallel to the optical axis of the camera described in this embodiment refer to the extracted line beams parallel to each other or the optical axis of the camera in the real scene, for example, the curb base lines on both sides of the road, the curb base lines and the lane lines, or the curb base lines and the guardrail base lines, and so on. In the picture shot by the camera, due to the perspective principle, two parallel lines form a certain included angle.
And extracting the ROI (region of interest) of the semantic target by adopting an artificial neural network, and then obtaining a linear edge point set by utilizing a Canny segmentation algorithm.
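As a rough illustration of this step, the sketch below collects an edge point set inside a rectangular ROI. A gradient-magnitude threshold stands in for the Canny detector, and the ROI rectangle is assumed to have been produced by the segmentation network; all names and thresholds are illustrative, not from the patent.

```python
import numpy as np

def edge_point_set(gray, roi, thresh=50.0):
    """Collect (u, v) pixel coordinates of strong edges inside a ROI.

    gray: single-channel image array; roi: (u0, v0, w, h) rectangle assumed
    to come from a semantic-segmentation detector. A gradient-magnitude
    threshold is a simplified stand-in for the Canny algorithm.
    """
    u0, v0, w, h = roi
    patch = gray[v0:v0 + h, u0:u0 + w].astype(float)
    gv, gu = np.gradient(patch)            # derivatives along v (rows), u (cols)
    mag = np.hypot(gu, gv)                 # gradient magnitude
    vs, us = np.nonzero(mag > thresh)      # edge pixels inside the patch
    return np.stack([us + u0, vs + v0], axis=1)  # back to full-image coords
```

In practice the threshold step would be replaced by `cv2.Canny` on the ROI patch; the point set returned here feeds the line fitting of step 2.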
R = {r_1, r_2, …, r_n}
Ω = {ω_1, ω_2, …, ω_m}
As shown in fig. 4, R and Ω are two straight lines that are parallel to the camera optical axis and do not intersect in the physical world. Each corresponding point set holds the uv pixel coordinates of the points on that line.
And 2, step: solving linear equations and vanishing points
For any line object in the pixel coordinate system, a parameter equation of a straight line can be deduced according to at least two points in the point set.
v=ku+b
Taking the point set corresponding to the straight line R as an example, any two points r_i, r_j ∈ R are substituted into the pixel-coordinate parameter equation to solve for the slope k and the intercept b of the line. According to the perspective principle, the extension lines of lines that are parallel in the physical world intersect in the image at a point at infinity; its pixel coordinates (u_p, v_p), solved from the parameter equations, are the vanishing point.
u_p = (b_Ω − b_R) / (k_R − k_Ω),  v_p = k_R·u_p + b_R
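The line fit and vanishing-point intersection of step 2 can be sketched as follows; a least-squares fit stands in for whatever fitting procedure the patent uses, and the function names are illustrative.

```python
import numpy as np

def fit_line(points):
    # Least-squares fit of v = k*u + b over a pixel point set [(u, v), ...].
    u, v = np.asarray(points, dtype=float).T
    k, b = np.polyfit(u, v, 1)  # degree-1 polynomial: highest power first
    return k, b

def vanishing_point(line_a, line_b):
    # Intersection of v = k1*u + b1 and v = k2*u + b2 — the vanishing point
    # of two image lines that are parallel in the physical world.
    (k1, b1), (k2, b2) = line_a, line_b
    up = (b2 - b1) / (k1 - k2)
    return up, k1 * up + b1
```

Using more than two points per line (as the least-squares fit does) makes the slope and intercept far less sensitive to individual noisy edge pixels than picking two points from the set.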
And step 3: deriving camera pose
As shown in FIGS. 2 and 3, according to the principle of planar imaging, the vehicle body coordinates [X Y Z]^T are converted into camera coordinates through the attitude matrix [R | t], and the camera coordinates are then converted into pixel coordinates [u v]^T via the intrinsic matrix K.
s·[u v 1]^T = K·[R | t]·[X Y Z 1]^T
In the intrinsic matrix K, f_x, f_y are the number of pixels per unit length on the imaging plane, and (u_0, v_0) is the offset of the optical-axis origin with respect to the pixel coordinates.
K = [ f_x  0    u_0 ]
    [ 0    f_y  v_0 ]
    [ 0    0    1   ]
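A minimal sketch of this projection model, with an intrinsic matrix K of the form above (function names are illustrative):

```python
import numpy as np

def make_K(fx, fy, u0, v0):
    # Intrinsic matrix: focal terms in pixels per unit length, plus the
    # principal-point offset (u0, v0).
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def project(K, R, t, X):
    # Pinhole projection s*[u, v, 1]^T = K (R X + t): body point -> pixel.
    x = K @ (R @ np.asarray(X, dtype=float) + np.asarray(t, dtype=float))
    return x[0] / x[2], x[1] / x[2]  # divide by the scale factor s
```

With R the identity and t zero, a point on the optical axis projects exactly to the principal point (u_0, v_0), which is a quick sanity check on the sign conventions.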
The spatial translation vector t from the camera mounting bracket to the vehicle body does not change during driving, so the translation vector is treated as a constant and the rotation matrix R is solved; the camera attitude matrix can then be derived. The rotation matrix can be calculated indirectly from the three attitude angles of the camera mounting: if the attitude angles are an X-axis rotation angle alpha, a Y-axis rotation angle beta and a Z-axis rotation angle gamma, the rotation matrix R can be expressed as follows.
R = R_z(γ)·R_y(β)·R_x(α)
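A sketch of composing the rotation matrix from the three attitude angles. The composition order Rz·Ry·Rx is an assumption here, since the patent's matrix image is not reproduced in this text.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_matrix(alpha, beta, gamma):
    # Assumed composition order: roll about Z applied last.
    return rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)
```

Any valid rotation built this way is orthonormal with determinant 1, which is a cheap invariant to assert when debugging the angle solve.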
To solve the Z-axis rotation angle gamma, two groups of different vanishing points are needed to form the horizon line. As shown in fig. 5, two non-parallel groups of line bundles in the plane are selected (orthogonality is not required); taking the curb and the sidewalk in the figure as an example, vanishing points in different directions are obtained. The line connecting the two vanishing points is the horizon line, and the Z-axis rotation angle gamma, namely the angle with the u direction of the pixel coordinate system, is obtained by calculating the slope of the horizon line in the image.
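The Z-axis rotation angle can be read directly off the slope of the line through the two vanishing points, e.g.:

```python
import math

def z_rotation_from_horizon(vp1, vp2):
    # Angle between the line through two vanishing points and the u axis of
    # the pixel coordinate system; used as the Z-axis (roll) rotation gamma.
    (u1, v1), (u2, v2) = vp1, vp2
    return math.atan2(v2 - v1, u2 - u1)
```

A level horizon gives gamma = 0; a horizon rising 1 pixel in v per pixel in u gives gamma = π/4.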
With the intrinsic matrix K known, the corresponding vanishing point (u_p, v_p) is calculated from the parameter equations of the two parallel lines obtained in step 2, and the X-axis rotation angle alpha and the Y-axis rotation angle beta can then be solved from the following equations. Here X Y Z are the body coordinates, X_C Y_C Z_C the corresponding camera coordinates, and the known rotation angle gamma has been eliminated.
s·[u_p v_p 1]^T = K·R_y(β)·R_x(α)·[X Y Z]^T
The vanishing point satisfies Z → ∞, so the above equation is simplified as:
u_p = u_0 + f_x·tan β
v_p = v_0 − f_y·tan α / cos β
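One way to recover alpha and beta from a vanishing point. The sign conventions below (R = R_y(beta)·R_x(alpha) with gamma already eliminated) are an assumption, since the patent's equation image is not reproduced in this text.

```python
import math

def pitch_yaw_from_vanishing_point(up, vp, u0, v0, fx, fy):
    # Under R = R_y(beta) R_x(alpha), the image of the point at infinity
    # along the parallel lines' direction satisfies
    #   u_p - u_0 =  f_x * tan(beta)
    #   v_p - v_0 = -f_y * tan(alpha) / cos(beta)
    # so beta can be solved first and alpha recovered from it.
    beta = math.atan2(up - u0, fx)
    alpha = math.atan2(-(vp - v0) * math.cos(beta), fy)
    return alpha, beta
```

A round trip (synthesize a vanishing point from known angles, then recover them) is a useful self-check on whichever sign convention is actually in force.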
the X-axis rotation angle α, the Y-axis rotation angle β, and the Z-axis rotation angle γ are obtained by the above method, and the rotation matrix R is derived.
And 4, step 4: multiple iteration optimized pose
In the calibration method above, the three degrees of freedom of the camera rotation matrix R are constrained by a single observation; the constraint precision differs between scenes, so a single result carries a degree of chance and cannot meet the long-term stability requirement of map data acquisition. To improve the robustness of the calibration algorithm, the camera attitude R_0 obtained in step 3 is taken as an initial value, and time-domain first-order lag filtering is introduced to suppress error noise.
optR_0 = R_0
optR_k = (1 − a)·optR_{k−1} + a·R_k
An iterative convergence condition is given: iteration stops when the attitude-angle change |Δα| < 1°. The optimized rotation matrix R obtained through multiple observation iterations is combined with the known translation vector t to derive the complete camera extrinsic matrix [R | t] for camera attitude calibration.
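A sketch of one filter update for step 4. The blend weight `a` and the element-wise matrix blending are assumptions (the patent's filter image is not reproduced); a stricter variant would interpolate on SO(3), e.g. via quaternion slerp.

```python
import numpy as np

def update_pose(opt_prev, R_new, a=0.1):
    # First-order lag filter on successive single-shot attitude estimates:
    #   optR_k = (1 - a) * optR_{k-1} + a * R_k
    # Small a heavily damps per-observation noise at the cost of slower
    # convergence toward a changed attitude.
    return (1.0 - a) * np.asarray(opt_prev, dtype=float) + a * np.asarray(R_new, dtype=float)
```

In use, the loop would call `update_pose` once per new observation and stop when the attitude-angle change between iterations falls below the 1° threshold.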
Example 2
Based on the method, the invention also provides a camera attitude calibration device based on road scene recognition, which comprises the following steps:
and the segmentation extraction module is used for performing semantic segmentation on the basis of the pictures acquired by the camera and extracting linear beams parallel to the optical axis of the camera in the road scene as target linear edges.
And the linear equation fitting module is used for fitting a linear equation according to the target linear edge and solving vanishing point coordinates corresponding to different target linear edges according to a perspective principle.
And the attitude calculation module is used for reversely deriving the camera attitude according to the fitted straight-line equation and the horizon line determined by the multiple vanishing point coordinates to complete attitude calibration.
An iterative optimization module, which takes the camera attitude R_0 obtained by the first calibration as an initial value and introduces time-domain first-order lag filtering to suppress error noise:
optR_0 = R_0
optR_k = (1 − a)·optR_{k−1} + a·R_k
and setting an iterative convergence condition, and obtaining an optimized rotation matrix R through multiple observation iterations and combining a known translation vector t to derive a complete camera external reference matrix [ R | t ] for camera attitude calibration, wherein the translation vector t is a space translation vector of a camera mounting bracket to a vehicle body.
Specifically, the linear equation fitting module includes:
the point set extraction submodule extracts an ROI (region of interest) area where the target straight line edge is located by adopting an artificial neural network and then obtains a straight line edge point set by utilizing a Canny segmentation algorithm;
the parameter equation derivation module is used for deriving a parameter equation of a straight line by using at least two points in the point set aiming at any line object and solving a slope k and an intercept b of the straight line;
and the vanishing point calculation module is used for solving the intersection point, namely the vanishing point, of a group of parallel straight line edges according to the parameter equation corresponding to the group of straight line edges.
The gesture calculation module comprises:
a rotation angle calculation module for calculating the rotation angle alpha of the X axis and the rotation angle beta of the Y axis of the camera by using the following formula,
tan β = (u_p − u_0) / f_x
tan α = −(v_p − v_0)·cos β / f_y
where (u_p, v_p) is the vanishing point corresponding to a group of mutually parallel straight-line edges in the image, (u_0, v_0) is the offset of the camera optical-axis origin in picture pixel coordinates, and f_x, f_y denote the number of pixels per unit length on the imaging plane;
selecting two groups of non-parallel straight-line edge bundles in a plane and extracting the different edges to obtain different vanishing points, wherein the line connecting the two vanishing points is the horizon line, and the Z-axis rotation angle gamma, namely the angle with the u direction in the pixel coordinate system, is obtained by calculating the slope of the horizon line in the image;
and the rotation matrix solving module is used for solving the rotation matrix R according to the X-axis rotation angle alpha, the Y-axis rotation angle beta and the Z-axis rotation angle gamma.
Example 3
The present invention provides an electronic device, including:
a memory for storing a computer software program;
and the processor is used for reading and executing the computer software program stored in the memory, so that the camera posture calibration method based on road scene recognition in embodiment 1 is realized.
It should also be noted that the logic instructions in the computer software program can be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or make a contribution to the prior art, or may be implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (8)

1. A camera attitude calibration method based on road scene recognition is characterized by comprising the following steps:
performing semantic segmentation on the basis of a picture acquired by a camera, and extracting a line beam parallel to the optical axis of the camera in a road scene as a target line edge;
fitting a linear equation according to the target straight line edge, and solving vanishing point coordinates corresponding to different target straight line edges according to a perspective principle;
reversely deducing the attitude of the camera according to a linear equation obtained by fitting and a horizon determined by a plurality of vanishing point coordinates to finish attitude calibration;
the step of reversely deducing the camera attitude according to the linear equation obtained by fitting and the view level determined by the multiple vanishing point coordinates to finish attitude calibration comprises the following steps:
the X-axis rotation angle alpha and the Y-axis rotation angle beta of the camera are calculated using the following equations,
tan β = (u_p − u_0) / f_x
tan α = −(v_p − v_0)·cos β / f_y
where (u_p, v_p) is the vanishing point corresponding to a group of mutually parallel straight-line edges in the image, (u_0, v_0) is the offset of the camera optical-axis origin in picture pixel coordinates, and f_x, f_y denote the number of pixels per unit length on the imaging plane;
selecting two groups of non-parallel straight-line edge bundles in a plane and extracting the different edges to obtain different vanishing points, wherein the line connecting the two vanishing points is the horizon line, and the Z-axis rotation angle gamma, namely the angle with the u direction in the pixel coordinate system, is obtained by calculating the slope of the horizon line in the image;
and solving a rotation matrix R according to the X-axis rotation angle alpha, the Y-axis rotation angle beta and the Z-axis rotation angle gamma.
2. The method of claim 1, wherein fitting a line equation according to the target line edge and solving vanishing point coordinates corresponding to different target line edges according to a perspective principle comprises:
extracting an ROI (region of interest) area where a target straight line edge is located by adopting an artificial neural network, and then obtaining a straight line edge point set by utilizing a Canny segmentation algorithm;
for any line object, deriving a parameter equation of a straight line by using at least two points in a point set, and solving a slope k and an intercept b of the straight line;
aiming at a group of mutually parallel straight line edges, solving the intersection point, namely the vanishing point, of the group of straight line edges according to the parameter equation corresponding to the group of straight line edges.
3. The method of claim 1 or 2, further comprising taking the camera attitude R_0 obtained from the initial calibration as an initial value and introducing time-domain first-order lag filtering to suppress error noise:
optR_0 = R_0
optR_k = (1 − a)·optR_{k−1} + a·R_k
and setting an iterative convergence condition, and obtaining an optimized rotation matrix R through multiple observation iterations and combining a known translation vector t to derive a complete camera external reference matrix [ R | t ] for camera attitude calibration, wherein the translation vector t is a space translation vector of a camera mounting bracket to a vehicle body.
4. A camera attitude calibration device based on road scene recognition is characterized by comprising:
the segmentation extraction module is used for performing semantic segmentation on the basis of pictures acquired by the camera and extracting linear beams parallel to the optical axis of the camera in a road scene as target linear edges;
the linear equation fitting module is used for fitting a linear equation according to the target linear edge and solving vanishing point coordinates corresponding to different target linear edges according to a perspective principle;
the attitude calculation module is used for reversely deriving the camera attitude according to the straight-line equation obtained by fitting and the horizon line determined by a plurality of vanishing point coordinates to complete attitude calibration;
the attitude calculation module comprises: a rotation angle calculation module for calculating an X-axis rotation angle alpha and a Y-axis rotation angle beta of the camera using the following formula,
tan β = (u_p − u_0) / f_x
tan α = −(v_p − v_0)·cos β / f_y
where (u_p, v_p) is the vanishing point corresponding to a group of mutually parallel straight-line edges in the image, (u_0, v_0) is the offset of the camera optical-axis origin in picture pixel coordinates, and f_x, f_y denote the number of pixels per unit length on the imaging plane;
selecting two groups of non-parallel straight-line edge bundles in a plane and extracting the different edges to obtain different vanishing points, wherein the line connecting the two vanishing points is the horizon line, and the Z-axis rotation angle gamma, namely the angle with the u direction in the pixel coordinate system, is obtained by calculating the slope of the horizon line in the image;
and the rotation matrix solving module is used for solving the rotation matrix R according to the X-axis rotation angle alpha, the Y-axis rotation angle beta and the Z-axis rotation angle gamma.
5. The apparatus of claim 4, wherein the line equation fitting module comprises:
the point set extraction submodule extracts an ROI (region of interest) where the target straight line edge is located by adopting an artificial neural network and then obtains a straight line edge point set by utilizing a Canny segmentation algorithm;
the parameter equation derivation module is used for deriving a parameter equation of a straight line by using at least two points in the point set aiming at any line object and solving a slope k and an intercept b of the straight line;
and the vanishing point calculation module is used for solving the intersection point, namely the vanishing point, of a group of parallel straight line edges according to the parameter equation corresponding to the group of straight line edges.
6. The apparatus of claim 4 or 5, further comprising an iterative optimization module which takes the initially calibrated camera attitude R_0 as an initial value and introduces time-domain first-order lag filtering to suppress error noise:
optR_0 = R_0
optR_k = (1 − a)·optR_{k−1} + a·R_k
and setting an iterative convergence condition, obtaining the optimized rotation matrix R through multiple observation iterations, and combining the optimized rotation matrix R with the known translation vector t to derive the overall camera extrinsic matrix [R | t] for camera attitude calibration, wherein the translation vector t is the spatial translation vector from the camera mounting bracket to the vehicle body.
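The filtering and assembly steps of claim 6 can be sketched as below (illustrative Python; the filter coefficient `a` and the element-wise filtering of the 3x3 matrix are assumptions — a production implementation would re-orthonormalize the filtered matrix or filter the rotation angles instead, since an element-wise average of rotation matrices generally leaves SO(3)):

```python
def lag_filter_step(opt_prev, r_obs, a=0.2):
    """One step of time-domain first-order lag filtering, applied
    element-wise to a 3x3 attitude matrix:
        optR_k = (1 - a) * optR_{k-1} + a * R_k
    """
    return [[(1.0 - a) * p + a * r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(opt_prev, r_obs)]

def extrinsic_matrix(R, t):
    """Assemble the 3x4 camera extrinsic matrix [R | t]."""
    return [row + [ti] for row, ti in zip(R, t)]
```

When successive observations agree, the filtered estimate converges to the observed attitude, which then joins the translation vector t in the extrinsic matrix.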
7. An electronic device, comprising:
a memory for storing a computer software program;
a processor for reading and executing the computer software program stored in the memory, thereby implementing a camera pose calibration method based on road scene recognition as claimed in any one of claims 1 to 3.
8. A non-transitory computer-readable storage medium, wherein the storage medium stores therein a computer software program for implementing the method for calibrating camera pose based on road scene recognition according to any one of claims 1-3.
CN202011086402.XA 2020-10-12 2020-10-12 Camera attitude calibration method and device based on road scene recognition Active CN112258582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011086402.XA CN112258582B (en) 2020-10-12 2020-10-12 Camera attitude calibration method and device based on road scene recognition

Publications (2)

Publication Number Publication Date
CN112258582A CN112258582A (en) 2021-01-22
CN112258582B (en) 2022-11-08

Family

ID=74242301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011086402.XA Active CN112258582B (en) 2020-10-12 2020-10-12 Camera attitude calibration method and device based on road scene recognition

Country Status (1)

Country Link
CN (1) CN112258582B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663529B (en) * 2022-03-22 2023-08-01 阿波罗智能技术(北京)有限公司 External parameter determining method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416791A (en) * 2018-03-01 2018-08-17 燕山大学 A kind of monitoring of parallel institution moving platform pose and tracking based on binocular vision
CN110930459A (en) * 2019-10-29 2020-03-27 北京经纬恒润科技有限公司 Vanishing point extraction method, camera calibration method and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416791A (en) * 2018-03-01 2018-08-17 燕山大学 A kind of monitoring of parallel institution moving platform pose and tracking based on binocular vision
CN110930459A (en) * 2019-10-29 2020-03-27 北京经纬恒润科技有限公司 Vanishing point extraction method, camera calibration method and storage medium

Also Published As

Publication number Publication date
CN112258582A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN107063228B (en) Target attitude calculation method based on binocular vision
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN108520536B (en) Disparity map generation method and device and terminal
DE112018000605T5 (en) Information processing apparatus, data management apparatus, data management system, method and program
US20090110267A1 (en) Automated texture mapping system for 3D models
CN113362247A (en) Semantic live-action three-dimensional reconstruction method and system of laser fusion multi-view camera
CN110930365B (en) Orthogonal vanishing point detection method under traffic scene
CN112184792B (en) Road gradient calculation method and device based on vision
CN112489106A (en) Video-based vehicle size measuring method and device, terminal and storage medium
CN110766760A (en) Method, device, equipment and storage medium for camera calibration
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
CN111862236B (en) Self-calibration method and system for fixed-focus binocular camera
CN111998862A (en) Dense binocular SLAM method based on BNN
CN113763569A (en) Image annotation method and device used in three-dimensional simulation and electronic equipment
CN112258582B (en) Camera attitude calibration method and device based on road scene recognition
CN113327296A (en) Laser radar and camera online combined calibration method based on depth weighting
CN114140533A (en) Method and device for calibrating external parameters of camera
CN116358486A (en) Target ranging method, device and medium based on monocular camera
DE112014002943T5 (en) Method of registering data using a set of primitives
CN114754779B (en) Positioning and mapping method and device and electronic equipment
CN113850293B (en) Positioning method based on multisource data and direction prior combined optimization
EP3389015A1 (en) Roll angle calibration method and roll angle calibration device
CN112767482B (en) Indoor and outdoor positioning method and system with multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant