CN114937250A - Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium - Google Patents

Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium

Info

Publication number
CN114937250A
CN114937250A (Application CN202210523063.XA)
Authority
CN
China
Prior art keywords
frame image
point
current frame
previous frame
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210523063.XA
Other languages
Chinese (zh)
Inventor
张鹏
易成伟
张青峰
刘刚江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foss Hangzhou Intelligent Technology Co Ltd
Original Assignee
Foss Hangzhou Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foss Hangzhou Intelligent Technology Co Ltd filed Critical Foss Hangzhou Intelligent Technology Co Ltd
Priority to CN202210523063.XA priority Critical patent/CN114937250A/en
Publication of CN114937250A publication Critical patent/CN114937250A/en
Pending legal-status Critical Current

Classifications

    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle (under G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING > G06V20/00 Scenes; Scene-specific elements > G06V20/50 Context or environment of the image)
    • G06V10/40: Extraction of image or video features (under G06V10/00 Arrangements for image or video recognition or understanding)
    • G06V10/757: Matching configurations of points or features (under G06V10/70 Arrangements using pattern recognition or machine learning > G06V10/74 Image or video pattern matching; Proximity measures in feature spaces > G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons; Coarse-fine approaches; Selection of dictionaries)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for calculating the relative pose of a vehicle body, as well as a vehicle, an electronic device and a storage medium. The method includes: acquiring a current frame image and a previous frame image captured by an on-board sensing device; determining the vanishing point of the current frame image and the vanishing point of the previous frame image; constructing a first region of interest based on the vanishing point of the current frame image and a second region of interest based on the vanishing point of the previous frame image; extracting first feature points in the first region of interest and second feature points in the second region of interest; matching the first feature points with the second feature points to obtain k matching point pairs; judging whether the number k of matching point pairs is greater than a first threshold; and if so, calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs. The invention can quickly calculate the relative pose of the vehicle body between consecutive image frames.

Description

Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a device for calculating a relative pose of a vehicle body, a vehicle, equipment and a storage medium.
Background
With the continuous development of sensing and related technologies, automatic driving control systems provide a powerful guarantee for the realization of automatic driving. However, the development of automatic driving cannot lose sight of an important goal: how to ensure vehicle safety efficiently. An autonomous vehicle must not collide with anything while driving, so it needs to measure surrounding obstacles in real time.
FIG. 1 is a schematic diagram of a prior-art monocular vision system for vehicle detection. Referring to FIG. 1(a), when no pitch motion exists, a similar-triangle relationship is constructed using the absolute pitch angle α obtained from online (offline) calibration. Referring to FIG. 1(b), when pitch motion exists, a relative attitude pitch angle β is added as compensation; "relative" here refers to the pitch change from time t-1 to time t.
For a monocular camera to measure distance accurately with similar triangles, an absolute extrinsic attitude angle is required, and a dynamic attitude angle is also needed to compensate for the angle change between consecutive moments. Existing methods for compensating the dynamic attitude angle include computing the attitude with an on-board Inertial Measurement Unit (IMU) and iterating repeatedly over the whole image to find an optimal model and then recovering the attitude from that model.
Fig. 2 is a schematic diagram of an IMU in the prior art. Referring to fig. 2, a typical IMU carries a three-axis gyroscope and three-axis accelerometers to measure acceleration and angular velocity in three-dimensional space and thereby compute the relative attitude angle of the vehicle. However, the gyroscope in the IMU suffers from zero drift, the error accumulated by integrating its output is relatively large, and the distance error derived from the IMU grows as the vehicle runs for a long time.
The method that iterates over the whole image to find an optimal model generally needs to run thousands of RANSAC iterations on the whole image to solve both the homography matrix and the fundamental matrix and then select the better model. Most vehicle-mounted embedded devices have weak data processing capability, so the many iterations consume a large amount of time and the attitude cannot be solved quickly enough to compensate the measured distance.
Therefore, it is necessary to provide a method that can quickly solve the relative pose of the vehicle body between consecutive image frames so that the measurement at consecutive moments can be compensated.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art. To this end, the invention provides a vehicle body relative pose calculation method in a first aspect, including:
acquiring a current frame image and a previous frame image acquired by vehicle-mounted sensing equipment;
determining a vanishing point of the current frame image and a vanishing point of the previous frame image;
constructing a first region of interest based on the vanishing point of the current frame image, and constructing a second region of interest based on the vanishing point of the previous frame image;
extracting first feature points in the first region of interest and second feature points in the second region of interest;
matching the first feature points with the second feature points to obtain k matching point pairs; wherein k is an integer;
judging whether the number k of the matching point pairs is larger than a first threshold value or not;
and if so, calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs.
Further, the calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs comprises:
calculating a first homography matrix between the current frame image and the previous frame image according to the k matching point pairs;
calculating projection errors of the k matching point pairs and a mean value of the projection errors based on the first homography matrix;
judging whether the mean value of the projection errors is larger than a second threshold value or not;
and if not, calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model.
Further, before calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model, the method further includes:
screening out the m smallest projection errors from the k projection errors, and determining, according to the screening result, the m matching point pairs corresponding to those smallest projection errors; wherein m is less than k and m is not less than 4;
calculating a second homography matrix between the current frame image and the previous frame image based on the m matching point pairs;
projecting the second feature points to the current frame image through the second homography matrix to obtain second homogeneous coordinates;
and computing the difference between the first feature point coordinates and the second homogeneous coordinates to obtain the projection errors of the m matching point pairs.
Further, the calculating projection errors of the k matching point pairs and a mean of the projection errors based on the first homography matrix includes:
projecting the second feature points to the current frame image through the first homography matrix to obtain first homogeneous coordinates;
computing the difference between the first feature point coordinates and the first homogeneous coordinates to obtain the projection error of each matching point pair;
and calculating the mean value of the projection errors according to the projection errors of the matching point pairs and the number k of the matching point pairs.
Further, the calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model comprises:
calculating the mean value of the projection errors and the minimum value of the projection errors;
judging whether the absolute value of the difference between the average value of the projection errors and the minimum value of the projection errors is smaller than a third threshold value;
if so, substituting the minimum error value into a pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image;
if not, substituting the error mean value into a pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image.
Further, after determining the vanishing point of the current frame image and the vanishing point of the previous frame image, the method further includes:
obtaining the confidence coefficient of the vanishing point of the current frame image;
judging whether the confidence of the vanishing point is greater than a preset confidence threshold value;
if yes, turning to the step of constructing a first region of interest by taking the vanishing point of the current frame image as the center and constructing a second region of interest by taking the vanishing point of the previous frame image as the center;
and if not, turning to the step of determining the vanishing point of the current frame image and the vanishing point of the previous frame image.
The second aspect of the present invention provides a vehicle body relative pose calculation apparatus, including:
the image acquisition module is used for acquiring a current frame image and a previous frame image acquired by the vehicle-mounted sensing equipment;
a vanishing point determining module, configured to determine a vanishing point of the current frame image and a vanishing point of the previous frame image;
the region-of-interest construction module is used for constructing a first region of interest based on the vanishing point of the current frame image and constructing a second region of interest based on the vanishing point of the previous frame image;
the feature point extraction module is used for extracting first feature points in the first region of interest and second feature points in the second region of interest;
the feature point matching module is used for matching the first feature points with the second feature points to obtain k matching point pairs; wherein k is an integer;
the first judging module is used for judging whether the number k of the matching point pairs is greater than a first threshold value;
and the vehicle body relative pose calculation module is used for calculating the vehicle body relative pose between the current frame image and the previous frame image based on the matching point pairs when the number k of the matching point pairs is greater than a first threshold value.
A third aspect of the invention provides a vehicle having the vehicle body relative posture calculating device as set forth in the second aspect of the invention.
A fourth aspect of the present invention proposes an electronic device, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the vehicle body relative pose calculation method according to the first aspect of the present invention.
A fifth aspect of the present invention provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, loaded and executed by a processor to implement the vehicle body relative pose calculation method according to the first aspect of the present invention.
The implementation of the invention has the following beneficial effects:
(1) The method and device only need to take feature points inside the virtual infinity rectangular frame rather than over the whole image. Compared with schemes that extract and process feature points over the whole image, the number of feature points is reduced, which reduces the amount of computation and the processing time, so the relative pose of the vehicle body between consecutive image frames can be calculated quickly.
(2) The embodiments of the invention need to compute at most two homography matrices, which avoids solving the homography matrix (or fundamental matrix) through many RANSAC iterations. Compared with schemes that solve the homography matrix through repeated RANSAC iterations, the number of iterations is reduced, so the amount of computation and the time needed to find an ideal model are reduced, and the relative pose of the vehicle body between consecutive image frames can be calculated quickly.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a prior-art monocular vision system for vehicle detection;
FIG. 2 is a schematic diagram of the operation of a prior art IMU;
fig. 3 is a flowchart of a vehicle body relative pose calculation method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a position relationship between matching point pairs according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a feature point matching result provided in the embodiment of the present invention;
FIG. 6 is a flowchart of step S107 provided by the embodiment of the present invention;
fig. 7 is another flowchart of step S107 provided by the embodiment of the present invention;
FIG. 8 is a schematic diagram of a coordinate relationship of a pinhole camera provided in an embodiment of the present invention;
FIG. 9 is a comparison graph of the relative attitude angle calculation results provided by the embodiment of the invention;
fig. 10 is a block diagram of a vehicle body relative pose calculation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout.
It should be noted that the terminal according to the embodiment of the present invention may include, but is not limited to, a mobile phone, a Personal Digital Assistant (PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), a Personal Computer (PC), an MP3 player, an MP4 player, a wearable device (e.g., smart glasses, smart watch, smart bracelet, etc.), and the like.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Examples
Fig. 3 is a flowchart of a method for calculating the relative pose of the vehicle body according to an embodiment of the present invention. This specification provides the operation steps as in the embodiment or the flowchart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. In practice, the system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. The method for calculating the relative pose of the vehicle body shown in fig. 3 is applied at the vehicle end and can also be applied at the server end. Specifically, as shown in fig. 3, the method may include the following steps:
s101: acquiring a current frame image and a previous frame image acquired by vehicle-mounted sensing equipment;
specifically, the vehicle-mounted sensing device may be a monocular vision camera, or may be another sensing device having an image acquisition function based on monocular vision; the vehicle-mounted sensing device is arranged on a vehicle at a position capable of acquiring images of a scene in front of the vehicle or a scene behind the vehicle, including but not limited to a position inside/outside a front windshield glass and not influencing the sight of a driver, a position above a hood, a position above a luggage compartment and the like.
Specifically, the current frame image and the previous frame image are images of consecutive frames acquired by the vehicle-mounted sensing device, the current frame image is an image acquired by the vehicle-mounted sensing device at the current moment, and the previous frame image is an image acquired by the vehicle-mounted sensing device at the previous moment of the current moment.
Specifically, the scene ahead of the vehicle may include the lane lines of the current lane, the lane lines of adjacent lanes, and road fixtures and equipment regularly arranged along the direction in which the road extends. The road fixtures and equipment may be roadside railings, kerbs, street lamps, and the like; other road fixtures with highly consistent shapes may also be used here as appropriate.
S102: determining a vanishing point of a current frame image and a vanishing point of a previous frame image;
the vanishing point is a concept in focus perspective, and means that a straight line which is not parallel to a picture is necessarily concentrated to disappear from far away, and the concentrated point is the vanishing point.
The image captured by the camera is segmented, and the lane lines and road fixtures in the segmentation result can be used to extract or construct auxiliary lines. These auxiliary lines are parallel to each other in the world but not parallel to the image plane, so in the captured image they converge into the distance along the driving direction of the vehicle, and their point of convergence is the vanishing point. For example, several auxiliary lines may intersect at a point near the center lane (the circular point indicated by an arrow in the figure), which is the vanishing point of the image captured by the camera.
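As a minimal sketch of this idea (not the patent's own implementation), the vanishing point can be estimated as the intersection of two lane-line auxiliary lines expressed in homogeneous form; the two-endpoint input format and the example coordinates below are assumptions for illustration only.

```python
import numpy as np

def vanishing_point_from_lane_lines(line_a, line_b):
    """Estimate the vanishing point as the intersection of two auxiliary lines.

    Each line is given by two image points ((x1, y1), (x2, y2)); this input
    format is an illustrative assumption, not the patent's interface.
    """
    def to_homogeneous_line(p1, p2):
        # The line through two points is their cross product in homogeneous form.
        a = np.array([p1[0], p1[1], 1.0])
        b = np.array([p2[0], p2[1], 1.0])
        return np.cross(a, b)

    l1 = to_homogeneous_line(*line_a)
    l2 = to_homogeneous_line(*line_b)
    vp = np.cross(l1, l2)          # intersection of the two lines
    if abs(vp[2]) < 1e-9:          # parallel in the image: no finite vanishing point
        return None
    return vp[0] / vp[2], vp[1] / vp[2]

# Example with made-up lane-line segments converging toward the road ahead.
vp = vanishing_point_from_lane_lines(((200, 700), (560, 420)), ((1100, 700), (720, 420)))
```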
In one embodiment, after determining the vanishing point of the current frame image and the vanishing point of the previous frame image, the method further includes:
obtaining the confidence coefficient of the vanishing point of the current frame image;
judging whether the confidence of the vanishing point is greater than a preset confidence threshold value;
if yes, turning to a step of constructing a first region of interest by taking the vanishing point of the current frame image as the center and constructing a second region of interest by taking the vanishing point of the previous frame image as the center;
if not, turning to the step of determining the vanishing point of the current frame image and the vanishing point of the previous frame image.
S103: constructing a first region of interest based on the vanishing point of the current frame image, and constructing a second region of interest based on the vanishing point of the previous frame image; wherein the first region of interest at least partially overlaps the second region of interest;
specifically, the first region of interest is an infinite plane of the current frame image, and the second region of interest is an infinite plane of the previous frame image.
When the region of interest is constructed based on the vanishing point of the image, the relative position information of the vanishing point and the region of interest is needed, which specifically comprises:
acquiring first relative position information, where the first relative position information is the relative position between the vanishing point of the current frame image and the first region of interest;
constructing the first region of interest based on the vanishing point of the current frame image and the first relative position information;
acquiring second relative position information, where the second relative position information is the relative position between the vanishing point of the previous frame image and the second region of interest;
constructing the second region of interest based on the vanishing point of the previous frame image and the second relative position information.
Specifically, the relative position information between the vanishing point and the region of interest includes, but is not limited to, the position of the vanishing point relative to a specific point of the region of interest, where the specific point may be the center point, a corner point, a specific point on a certain border of the region of interest, and so on; this embodiment is not limited in this respect.
Specifically, the first relative position information and the second relative position information are the same or different; the first region of interest and the second region of interest are the same or different in shape and the same or different in size.
For example, the first region of interest is a square frame with the vanishing point as a left lower corner point, and the second region of interest is a circle with the vanishing point as a center;
for another example, the first region of interest is an ellipse with a vanishing point as a center, and the second region of interest is an irregular polygon with the vanishing point as a midpoint of a certain specified edge;
for another example, the first region of interest and the second region of interest are rectangles with the same size, the center point of the first region of interest is on the left side of the vanishing point and has a first distance from the vanishing point, and the center point of the second region of interest is on the right side of the vanishing point and has a second distance from the vanishing point;
for another example, the first region of interest is a rectangle centered on the vanishing point, the second region of interest is a rectangle centered on the vanishing point, and the first region of interest and the second region of interest have the same rectangular size.
It should be noted that the above examples are only for illustration, and other situations different from the above examples exist in practical applications.
S104: extracting first feature points in the first region of interest and second feature points in the second region of interest;
In the prior art, feature points are usually extracted over the whole image, whereas the embodiment of the invention only needs feature points inside the virtual infinity rectangular frame; it neither extracts feature points over the whole image nor performs subsequent calculation on them, which reduces the processing time.
Specifically, the first region of interest may include a plurality of first feature points, and the second region of interest may include a plurality of second feature points. Fig. 4 is a schematic diagram of a position relationship of a pair of matching points according to an embodiment of the present invention, as shown in fig. 4, where a first feature point is a point on a virtual infinite plane of a current frame image, a second feature point is a point on a virtual infinite plane of a previous frame image, and in a case where the first feature point and the second feature point are matched, a projection of the first feature point on the infinite plane and a projection of the second feature point on the infinite plane should be coincident. Therefore, after the feature points are extracted, the feature points can be matched according to the positional relationship of the mutually matched feature points.
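The following sketch illustrates one way to restrict feature detection to a rectangular region of interest centered on the vanishing point, using OpenCV's ORB detector with a mask. The detector choice, ROI size and feature count are illustrative assumptions; the method only requires that features be taken from the virtual infinity region rather than the whole image.

```python
import cv2
import numpy as np

def extract_roi_features(gray, vanishing_point, roi_size=(320, 160), n_features=500):
    """Detect keypoints only inside a rectangle centered on the vanishing point."""
    vx, vy = int(round(vanishing_point[0])), int(round(vanishing_point[1]))
    w, h = roi_size
    x0, y0 = max(vx - w // 2, 0), max(vy - h // 2, 0)
    x1, y1 = min(vx + w // 2, gray.shape[1]), min(vy + h // 2, gray.shape[0])

    mask = np.zeros(gray.shape, dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255          # limit detection to the region of interest

    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, mask)
    return keypoints, descriptors
```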
S105: matching the first feature points with the second feature points to obtain k matching point pairs; wherein k is an integer;
fig. 5 is a schematic diagram of a feature point matching result according to an embodiment of the present invention, where as shown in fig. 5, a solid dot is a vanishing point, a virtual infinity rectangular plane is formed with the vanishing point as a center, and m pairs of matching points are output in a rectangular frame, where the virtual infinity rectangular plane is represented by a rectangular frame, the pairs of matching points in the rectangular frame are represented by circles, the feature points from one region of interest are filled with oblique lines, the feature points from another region of interest are not filled, and the pair of matching points with the smallest projection error is represented by a solid dot.
Specifically, the feature point matching method may be brute-force matching, k-nearest-neighbor matching, cross matching, and the like, which is not limited in this embodiment.
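As a hedged example of one of the strategies listed above, the sketch below uses a brute-force matcher with cross-check; the Hamming norm assumes binary descriptors such as ORB, which is an illustrative choice rather than a requirement of the method.

```python
import cv2

def match_features(desc_prev, desc_cur):
    """Brute-force matching with cross-check between the two regions of interest."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_prev, desc_cur)
    # Sort so that the most reliable pairs come first.
    return sorted(matches, key=lambda m: m.distance)

# len(matches) is the number k of matching point pairs; the pose step is
# entered only if k exceeds the first threshold (which is not less than 4).
```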
S106: judging whether the number k of the matching point pairs is larger than a first threshold value or not; if yes, executing the next step; if not, ending the flow; wherein the first threshold is not less than 4;
Calculating the relative pose of the camera between the current frame image and the previous frame image from the matching point pairs requires solving the homography matrix between the two images. Since the homography matrix has 8 degrees of freedom and each pair of matching points gives two constraints, at least 4 pairs of matching points are required to solve the equation set; that is, the first threshold is not less than 4.
S107: calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs;
fig. 6 is a flowchart of step S107 provided in the embodiment of the present invention, and specifically as shown in fig. 6, step S107 includes:
s1071: calculating a first homography matrix between the current frame image and the previous frame image according to the k matching point pairs;
in one embodiment, step S1071 includes:
normalizing the feature point coordinates of the k matching point pairs to obtain normalized feature point coordinates;
computing the first homography matrix based on the normalized feature point coordinates (one common normalization scheme is sketched below).
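The patent does not specify which normalization is used in step S1071; a common choice compatible with it is Hartley-style normalization, sketched here under that assumption.

```python
import numpy as np

def normalize_points(pts):
    """Shift points to zero mean and scale them so the average distance from
    the origin is sqrt(2).  Returns the normalized points and the 3x3 transform
    so the homography can be de-normalized afterwards."""
    pts = np.asarray(pts, dtype=float)
    centroid = pts.mean(axis=0)
    mean_dist = np.mean(np.linalg.norm(pts - centroid, axis=1))
    scale = np.sqrt(2) / mean_dist
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T[:, :2], T

# If H_n is estimated from normalized points, the un-normalized homography is
# H = inv(T_cur) @ H_n @ T_prev.
```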
S1072: calculating projection errors of the k matching point pairs and an average value of the projection errors based on the first homography matrix;
in one embodiment, step S1072 includes:
projecting the second feature points to the current frame image through the first homography matrix to obtain first homogeneous coordinates;
computing the difference between the first feature point coordinates and the first homogeneous coordinates to obtain the projection error of each matching point pair;
and calculating the mean value of the projection errors according to the projection errors of the matching point pairs and the number k of the matching point pairs.
S1073: judging whether the mean value of the projection errors is larger than a second threshold value or not; if yes, ending the process; if not, executing the next step;
s1074: and calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model.
In one embodiment, step S1074 includes:
calculating the mean value of the projection errors and the minimum value of the projection errors;
judging whether the absolute value of the difference between the average value of the projection errors and the minimum value of the projection errors is smaller than a third threshold value or not;
if so, substituting the minimum error value into the pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image;
and if not, substituting the error mean value into the pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image.
Specifically, calculating the relative pose of the vehicle body between the current frame image and the previous frame image may include:
calculating the relative pose of the camera between the current frame image and the previous frame image based on the matching point pairs;
and determining the relative pose of the vehicle body between the current frame image and the previous frame image according to the relative pose of the camera.
Fig. 7 is another flowchart of step S107 provided in the embodiment of the present invention, and specifically as shown in fig. 7, step S107 further includes, compared with the embodiment shown in fig. 6:
s1075: screening m minimum projection errors from the k projection errors, and determining m matching point pairs corresponding to the minimum projection errors according to a screening result; wherein m is less than k and m is not less than 4;
s1076: calculating a second homography matrix between the current frame image and the previous frame image based on the m matching point pairs;
s1077: projecting the second feature points to the current frame image through the second homography matrix to obtain second homogeneous coordinates;
s1078: computing the difference between the first feature point coordinates and the second homogeneous coordinates to obtain the projection errors of the m matching point pairs (these steps are sketched below).
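A compact sketch of steps S1075 to S1078 under illustrative assumptions: m = 8 is a placeholder value (the method only requires 4 <= m < k), the per-pair error is assumed to be a single scalar, and OpenCV's plain least-squares homography fit is used in place of a hand-written DLT.

```python
import cv2
import numpy as np

def refine_with_best_pairs(pts_prev, pts_cur, errors, m=8):
    """Keep the m pairs with the smallest projection error, re-estimate a second
    homography from them only, and re-project to get their projection errors.

    pts_prev, pts_cur: arrays of shape (k, 2); errors: one scalar per pair."""
    pts_prev = np.asarray(pts_prev, dtype=np.float32)
    pts_cur = np.asarray(pts_cur, dtype=np.float32)
    order = np.argsort(errors)[:m]
    best_prev, best_cur = pts_prev[order], pts_cur[order]

    # Plain DLT fit (method=0): no RANSAC iterations are needed at this stage.
    H2, _ = cv2.findHomography(best_prev, best_cur, method=0)

    prev_h = np.hstack([best_prev, np.ones((len(best_prev), 1), dtype=np.float32)]).T
    proj = H2 @ prev_h
    proj = (proj[:2] / proj[2]).T              # second homogeneous coords, dehomogenized
    residuals = np.abs(best_cur - proj)        # projection errors of the m pairs
    return H2, residuals
```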
In one embodiment, the vanishing point computed from the lane lines is used, and a fixed rectangular area centered on the vanishing point is set. This fixed rectangular area is assumed to lie on an infinity plane \pi with normal vector n and depth d. Let p_l = (u_l, v_l) be a point on the infinity plane of the previous frame image and p_c = (u_c, v_c) the corresponding point on the infinity plane of the current frame image; they form a matching point pair. The point sets on the planes of the two consecutive frames can first be used to compute the homography matrix H_1 by the direct linear transformation (DLT) method.
A homography matrix is commonly used to describe the transformation relationship, between two images, of points lying on the same plane. Assume the matching pair p_l and p_c are the projections of a point P on the plane \pi. Since the inner product of the normal vector with a point on the plane is determined by the distance d from the origin of the coordinate system to the plane, the equation of the infinity plane centered on the vanishing point can be set as
n^T P + d = 0   (1)
which can be slightly rearranged into
-\frac{n^T P}{d} = 1   (2)
Assume the camera intrinsic matrix is K and that R, t describe the motion between the two cameras (equivalently, the motion of one camera between two consecutive frames):
p_l = K P, \quad p_c = K (R P + t)   (3)
In particular, if the homogeneous coordinates p_c = (u_c, v_c, 1) and p_l = (u_l, v_l, 1) are used, the relations still hold up to a scale factor, and it can finally be deduced that
p_c = K \left( R - \frac{t n^T}{d} \right) K^{-1} p_l   (4)
so the homography matrix H_1 equals
H_1 = K \left( R - \frac{t n^T}{d} \right) K^{-1}   (5)
which is abbreviated as
p_c = H_1 p_l   (6)
Since the homography matrix is a 3 x 3 matrix, expanding gives
\begin{pmatrix} u_c \\ v_c \\ 1 \end{pmatrix} = \begin{pmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{pmatrix} \begin{pmatrix} u_l \\ v_l \\ 1 \end{pmatrix}   (7)
Rearranging yields the pair of equations
u_c = \frac{h_{00} u_l + h_{01} v_l + h_{02}}{h_{20} u_l + h_{21} v_l + h_{22}}, \qquad v_c = \frac{h_{10} u_l + h_{11} v_l + h_{12}}{h_{20} u_l + h_{21} v_l + h_{22}}   (8)
which can be rearranged further into
-h_{00} u_l - h_{01} v_l - h_{02} + h_{20} u_l u_c + h_{21} v_l u_c + h_{22} u_c = 0   (9)
-h_{10} u_l - h_{11} v_l - h_{12} + h_{20} u_l v_c + h_{21} v_l v_c + h_{22} v_c = 0   (10)
Rewriting in matrix form,
\begin{pmatrix} -u_l & -v_l & -1 & 0 & 0 & 0 & u_l u_c & v_l u_c & u_c \\ 0 & 0 & 0 & -u_l & -v_l & -1 & u_l v_c & v_l v_c & v_c \end{pmatrix} \begin{pmatrix} h_{00} \\ h_{01} \\ \vdots \\ h_{22} \end{pmatrix} = 0   (11)
an equation of the form AX = 0 is obtained. Each matching pair gives two constraints, and since the homography matrix has 8 degrees of freedom, at least 4 matching pairs are needed to solve the system. Assuming there are k matching pairs on the infinity plane, they satisfy
\begin{pmatrix} -u_{l1} & -v_{l1} & -1 & 0 & 0 & 0 & u_{l1} u_{c1} & v_{l1} u_{c1} & u_{c1} \\ 0 & 0 & 0 & -u_{l1} & -v_{l1} & -1 & u_{l1} v_{c1} & v_{l1} v_{c1} & v_{c1} \\ \vdots & & & & & & & & \vdots \\ -u_{lk} & -v_{lk} & -1 & 0 & 0 & 0 & u_{lk} u_{ck} & v_{lk} u_{ck} & u_{ck} \\ 0 & 0 & 0 & -u_{lk} & -v_{lk} & -1 & u_{lk} v_{ck} & v_{lk} v_{ck} & v_{ck} \end{pmatrix} \begin{pmatrix} h_{00} \\ h_{01} \\ \vdots \\ h_{22} \end{pmatrix} = 0   (12)
The homography matrix H_1 can then be obtained by solving this linear system with Singular Value Decomposition (SVD).
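A direct implementation of equations (9) to (12) might look like the following sketch; coordinate normalization is omitted for brevity, and fixing the scale with h22 = 1 is one convention among several.

```python
import numpy as np

def homography_dlt(pts_prev, pts_cur):
    """Build the 2k x 9 matrix of equations (9)-(12) and take the right singular
    vector associated with the smallest singular value as the homography."""
    assert len(pts_prev) >= 4 and len(pts_prev) == len(pts_cur)
    rows = []
    for (ul, vl), (uc, vc) in zip(pts_prev, pts_cur):
        rows.append([-ul, -vl, -1, 0, 0, 0, ul * uc, vl * uc, uc])
        rows.append([0, 0, 0, -ul, -vl, -1, ul * vc, vl * vc, vc])
    A = np.asarray(rows, dtype=float)

    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)           # null-space direction of A
    return H / H[2, 2]                 # fix the scale so that h22 = 1
```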
The points on the infinity plane of the previous frame are projected onto the current frame through the homography matrix H_1 to obtain the homogeneous coordinates p'_c = (u'_c, v'_c, 1):
p'_c = H_1 p_l   (13)
Taking the difference with the corresponding point p_c of the current frame gives the projection error:
\Delta U = |u_c - u'_c|   (14)
\Delta V = |v_c - v'_c|   (15)
The projection error values of the k matching pairs are sorted from small to large, and the mean error is obtained:
\overline{\Delta U} = \frac{1}{k} \sum_{i=1}^{k} \Delta U_i   (16)
\overline{\Delta V} = \frac{1}{k} \sum_{i=1}^{k} \Delta V_i   (17)
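Equations (13) to (17) can be sketched as follows, assuming the matched points are given as two k x 2 arrays; the array layout is an assumption for illustration.

```python
import numpy as np

def projection_errors(H, pts_prev, pts_cur):
    """Project previous-frame points with H and return per-pair absolute
    differences dU, dV together with their means over the k pairs."""
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_cur = np.asarray(pts_cur, dtype=float)

    prev_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))]).T   # 3 x k homogeneous
    proj = H @ prev_h
    proj = (proj[:2] / proj[2]).T                                   # (u'_c, v'_c)

    dU = np.abs(pts_cur[:, 0] - proj[:, 0])
    dV = np.abs(pts_cur[:, 1] - proj[:, 1])
    return dU, dV, dU.mean(), dV.mean()
```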
The m matching pairs with the smallest projection errors are taken out, and the homography matrix H_2 is solved again for these m pairs using equation (12). Combining equations (14), (15), (16) and (17) then yields m projection error values and their mean. These values are sorted from small to large, the matching pair with the smallest difference, p_{c\_min} = (u_{c\_min}, v_{c\_min}) and p_{l\_min} = (u_{l\_min}, v_{l\_min}), is taken out, and the minimum projection error values are obtained:
\Delta U_{min} = u_{c\_min} - u_{l\_min}   (18)
\Delta V_{min} = v_{c\_min} - v_{l\_min}   (19)
FIG. 8 is a schematic diagram of the coordinate relationship of a pinhole camera according to an embodiment of the present invention. As shown in FIG. 8, \Delta U_{min} corresponds to the horizontal (red) line segment and \Delta V_{min} to the vertical (red) line segment, f_x is the focal length projected onto the x direction, and f_y is the focal length projected onto the y direction. According to the pinhole camera model, the following triangle relationships can be constructed:
\Delta\theta_{yaw} = \arctan\frac{\Delta U_{min}}{f_x}   (20)
\Delta\theta_{pitch} = \arctan\frac{\Delta V_{min}}{f_y}   (21)
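Under the pinhole-triangle reading of FIG. 8 given above (pixel offset over focal length giving the tangent of the relative angle), a sketch of the final angle computation could look like this. The arctangent mapping and the example focal length are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def relative_angles(du_min, dv_min, fx, fy):
    """Pinhole-model triangle: the pixel offset of an infinity-plane point and
    the focal length form two legs of a right triangle, so the relative
    yaw/pitch follow from the arctangent (an inferred reading of FIG. 8)."""
    d_yaw = np.arctan2(du_min, fx)     # rotation about the vertical axis
    d_pitch = np.arctan2(dv_min, fy)   # rotation about the lateral axis
    return d_pitch, d_yaw

# Example with illustrative values: a 3-pixel vertical shift with fy = 1450 px
# corresponds to roughly 0.12 degrees of relative pitch.
pitch, yaw = relative_angles(du_min=1.5, dv_min=3.0, fx=1450.0, fy=1450.0)
```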
specifically, the relative position of the vehicle body can be used for pose estimation of the rear end, and the pose estimation result can be used as input information of an automatic driving assistance system.
It should be noted that the present invention is not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the invention.
VO is an abbreviation for Visual Odometry, and VP is an abbreviation for Vanishing Point. Fig. 9 is a comparison graph of the relative attitude angle calculation results according to the embodiment of the present invention. As shown in fig. 9, the legend entry for VO indicates the line type and color of the fluctuation curve of the relative pitch angle calculated based on VO, and the legend entry for VP indicates the line type and color of the fluctuation curve of the relative pitch angle calculated based on VP.
Fig. 10 is a block diagram of a vehicle body relative pose calculation apparatus according to an embodiment of the present invention, and specifically, as shown in fig. 10, the apparatus may include:
the image acquisition module 201 is configured to acquire a current frame image and a previous frame image acquired by the vehicle-mounted sensing device;
a vanishing point determining module 202, configured to determine a vanishing point of the current frame image and a vanishing point of the previous frame image;
the interesting region constructing module 203 is used for constructing a first interesting region based on the vanishing point of the current frame image and constructing a second interesting region based on the vanishing point of the previous frame image; wherein the first region of interest and the second region of interest have the same shape and size;
a feature point extraction module 204, configured to extract a first feature point in the first region of interest and extract a second feature point in the second region of interest;
a feature point matching module 205, configured to match the first feature point with the second feature point to obtain k matching point pairs; wherein k is an integer;
a first judging module 206, configured to judge whether the number k of matching point pairs is greater than a first threshold; if yes, executing the next step; if not, ending the flow; wherein the first threshold is not less than 4;
and the vehicle body relative pose calculation module 207 is used for calculating the vehicle body relative pose between the current frame image and the previous frame image based on the matching point pairs.
Preferably, the device is provided integrally with a camera for photographing a scene ahead of the vehicle.
An embodiment of the present invention further provides a vehicle having the vehicle body relative pose calculation device according to the above device embodiment. The relative pose of the vehicle body output by the vehicle body relative pose calculation device can be used for back-end pose estimation, and the pose estimation result can serve as input information for an advanced driver-assistance system. It is noted that the vehicle of the present invention may be a truck, a sport utility vehicle, a van, a caravan, or any other type of vehicle without departing from the scope of the present disclosure.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Embodiments of the present invention also provide an electronic device, which includes a processor and a memory, where at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the vehicle body relative pose calculation method as in the method embodiment.
Embodiments of the present invention also provide a storage medium that can be disposed in a server to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing the vehicle body relative pose calculation method in the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the vehicle body relative pose calculation method provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
As can be seen from the above embodiments of the method, apparatus, vehicle, device and storage medium for calculating the relative pose of a vehicle body provided by the present invention, implementing the invention achieves the following beneficial effects:
(1) The embodiments of the invention only need to take feature points inside the virtual infinity rectangular frame rather than computing feature points over the whole image, which reduces the processing time.
(2) The embodiments of the invention need to compute at most two homography matrices, which avoids solving the homography matrix (or fundamental matrix) through many RANSAC iterations and reduces the time needed to find an ideal model.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. And that specific embodiments have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the device and server embodiments, since they are substantially similar to the method embodiments, the description is simple, and the relevant points can be referred to the partial description of the method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A vehicle body relative pose calculation method is characterized by comprising the following steps:
acquiring a current frame image and a previous frame image acquired by vehicle-mounted sensing equipment;
determining a vanishing point of the current frame image and a vanishing point of the previous frame image;
constructing a first region of interest based on the vanishing point of the current frame image, and constructing a second region of interest based on the vanishing point of the previous frame image;
extracting a first characteristic point in the first region of interest and extracting a second characteristic point in the second region of interest;
matching the first characteristic points with the second characteristic points to obtain k matched point pairs; wherein k is an integer;
judging whether the number k of the matching point pairs is larger than a first threshold value or not;
and if so, calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs.
2. The method according to claim 1, wherein the calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the matching point pairs comprises:
calculating a first homography matrix between the current frame image and the previous frame image according to the k matching point pairs;
calculating projection errors of the k matching point pairs and a mean value of the projection errors based on the first homography matrix;
judging whether the mean value of the projection errors is larger than a second threshold value or not;
and if not, calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model.
3. The method of claim 2, wherein before the calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and the pinhole camera model, further comprises:
screening out the m smallest projection errors from the k projection errors, and determining, according to the screening result, the m matching point pairs corresponding to those smallest projection errors; wherein m is less than k and m is not less than 4;
calculating a second homography matrix between the current frame image and the previous frame image based on the m matching point pairs;
projecting the second characteristic point to the current frame image through the second homography matrix to obtain a second homogeneous coordinate;
and carrying out difference calculation on the first characteristic point coordinates and the second homogeneous coordinates to obtain projection errors of the m matching point pairs.
4. The method of claim 2, wherein the calculating projection errors and a mean of the projection errors for k of the pairs of matched points based on the first homography matrix comprises:
projecting the second characteristic point to the current frame image through the first homography matrix to obtain a first homogeneous coordinate;
performing difference calculation on the first characteristic point coordinate and the first homogeneous coordinate to obtain a projection error of the matching point pair;
and calculating the mean value of the projection errors according to the projection errors of the matching point pairs and the number k of the matching point pairs.
5. The method of claim 2, wherein the calculating the relative pose of the vehicle body between the current frame image and the previous frame image based on the projection error and a pinhole camera model comprises:
calculating the mean value of the projection errors and the minimum value of the projection errors;
judging whether the absolute value of the difference between the average value of the projection errors and the minimum value of the projection errors is smaller than a third threshold value;
if so, substituting the minimum error value into a pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image;
and if not, substituting the error mean value into a pinhole camera model, and calculating the relative pose of the vehicle body between the current frame image and the previous frame image.
6. The method of claim 1, wherein after determining the vanishing point of the current frame image and the vanishing point of the previous frame image, further comprising:
obtaining the confidence coefficient of the vanishing point of the current frame image;
judging whether the confidence of the vanishing point is greater than a preset confidence threshold value;
if yes, turning to the step of constructing a first region of interest by taking the vanishing point of the current frame image as the center and constructing a second region of interest by taking the vanishing point of the previous frame image as the center;
and if not, turning to the step of determining the vanishing point of the current frame image and the vanishing point of the previous frame image.
7. A vehicle body relative pose calculation apparatus, characterized by comprising:
the image acquisition module is used for acquiring a current frame image and a previous frame image acquired by the vehicle-mounted sensing equipment;
a vanishing point determining module, configured to determine a vanishing point of the current frame image and a vanishing point of the previous frame image;
the interesting region constructing module is used for constructing a first interesting region based on the vanishing point of the current frame image and constructing a second interesting region based on the vanishing point of the previous frame image;
the characteristic point extraction module is used for extracting a first characteristic point in the first region of interest and extracting a second characteristic point in the second region of interest;
the characteristic point matching module is used for matching the first characteristic point with the second characteristic point to obtain k matching point pairs; wherein k is an integer;
the first judging module is used for judging whether the number k of the matching point pairs is greater than a first threshold value;
and the vehicle body relative pose calculation module is used for calculating the vehicle body relative pose between the current frame image and the previous frame image based on the matching point pairs when the number k of the matching point pairs is greater than a first threshold value.
8. A vehicle characterized by having the body relative posture calculating device of claim 7.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to realize the vehicle body relative pose calculation method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the vehicle body relative pose calculation method according to any one of claims 1 to 6.
CN202210523063.XA 2022-05-13 2022-05-13 Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium Pending CN114937250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210523063.XA CN114937250A (en) 2022-05-13 2022-05-13 Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210523063.XA CN114937250A (en) 2022-05-13 2022-05-13 Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114937250A true CN114937250A (en) 2022-08-23

Family

ID=82865047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210523063.XA Pending CN114937250A (en) 2022-05-13 2022-05-13 Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114937250A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863124A (en) * 2023-09-04 2023-10-10 所托(山东)大数据服务有限责任公司 Vehicle attitude determination method, controller and storage medium
CN116863124B (en) * 2023-09-04 2023-11-21 所托(山东)大数据服务有限责任公司 Vehicle attitude determination method, controller and storage medium

Similar Documents

Publication Publication Date Title
US9430874B2 (en) Estimation of object properties in 3D world
CN112444242B (en) Pose optimization method and device
CN102722697B (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN110826357A (en) Method, device, medium and equipment for three-dimensional detection and intelligent driving control of object
CN110363817B (en) Target pose estimation method, electronic device, and medium
CN111279354A (en) Image processing method, apparatus and computer-readable storage medium
EP3259732B1 (en) Method and device for stabilization of a surround view image
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN113137968B (en) Repositioning method and repositioning device based on multi-sensor fusion and electronic equipment
CN111753739B (en) Object detection method, device, equipment and storage medium
EP4386676A1 (en) Method and apparatus for calibrating cameras and inertial measurement unit, and computer device
Jang et al. Camera orientation estimation using motion-based vanishing point detection for advanced driver-assistance systems
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN114937250A (en) Method and device for calculating relative pose of vehicle body, vehicle, equipment and storage medium
CN113592706B (en) Method and device for adjusting homography matrix parameters
CN114418839A (en) Image stitching method, electronic device and computer-readable storage medium
CN113345032A (en) Wide-angle camera large-distortion image based initial image construction method and system
Salvi et al. A survey addressing the fundamental matrix estimation problem
CN113048985B (en) Camera relative motion estimation method under known relative rotation angle condition
Pagel Extrinsic self-calibration of multiple cameras with non-overlapping views in vehicles
CN114757824A (en) Image splicing method, device, equipment and storage medium
CN113011212B (en) Image recognition method and device and vehicle
CN110363821B (en) Monocular camera installation deviation angle acquisition method and device, camera and storage medium
CN112037261A (en) Method and device for removing dynamic features of image
CN113570667B (en) Visual inertial navigation compensation method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination