CN109345591B - Vehicle posture detection method and device

Info

Publication number
CN109345591B
Authority
CN
China
Prior art keywords
vehicle
image data
feature point
three-dimensional coordinates
calculating
Prior art date
2018-10-12
Legal status
Active
Application number
CN201811190195.5A
Other languages
Chinese (zh)
Other versions
CN109345591A (en)
Inventor
伍宽
魏宇腾
魏川
耿鹤
Current Assignee
Beijing Shuangjisha Technology Co ltd
Original Assignee
Beijing Shuangjisha Technology Co ltd
Priority date
2018-10-12
Filing date
2018-10-12
Publication date
2021-12-24
Application filed by Beijing Shuangjisha Technology Co ltd
Priority to CN201811190195.5A
Publication of CN109345591A: 2019-02-15
Application granted
Publication of CN109345591B: 2021-12-24

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method and a device for detecting vehicle posture information. The method comprises the following steps: acquiring a left eye image and a right eye image of the same blind area of a vehicle in current frame image data acquired by a camera; performing stereo correction on the left eye image and the right eye image, and determining mutually matched feature point pairs in the stereo-corrected left eye image and right eye image; calculating the three-dimensional coordinates of the matched feature point pairs, and determining the ground static feature points and the carriage side feature points in the current frame image data according to the obtained three-dimensional coordinates; and obtaining the self-attitude information of the vehicle according to the ground static feature points and the carriage side feature points in the current frame image data. The vehicle self-attitude detection method and device provided by the embodiments of the invention yield high-precision data that can reflect the real posture of the vehicle.

Description

Vehicle posture detection method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for detecting the self posture of a vehicle.
Background
At present, increasing attention is paid to ensuring that a vehicle does not collide while driving and to protecting the safety of the driver and the passengers on the vehicle. To ensure driving safety, in addition to detecting the motion of moving objects around the vehicle, the posture of the vehicle itself needs to be determined.
The vehicle attitude information mainly includes the moving speed and the moving direction of the vehicle. The moving speed and the moving direction of the vehicle are generally obtained by using an acceleration sensor or a gyro inertial navigation device mounted on the vehicle.
However, the moving speed and moving direction data acquired by an acceleration sensor or a gyro inertial navigation device have low accuracy and cannot reflect the real posture of the vehicle.
Disclosure of Invention
In order to solve the above problems, an object of an embodiment of the present invention is to provide a vehicle self-attitude detection method and apparatus.
In a first aspect, an embodiment of the present invention provides a vehicle posture detection method, including:
acquiring a left eye image and a right eye image in the same blind area of a vehicle in current frame image data acquired by a camera;
performing stereo correction on the left eye image and the right eye image, and determining feature point pairs matched with each other in the left eye image and the right eye image after the stereo correction;
calculating three-dimensional coordinates of the matched feature point pairs, and determining a ground static feature point and a carriage side feature point in the current frame image data according to the obtained three-dimensional coordinates;
and obtaining the self attitude information of the vehicle according to the ground static characteristic points and the carriage side characteristic points in the current frame image data.
In a second aspect, an embodiment of the present invention further provides a vehicle own posture detection apparatus, including:
the acquisition module is used for acquiring a left eye image and a right eye image in the same blind area of the vehicle in the current frame image data acquired by the camera;
the processing module is used for performing stereo correction on the left eye image and the right eye image and determining mutually matched feature point pairs in the stereo-corrected left eye image and right eye image;
the calculation module is used for calculating the three-dimensional coordinates of the matched feature point pairs and determining the ground static feature point and the carriage side feature point in the current frame image data according to the obtained three-dimensional coordinates;
and the detection module is used for obtaining the self-attitude information of the vehicle according to the ground static characteristic points and the carriage side characteristic points in the current frame image data.
In the solutions provided in the first and second aspects of the embodiments of the present invention, mutually matched feature point pairs in the stereo-corrected left eye image and right eye image are determined, the three-dimensional coordinates of the feature point pairs are then calculated, and the ground static feature points and the carriage side feature points in the current frame image data are determined according to the obtained three-dimensional coordinates; finally, the self-attitude information of the vehicle is obtained according to the ground static feature points and the carriage side feature points. Compared with the related-art approach of obtaining the vehicle posture from the moving speed and moving direction measured by an acceleration sensor or a gyro inertial navigation device, the calculated data has high precision and can reflect the real posture of the vehicle.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram illustrating an application scenario of the vehicle posture detection method and apparatus provided by the embodiment of the invention;
fig. 2 is a flowchart showing a vehicle posture detecting method provided in embodiment 1 of the present invention;
fig. 3 is a schematic structural view showing a vehicle own posture detecting apparatus according to embodiment 2 of the present invention.
Reference numerals: 300 - acquisition module; 302 - processing module; 304 - calculation module; 306 - detection module.
Detailed Description
At present, increasing attention is paid to ensuring that a vehicle does not collide while driving and to protecting the safety of the driver and the passengers on the vehicle. To ensure driving safety, in addition to detecting the motion of moving objects around the vehicle, the posture of the vehicle itself needs to be determined. In the fields of intelligent automobiles and safe driving, the vehicle's own posture is an important parameter for perceiving the surrounding environment and controlling the vehicle, so accurately acquiring the vehicle posture information that represents the vehicle's own posture plays a vital role. The vehicle posture information mainly includes the vehicle moving speed and the vehicle moving direction.
One conventional method is to obtain the driving speed and moving direction of the vehicle with an acceleration sensor or a dedicated gyro inertial navigation device installed on the vehicle; this method suffers from low data accuracy and low system stability. Another method is to use a monocular visual odometer installed on the vehicle and acquire the vehicle posture information by calculating the optical flow of feature points of moving objects. This method is easily disturbed by the feature points of other objects, so it is only suitable for relatively simple scenes; moreover, it relies on dense optical flow computation, whose large computational load makes it difficult for the system response speed to meet real-time requirements. Based on this, the present embodiment provides a vehicle posture detection method and device: mutually matched feature point pairs in the stereo-corrected left eye image and right eye image are determined, the three-dimensional coordinates of the matched feature point pairs are then calculated, and the ground static feature points and the carriage side feature points in the current frame image data are determined according to the obtained three-dimensional coordinates; finally, the self-attitude information of the vehicle is obtained according to the ground static feature points and the carriage side feature points. Compared with the related-art approach of obtaining the vehicle posture from the moving speed and moving direction measured by an acceleration sensor or a gyro inertial navigation device, the calculated data has high precision and can reflect the real posture of the vehicle.
Referring to the application scenario diagram of the vehicle self-attitude detection method and device shown in fig. 1, the scenario provides a vehicle self-attitude detection system comprising two groups of binocular stereo cameras 100, an acquisition card 102, an image processing unit 104 and a data output unit 106. The two groups of binocular stereo cameras 100 are respectively installed on the two sides of the cab, so that the carriage lies in the common field of view of the left eye camera and the right eye camera of each binocular stereo camera. Since the cameras only capture images and cannot store them, the acquisition card 102 stores the images collected by the binocular stereo cameras and provides the original image information to the subsequent image processing unit 104. The image processing unit 104 is mainly responsible for image processing: it extracts information from the images and calculates the vehicle posture information in combination with the camera calibration parameters. The data output unit 106 transfers the vehicle posture information calculated by the image processing unit 104 to other modules for use.
The above vehicle self-attitude detection system first needs to calibrate the binocular stereo cameras 100 before use. The calibration is divided into two steps: calibrating the internal parameters of the binocular stereo camera before the vehicle posture detection system is installed, and calibrating the external parameters after the system is installed. Before installation, the binocular stereo camera is calibrated to obtain the internal parameters, such as the focal length and the lens distortion coefficients, which are related only to the characteristics of the camera itself. After the vehicle posture detection system is installed on the vehicle and the relative position between the binocular stereo camera and the vehicle is kept unchanged, the camera is calibrated a second time to obtain its external parameters, namely the positional relationship between the binocular stereo camera and the ground, which mainly comprises a rotation matrix and a translation vector.
And after calibration is completed, the internal parameters of the binocular stereo camera and the external parameters of the binocular stereo camera are cached in the image processing unit as camera calibration parameters.
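For illustration, the cached calibration parameters can be pictured as a small record of internal and external parameters. The following is a minimal sketch in Python/NumPy; all field names are assumptions for illustration and are not prescribed by the patent.

```python
# A minimal sketch of the cached camera calibration parameters.
# Field names are illustrative assumptions, not part of the patent.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraCalibration:
    # Internal parameters, calibrated before installation.
    focal_length: float          # f, in pixels
    principal_point: tuple       # (cx, cy), in pixels
    baseline: float              # B, left-right camera separation
    dist_coeffs: np.ndarray      # lens distortion coefficients
    # External parameters, calibrated after installation on the vehicle.
    R_ground: np.ndarray         # 3x3 rotation matrix, camera to ground
    t_ground: np.ndarray         # 3x1 translation vector, camera to ground
```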
After camera calibration and system installation are completed, the carriage travelling direction is made consistent with the vehicle head travelling direction, the initial normal vector α0 of the carriage side surface in the camera coordinate system at that moment is calculated, and the initial normal vector α0 is cached in the image processing unit 104. While the system is in use, images around the vehicle are acquired in real time by the binocular stereo cameras, and the acquired images are cached in the acquisition card 102.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Example 1
The present embodiment proposes a vehicle self-attitude detection method, whose execution subject is the image processing unit in the above-described vehicle self-attitude detection system.
Referring to a flowchart of a vehicle self-attitude detection method shown in fig. 2, the vehicle self-attitude detection method may include the following specific steps:
and 200, acquiring a left eye image and a right eye image in the same blind area of the vehicle in the current frame image data acquired by the camera.
Here, the camera refers to the binocular stereo camera described above.
The current frame image data is image data acquired by the camera at the current moment in the lane changing or turning process of the vehicle.
The same blind area can be a left side blind area or a right side blind area of the vehicle.
Step 202, performing stereo correction on the left eye image and the right eye image, and determining feature point pairs matched with each other in the left eye image and the right eye image after the stereo correction.
Specifically, the above step 202 may perform the following steps (1) to (2):
(1) performing stereo correction on the left eye image and the right eye image obtained in the same blind area based on camera calibration parameters;
(2) extracting image feature points of moving objects from the left eye image and the right eye image respectively, performing a feature matching operation between the left eye image and the right eye image based on the extracted feature points, and obtaining mutually matched feature point pairs in the left eye image and the right eye image from the extracted feature points.
In the step (1), the camera calibration parameters are preset in the image processing unit. The camera calibration parameters may include: the position relation parameter between the camera module and the ground and the camera module parameter.
The position relation parameters between the camera module and the ground, namely the external parameters of the binocular stereo camera, can be parameters obtained after the camera module is installed on a vehicle and then the camera is manually calibrated, and mainly comprise a rotation matrix and a translation vector.
The camera module parameters, namely the internal parameters of the binocular stereo camera, include: the parameters such as the focal length of the camera, the principal point of the camera, the length of the base line of the camera, the distortion coefficient of the lens and the like are related to the characteristics of the binocular stereo camera.
In the above step (1), any prior-art method for performing stereo correction on two images may be used to stereo-correct the left eye image and the right eye image obtained in the same blind area, which is not described in detail here.
Since the left eye image and the right eye image share an overlapping image area, the objects present in both the left eye image and the right eye image can be determined through the feature matching operation.
Moreover, the mutually matched feature point pairs in the left eye image and the right eye image can be obtained from the extracted feature points using existing image processing techniques, and the details are not repeated here; a hedged sketch of steps (1) and (2) is given below.
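As one possible realization of steps (1) and (2), the sketch below uses OpenCV's stereo rectification and ORB feature matching. The library choice, the matcher, and the row-difference threshold are assumptions for illustration; the patent does not prescribe a particular algorithm.

```python
# A minimal sketch of stereo rectification plus feature matching, assuming
# OpenCV and NumPy. K_l/K_r are the camera matrices, dist_l/dist_r the
# distortion coefficients, and R, T the right-camera pose relative to the
# left camera (all from the cached calibration parameters).
import cv2
import numpy as np

def rectify_and_match(img_l, img_r, K_l, dist_l, K_r, dist_r, R, T):
    h, w = img_l.shape[:2]
    # Stereo rectification from the calibration parameters.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_l, dist_l, K_r, dist_r, (w, h), R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(
        K_l, dist_l, R1, P1, (w, h), cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(
        K_r, dist_r, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)

    # Feature extraction and matching (ORB + brute-force Hamming is one
    # possible choice, used here purely for illustration).
    orb = cv2.ORB_create(2000)
    kp_l, des_l = orb.detectAndCompute(rect_l, None)
    kp_r, des_r = orb.detectAndCompute(rect_r, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    # After rectification, matched points should lie on (nearly) the same row.
    pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches
             if abs(kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1]) < 1.5]
    return rect_l, rect_r, pairs
```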
After the feature point pairs matching each other in the stereoscopically rectified left eye image and right eye image are determined in step 202, the following step 204 may be performed to determine the ground still feature point and the vehicle compartment side feature point in the current frame image data.
Step 204: calculate the three-dimensional coordinates of the mutually matched feature point pairs, and determine the ground static feature points and the carriage side feature points in the current frame image data according to the obtained three-dimensional coordinates.
In one embodiment, the step 204 may specifically perform the following flow from the step (1) to the step (3):
(1) determining coordinates of the feature points in the feature point pairs which are matched with each other in the left eye image and the right eye image respectively, and calculating parallax between the feature points in the feature point pairs according to the acquired coordinates;
(2) calculating the three-dimensional coordinates of the feature point pairs according to the parallax between the feature points in the feature point pairs and the camera calibration parameters;
(3) and determining coplanar matching point pairs in each matching point pair as ground static characteristic points according to the calculated three-dimensional coordinates of each matching point pair, and determining the characteristic points on the side surface of the carriage from characteristic point pairs which are not the ground static characteristic points.
In the step (1), the existing image technology may be adopted to determine the coordinates of the feature points in the feature point pairs that are matched with each other in the left eye image and the right eye image, respectively, and calculate the disparity between the feature points in the feature point pairs according to the obtained coordinates, which is not described herein again.
In the step (2), the three-dimensional coordinates of the feature point pairs matched with each other may be obtained by using the existing method for calculating the three-dimensional coordinates of the feature points in the binocular stereo vision technology, which is not described herein again.
In the step (3), once the camera has been mounted on the vehicle and calibrated, the relative position between the carriage side and the camera is fixed. Therefore, the position where the carriage side appears in the image data is relatively fixed; that is, the carriage side can only appear in a preset area of the image data, the position change of a carriage side feature point in the camera coordinate system is relatively small, its speed in the camera coordinate system is relatively small, and, in addition, the carriage side feature points lie almost on one plane.
Therefore, the image processing unit can take a test image and determine the area occupied by the carriage side in the test image as the carriage side area. When the carriage side feature points need to be determined, the feature points located in the carriage side area among the feature points other than the ground static feature points can be determined as carriage side feature points; a hedged sketch of this classification is given below.
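The following sketch illustrates one way to realize the classification of step 204, assuming NumPy. The RANSAC-style search for the dominant plane (the coplanar ground points) and the rectangular preset side area are illustrative assumptions; the patent only requires finding coplanar matched points (ground) and region-constrained points (carriage side).

```python
# A minimal sketch of step 204's point classification (names illustrative).
import numpy as np

def classify_points(pts3d, pts2d, side_region, n_iter=200, tol=0.05):
    """pts3d: (N,3) camera-frame coordinates; pts2d: (N,2) image coordinates;
    side_region: (x_min, y_min, x_max, y_max) preset carriage side area."""
    best_inliers = np.zeros(len(pts3d), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        # Fit a candidate plane through three random points.
        idx = rng.choice(len(pts3d), 3, replace=False)
        p0, p1, p2 = pts3d[idx]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((pts3d - p0) @ n)        # point-to-plane distance
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    ground = best_inliers                       # coplanar points -> ground
    x_min, y_min, x_max, y_max = side_region
    in_region = ((pts2d[:, 0] >= x_min) & (pts2d[:, 0] <= x_max) &
                 (pts2d[:, 1] >= y_min) & (pts2d[:, 1] <= y_max))
    side = ~ground & in_region                  # remaining points in the area
    return ground, side
```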
Step 206: obtain the self-attitude information of the vehicle according to the ground static feature points and the carriage side feature points in the current frame image data.

The self-attitude information includes the moving speed and the moving direction of the vehicle.
In order to calculate the moving speed and the moving direction of the vehicle, the above step 206 may perform the following steps (1) to (3):
(1) calculating the three-dimensional coordinates of the ground static feature points in the previous frame of image data of the current frame of image data;
(2) calculating the moving direction of the vehicle according to the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data;
(3) and acquiring frame rate information of the camera, and calculating the moving speed of the vehicle based on the frame rate information, the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data.
In the step (1), the three-dimensional coordinates of the ground still feature point in the image data of the previous frame of the current frame of the image data are calculated by the following formula:
$$X_C = \frac{B \cdot X_L}{D}, \qquad Y_C = \frac{B \cdot Y}{D}, \qquad Z_C = \frac{B \cdot f}{D}$$

wherein $(X_C, Y_C, Z_C)$ are the three-dimensional coordinates of the feature point in the camera coordinate system in the previous frame of image data; $B$ is the baseline length of the camera and $f$ is the focal length of the camera; $D$ is the parallax, $D = X_L - X_R$, where $X_L$ is the abscissa of the feature point in the left eye image, $X_R$ is the abscissa of the feature point in the right eye image, and $Y$ is the ordinate of the feature point in the image.
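A minimal sketch of this triangulation in Python/NumPy follows; it assumes the pixel coordinates are measured relative to the principal point of the rectified images, as the formula implies.

```python
# A minimal sketch of the triangulation formula above (names illustrative).
import numpy as np

def triangulate(x_left, x_right, y, baseline, focal_length):
    """Rectified pixel coordinates (relative to the principal point)
    -> 3D point (X_C, Y_C, Z_C) in the camera coordinate system."""
    d = x_left - x_right                  # parallax D = X_L - X_R
    if d <= 0:
        raise ValueError("non-positive parallax")
    z = baseline * focal_length / d       # Z_C = B*f/D
    x = baseline * x_left / d             # X_C = B*X_L/D
    y3 = baseline * y / d                 # Y_C = B*Y/D
    return np.array([x, y3, z])
```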
The step (2) may calculate the moving direction of the vehicle by the following formula:

$$\vec{V} = G_n - G_{n+1} = (x_n - x_{n+1},\; y_n - y_{n+1},\; z_n - z_{n+1})$$

wherein $\vec{V}$ indicates the moving direction of the vehicle; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data. Since the ground feature point is static, its apparent displacement in the camera coordinate system is opposite to the displacement of the vehicle.
In the above step (3), the frame rate information is stored in the image processing unit in advance.
The step (3) may calculate the moving speed of the vehicle by the following formula:

$$v = n \cdot \sqrt{(x_{n+1} - x_n)^2 + (y_{n+1} - y_n)^2 + (z_{n+1} - z_n)^2}$$

wherein $n$ represents the frame rate information; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data. The displacement of the feature point between two adjacent frames multiplied by the frame rate gives the moving speed of the vehicle.
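The two formulas above can be combined in a short sketch, assuming NumPy; averaging over several ground static feature points, hinted at in the usage comment, is an illustrative robustness choice rather than a requirement of the patent.

```python
# A minimal sketch of steps (2) and (3): moving direction and moving speed
# from one ground static feature point tracked across two frames.
import numpy as np

def ego_motion(g_prev, g_curr, frame_rate):
    """g_prev = G_n, g_curr = G_{n+1}: camera-frame 3D coordinates of the
    same ground static feature point in the previous and current frames."""
    # A static ground point appears to move opposite to the vehicle,
    # so the vehicle's direction is G_n - G_{n+1}.
    direction = g_prev - g_curr
    norm = np.linalg.norm(direction)
    if norm > 0:
        direction = direction / norm          # unit moving direction
    # Per-frame displacement times frames per second gives the speed.
    speed = frame_rate * np.linalg.norm(g_curr - g_prev)
    return direction, speed

# Usage: in practice one would average over many ground static points.
d, v = ego_motion(np.array([1.2, 0.3, 5.0]), np.array([1.2, 0.3, 4.5]), 30.0)
```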
In the related art, the whole vehicle is regarded as a single rigid body, which is only suitable for integral vehicles such as cars and buses. For incomplete vehicles composed of different rigid bodies, such as trucks, semi-trailers and full trailers, the motion states of the head and the carriage are not completely consistent, so the included angle between the head and the carriage of the vehicle needs to be calculated in addition to the moving direction and the moving speed, in order to obtain complete vehicle posture information.
In order to calculate the included angle between the car head and the car, in step 206, the following steps may be further performed:
when the vehicle is an incomplete vehicle, calculating a vehicle side normal vector of the current vehicle, acquiring an initial normal vector of a carriage side under a camera coordinate system, and acquiring an included angle between a head and a carriage of the vehicle according to the vehicle side normal vector and the initial normal vector.
The calculating a vehicle side normal vector of the current vehicle specifically includes: and fitting the determined characteristic points of the side surface of the compartment by a least square method to obtain a vehicle side normal vector of the current vehicle.
The initial normal vector of the side surface of the carriage under the camera coordinate system is preset in the image processing unit.
In the step of obtaining the included angle between the head and the carriage of the vehicle according to the vehicle side normal vector and the initial normal vector, the included angle $\theta$ may be calculated by the following formula:

$$\theta = \arccos\left(\frac{\alpha_0 \cdot \alpha_{n+1}}{\lVert \alpha_0 \rVert \, \lVert \alpha_{n+1} \rVert}\right)$$

wherein $\alpha_0$ represents the initial normal vector and $\alpha_{n+1}$ represents the vehicle side normal vector.
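The sketch below shows one way to realize the least-squares fit and the angle computation, assuming NumPy; the SVD-based plane fit is an illustrative realization of the least square method named above.

```python
# A minimal sketch of the normal-vector fit and angle computation.
import numpy as np

def side_normal(side_pts):
    """Fit a plane to the carriage-side 3D points; return its unit normal."""
    centered = side_pts - side_pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the least-squares plane through the points.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

def head_carriage_angle(alpha_0, alpha_n1):
    """Angle between initial normal alpha_0 and current side normal alpha_n1."""
    c = np.dot(alpha_0, alpha_n1) / (np.linalg.norm(alpha_0) *
                                     np.linalg.norm(alpha_n1))
    return np.arccos(np.clip(c, -1.0, 1.0))   # radians
```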
The image processing unit can read vehicle information preset in an electronic control unit of the vehicle to determine whether the vehicle is an incomplete vehicle.
The above description shows that the included angle between the head and the carriage of an incomplete vehicle composed of different rigid bodies, such as a truck, a semi-trailer or a full trailer, can be calculated, making the vehicle posture information of incomplete vehicles more complete: the vehicle's own posture is described by the moving speed, the moving direction, and the included angle between the head and the carriage, so that the vehicle posture information better reflects the real posture of the vehicle.
In summary, in the vehicle posture detection method provided by this embodiment, mutually matched feature point pairs in the stereo-corrected left eye image and right eye image are determined, the three-dimensional coordinates of the feature point pairs are then calculated, and the ground static feature points and the carriage side feature points in the current frame image data are determined according to the obtained three-dimensional coordinates; finally, the self-attitude information of the vehicle is obtained according to the ground static feature points and the carriage side feature points. Compared with the related-art approach of obtaining the vehicle posture from the moving speed and moving direction measured by an acceleration sensor or a gyro inertial navigation device, the calculated data has high precision and can reflect the real posture of the vehicle.
Example 2
Referring to the vehicle own posture detecting apparatus shown in fig. 3, the present embodiment proposes a vehicle own posture detecting apparatus including:
the acquiring module 300 is configured to acquire a left eye image and a right eye image in the same blind area of the vehicle in the current frame image data acquired by the camera;
a processing module 302, configured to perform stereo correction on the left eye image and the right eye image, and determine feature point pairs matched with each other in the left eye image and the right eye image after the stereo correction;
a calculating module 304, configured to calculate three-dimensional coordinates of the feature point pairs that are matched with each other, and determine a ground static feature point and a carriage side feature point in the current frame image data according to the obtained three-dimensional coordinates;
the detecting module 306 is configured to obtain the vehicle posture information according to the ground static feature point and the compartment side feature point in the current frame image data.
Specifically, the calculating module 304 is specifically configured to:
determining coordinates of the feature points in the feature point pairs which are matched with each other in the left eye image and the right eye image respectively, and calculating parallax between the feature points in the feature point pairs according to the acquired coordinates;
calculating the three-dimensional coordinates of the feature point pairs according to the parallax between the feature points in the feature point pairs and the camera calibration parameters;
and determining coplanar matching point pairs in each matching point pair as ground static characteristic points according to the calculated three-dimensional coordinates of each matching point pair, and determining the characteristic points on the side surface of the carriage from characteristic point pairs which are not the ground static characteristic points.
The self attitude information comprises the moving speed and the moving direction of the vehicle and the included angle between the vehicle head and the carriage;
specifically, the detecting module 306 is specifically configured to:
calculating the three-dimensional coordinates of the ground static feature points in the previous frame of image data of the current frame of image data;
calculating the moving direction of the vehicle according to the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data;
and acquiring frame rate information of the camera, and calculating the moving speed of the vehicle based on the frame rate information, the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data.
In the related art, the whole vehicle is regarded as a single rigid body, which is only suitable for integral vehicles such as cars and buses. For incomplete vehicles composed of different rigid bodies, such as trucks, semi-trailers and full trailers, the motion states of the head and the carriage are not completely consistent, so the included angle between the head and the carriage of the vehicle needs to be calculated in addition to the moving direction and the moving speed, in order to obtain complete vehicle posture information.
In order to calculate the included angle between the car head and the car, the detection module 306 may further be configured to:
when the vehicle is an incomplete vehicle, calculating a vehicle side normal vector of the current vehicle, acquiring an initial normal vector of a carriage side under a camera coordinate system, and acquiring an included angle between a head and a carriage of the vehicle according to the vehicle side normal vector and the initial normal vector.
The above description shows that the included angle between the head and the carriage of an incomplete vehicle composed of different rigid bodies, such as a truck, a semi-trailer or a full trailer, can be calculated, making the vehicle posture information of incomplete vehicles more complete: the vehicle's own posture is described by the moving speed, the moving direction, and the included angle between the head and the carriage, so that the vehicle posture information better reflects the real posture of the vehicle.
The detecting module 306 is configured to calculate a moving direction of the vehicle according to the three-dimensional coordinates of the ground stationary feature point in the current frame of image data and the three-dimensional coordinates of the previous frame of image data, and includes:
calculating the moving direction of the vehicle by the following formula:
$$\vec{V} = G_n - G_{n+1} = (x_n - x_{n+1},\; y_n - y_{n+1},\; z_n - z_{n+1})$$

wherein $\vec{V}$ indicates the moving direction of the vehicle; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data.
In summary, the vehicle posture detection device provided by this embodiment determines mutually matched feature point pairs in the stereo-corrected left eye image and right eye image, then calculates the three-dimensional coordinates of the feature point pairs, and determines the ground static feature points and the carriage side feature points in the current frame image data according to the obtained three-dimensional coordinates; finally, the self-attitude information of the vehicle is obtained according to the ground static feature points and the carriage side feature points. Compared with the related-art approach of obtaining the vehicle posture from the moving speed and moving direction measured by an acceleration sensor or a gyro inertial navigation device, the calculated data has high precision and can reflect the real posture of the vehicle.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can easily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A vehicle own posture detecting method characterized by comprising:
acquiring a left eye image and a right eye image in the same blind area of a vehicle in current frame image data acquired by a camera; the vehicle is an incomplete vehicle with different rigid bodies, and comprises: trucks, semi-trailers and full trailers;
performing stereo correction on the left eye image and the right eye image, and determining feature point pairs matched with each other in the left eye image and the right eye image after the stereo correction;
calculating three-dimensional coordinates of the matched feature point pairs, and determining a ground static feature point and a carriage side feature point in the current frame image data according to the obtained three-dimensional coordinates;
and obtaining attitude information between the head and the carriage of the vehicle according to the ground static characteristic points and the carriage side characteristic points in the current frame image data.
2. The method according to claim 1, wherein the calculating three-dimensional coordinates of the pair of feature points that match each other and determining the ground stationary feature point and the vehicle compartment side feature point in the current frame image data based on the obtained three-dimensional coordinates comprises:
determining coordinates of the feature points in the feature point pairs which are matched with each other in the left eye image and the right eye image respectively, and calculating parallax between the feature points in the feature point pairs according to the obtained coordinates;
calculating the three-dimensional coordinates of the feature point pairs according to the parallax between the feature points in the feature point pairs and the camera calibration parameters;
and determining coplanar matching point pairs in the matching point pairs as ground static feature points according to the calculated three-dimensional coordinates of the matching point pairs, and determining the characteristic points on the side surface of the carriage from the characteristic point pairs which are not the ground static feature points.
3. The method of claim 1, wherein the self-attitude information of the vehicle comprises the moving speed, the moving direction and the included angle between the head and the carriage of the vehicle;
obtaining the self attitude information of the vehicle according to the ground static feature points and the carriage side feature points in the current frame image data, wherein the obtaining comprises the following steps:
calculating the three-dimensional coordinates of the ground static feature points in the previous frame of image data of the current frame of image data;
calculating the moving direction of the vehicle according to the three-dimensional coordinates of the ground static feature points in the current frame image data and the three-dimensional coordinates of the ground static feature points in the previous frame image data;
acquiring frame rate information of the camera, and calculating the moving speed of the vehicle based on the frame rate information, the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data;
when the vehicle is an incomplete vehicle, calculating a vehicle side normal vector of the current vehicle, acquiring an initial normal vector of a carriage side in a camera coordinate system, and acquiring an included angle between a vehicle head and the carriage of the vehicle according to the vehicle side normal vector and the initial normal vector.
4. The method of claim 3, wherein calculating the moving direction of the vehicle according to the three-dimensional coordinates of the ground stationary feature point in the current frame image data and the three-dimensional coordinates of the previous frame image data comprises:
calculating a moving direction of the vehicle by the following formula:
$$\vec{V} = G_n - G_{n+1} = (x_n - x_{n+1},\; y_n - y_{n+1},\; z_n - z_{n+1})$$

wherein $\vec{V}$ indicates the moving direction of the vehicle; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data.
5. The method of claim 3, wherein the calculating the moving speed of the vehicle based on the frame rate information, the three-dimensional coordinates of the ground stationary feature point in the current frame of image data, and the three-dimensional coordinates of the previous frame of image data comprises:
calculating a moving speed of the vehicle by the following formula:
$$v = n \cdot \sqrt{(x_{n+1} - x_n)^2 + (y_{n+1} - y_n)^2 + (z_{n+1} - z_n)^2}$$

wherein $n$ represents the frame rate information; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data.
6. The method of claim 3, wherein obtaining the included angle between the head and the carriage of the vehicle according to the normal vector of the side of the vehicle and the initial normal vector comprises:
calculating the included angle between the head and the carriage of the vehicle through the following formula:
$$\theta = \arccos\left(\frac{\alpha_0 \cdot \alpha_{n+1}}{\lVert \alpha_0 \rVert \, \lVert \alpha_{n+1} \rVert}\right)$$

wherein $\alpha_0$ represents the initial normal vector and $\alpha_{n+1}$ represents the vehicle side normal vector.
7. A vehicle own posture detecting apparatus, characterized by comprising:
the acquisition module is used for acquiring a left eye image and a right eye image in the same blind area of the vehicle in the current frame image data acquired by the camera; the vehicle is an incomplete vehicle with different rigid bodies, and comprises: trucks, semi-trailers and full trailers;
the processing module is used for performing stereo correction on the left eye image and the right eye image and determining mutually matched feature point pairs in the stereo-corrected left eye image and right eye image;
the calculation module is used for calculating the three-dimensional coordinates of the matched feature point pairs and determining the ground static feature point and the carriage side feature point in the current frame image data according to the obtained three-dimensional coordinates;
and the detection module is used for obtaining the attitude information between the vehicle head and the carriage of the vehicle according to the ground static characteristic points and the carriage side characteristic points in the current frame image data.
8. The apparatus of claim 7, wherein the computing module is specifically configured to:
determining coordinates of the feature points in the feature point pairs which are matched with each other in the left eye image and the right eye image respectively, and calculating parallax between the feature points in the feature point pairs according to the obtained coordinates;
calculating the three-dimensional coordinates of the feature point pairs according to the parallax between the feature points in the feature point pairs and the camera calibration parameters;
and determining coplanar matching point pairs in the matching point pairs as ground static feature points according to the calculated three-dimensional coordinates of the matching point pairs, and determining the characteristic points on the side surface of the carriage from the characteristic point pairs which are not the ground static feature points.
9. The device of claim 7, wherein the self-attitude information of the vehicle comprises the moving speed, the moving direction and the included angle between the head and the carriage of the vehicle;
the detection module is specifically configured to:
calculating the three-dimensional coordinates of the ground static feature points in the previous frame of image data of the current frame of image data;
calculating the moving direction of the vehicle according to the three-dimensional coordinates of the ground static feature points in the current frame image data and the three-dimensional coordinates of the ground static feature points in the previous frame image data;
acquiring frame rate information of the camera, and calculating the moving speed of the vehicle based on the frame rate information, the three-dimensional coordinates of the ground static feature point in the current frame image data and the three-dimensional coordinates of the ground static feature point in the previous frame image data;
when the vehicle is an incomplete vehicle, calculating a vehicle side normal vector of the current vehicle, acquiring an initial normal vector of a carriage side in a camera coordinate system, and acquiring an included angle between a vehicle head and the carriage of the vehicle according to the vehicle side normal vector and the initial normal vector.
10. The apparatus of claim 9, wherein the detecting module is configured to calculate the moving direction of the vehicle according to the three-dimensional coordinates of the ground stationary feature point in the current frame of image data and the three-dimensional coordinates of the previous frame of image data, and comprises:
calculating a moving direction of the vehicle by the following formula:
$$\vec{V} = G_n - G_{n+1} = (x_n - x_{n+1},\; y_n - y_{n+1},\; z_n - z_{n+1})$$

wherein $\vec{V}$ indicates the moving direction of the vehicle; $G_n(x_n, y_n, z_n)$ represents the three-dimensional coordinates of the ground static feature point in the previous frame of image data; and $G_{n+1}(x_{n+1}, y_{n+1}, z_{n+1})$ represents the three-dimensional coordinates of the ground static feature point in the current frame image data.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811190195.5A | 2018-10-12 | 2018-10-12 | Vehicle posture detection method and device

Publications (2)

Publication Number Publication Date
CN109345591A CN109345591A (en) 2019-02-15
CN109345591B (en) 2021-12-24

Family

ID=65309471

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811190195.5A (Active) | Vehicle posture detection method and device | 2018-10-12 | 2018-10-12

Country Status (1)

Country Link
CN (1) CN109345591B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311693B (en) * 2020-03-16 2023-11-14 威海经济技术开发区天智创新技术研究院 Online calibration method and system for multi-camera
CN114742885B (en) * 2022-06-13 2022-08-26 山东省科学院海洋仪器仪表研究所 Target consistency judgment method in binocular vision system
CN116202424B (en) * 2023-04-28 2023-08-04 深圳一清创新科技有限公司 Vehicle body area detection method, tractor and tractor obstacle avoidance system
CN116863124B (en) * 2023-09-04 2023-11-21 所托(山东)大数据服务有限责任公司 Vehicle attitude determination method, controller and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607526A (en) * 2012-01-03 2012-07-25 西安电子科技大学 Target posture measuring method based on binocular vision under double mediums
CN102607535A (en) * 2012-02-07 2012-07-25 湖州师范学院 High-precision real-time stereoscopic visual positioning method utilizing parallax space bundle adjustment
CN102721409A (en) * 2012-05-29 2012-10-10 东南大学 Measuring method of three-dimensional movement track of moving vehicle based on vehicle body control point
CN104200689A (en) * 2014-08-28 2014-12-10 长城汽车股份有限公司 Road early warning method and device
CN104318561A (en) * 2014-10-22 2015-01-28 上海理工大学 Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
CN105564447A (en) * 2014-10-31 2016-05-11 南车株洲电力机车研究所有限公司 Control system of virtual rail bus or train
CN108257161A (en) * 2018-01-16 2018-07-06 重庆邮电大学 Vehicle environmental three-dimensionalreconstruction and movement estimation system and method based on polyphaser

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
UAV pose estimation algorithm based on binocular stereo vision and its verification; Zhang Liang et al.; Journal of Harbin Institute of Technology (哈尔滨工业大学学报); 2014-05-31; Vol. 46, No. 5; pp. 66-72 *
Measurement method for the relative position and attitude between spacecraft based on binocular vision; Zhang Qingjun et al.; Journal of Astronautics (宇航学报); 2008-01-31; Vol. 29, No. 1; pp. 156-161 *

Also Published As

Publication number Publication date
CN109345591A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109345591B (en) Vehicle posture detection method and device
US10549694B2 (en) Vehicle-trailer rearview vision system and method
CN106408611B (en) Pass-by calibration of static targets
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
US20190266751A1 (en) System and method for identifying a camera pose of a forward facing camera in a vehicle
CN202035096U (en) Mobile operation monitoring system for mobile machine
US20180150976A1 (en) Method for automatically establishing extrinsic parameters of a camera of a vehicle
CN110411457B (en) Positioning method, system, terminal and storage medium based on stroke perception and vision fusion
CN104802710B (en) A kind of intelligent automobile reversing aid system and householder method
DE10229336A1 (en) Method and device for calibrating image sensor systems
US10997737B2 (en) Method and system for aligning image data from a vehicle camera
US11828828B2 (en) Method, apparatus, and system for vibration measurement for sensor bracket and movable device
US10832428B2 (en) Method and apparatus for estimating a range of a moving object
CN110458885B (en) Positioning system and mobile terminal based on stroke perception and vision fusion
WO2015122124A1 (en) Vehicle periphery image display apparatus and vehicle periphery image display method
CN112298040A (en) Auxiliary driving method based on transparent A column
CN112129313A (en) AR navigation compensation system based on inertial measurement unit
CN111145262A (en) Vehicle-mounted monocular calibration method
JP6768554B2 (en) Calibration device
CN113435224A (en) Method and device for acquiring 3D information of vehicle
CN113538594B (en) Vehicle-mounted camera calibration method based on direction sensor
US20240223915A1 (en) Systems and methods for downsampling images
CN116416303A (en) Vehicle-mounted binocular range obstacle avoidance system
DE102022111163A1 (en) MICRO-ELECTROMECHANICAL INERTIAL UNIT
CN116310068A (en) HUD display method and system based on AR technology and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant