CN115127547B - Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning


Info

Publication number
CN115127547B
CN115127547B
Authority
CN
China
Prior art keywords
tunnel
detection vehicle
tunnel detection
coordinate system
vehicle
Prior art date
Legal status
Active
Application number
CN202210736964.7A
Other languages
Chinese (zh)
Other versions
CN115127547A (en)
Inventor
王建锋
马建
宋宏勋
吴学勤
尹义壮
王新阳
邱雨
刘文豪
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN202210736964.7A
Publication of CN115127547A
Application granted
Publication of CN115127547B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: … by using measurements of speed or acceleration
    • G01C21/12: … executed aboard the object being navigated; Dead reckoning
    • G01C21/16: … by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: … combined with non-inertial navigation instruments
    • G01C21/1656: … with passive imaging devices, e.g. cameras
    • G01C21/20: Instruments for performing navigational calculations
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; CROSS-SECTIONAL TECHNOLOGIES SPANNING SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a tunnel detection vehicle positioning method based on a strapdown inertial navigation system and image positioning. First, the latitude and longitude of the tunnel detection vehicle on the earth's surface are solved. Monocular area-array cameras at the front and rear ends of the vehicle photograph the intermittent dashed lines between lanes in the tunnel; taking the endpoint of a marking line and the bottom center point of the camera as feature points, the yaw angle of the tunnel detection vehicle is solved, fused with the position reading of an encoder, and combined with the vehicle's position at the initial moment of entering the tunnel to obtain the vehicle's movement track in the tunnel under the earth coordinate system, a point set of X and Y values along the two axes of that system. The latitude is converted into a value in the earth coordinate system, denoted M, and the absolute difference between M and X is denoted O; the longitude is likewise converted, denoted N, and the absolute difference between N and Y is denoted P. When both O and P are within a set error value, the tunnel detection vehicle is positioned using M and N; otherwise M and N are fused with X and Y by a Kalman filter, and the fused values position the vehicle.

Description

Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning
Technical Field
The invention belongs to the field of tunnel detection positioning, and particularly relates to a tunnel detection vehicle positioning method based on a strapdown inertial navigation system and image positioning.
Background
With the continuous development of the transportation industry, the number and total length of tunnels keep increasing. Regular inspection of tunnels is an important guarantee of normal road operation and personal safety. Traditional tunnel inspection is usually performed manually: detection takes a long time and often requires closing the tunnel, so it suffers from low efficiency, high labor intensity, severe traffic pressure, and similar drawbacks.
In the prior art, a tunnel detection vehicle carrying monocular area-array cameras, lasers, visible-light devices and the like is used to inspect the inner wall of the tunnel. In a longer tunnel, however, satellite signals are weak or absent altogether, so determining the position of the tunnel detection vehicle by satellite is difficult to achieve in actual tunnel detection.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a tunnel detection vehicle positioning method based on a strapdown inertial navigation system and image positioning, which accurately reflects the heading of the tunnel detection vehicle and enables rapid detection inside the tunnel.
The invention is realized by the following technical scheme:
a tunnel detection vehicle positioning method based on a strapdown inertial navigation system and image positioning comprises the following steps:
S1, first obtaining the position update differential equation of the tunnel detection vehicle using an accelerometer, then solving it to obtain the latitude and longitude of the tunnel detection vehicle on the earth's surface;
A monocular area-array camera at the front end of the tunnel detection vehicle photographs the intermittent dashed lines between lanes in the tunnel, and the monocular area-array camera at the rear end then photographs the same dashed lines. Taking the endpoint of the marking line and the bottom center point of the monocular area-array camera as feature points, the yaw angle of the tunnel detection vehicle is solved; the obtained yaw angle is fused with the position reading of an encoder and combined with the position at the initial moment the tunnel detection vehicle enters the tunnel, giving the movement track of the tunnel detection vehicle in the earth coordinate system. The track is a point set of X and Y, where X is the vehicle's value along the x direction of the earth coordinate system and Y its value along the y direction;
S2, converting the latitude of the tunnel detection vehicle on the earth's surface into a value in the earth coordinate system, denoted M, comparing M with X in the movement track of S1 and denoting the absolute difference O; converting the longitude likewise into a value denoted N, comparing N with Y in the movement track of S1 and denoting the absolute difference P. When both O and P are within a set error value, the tunnel detection vehicle is positioned using M and N; when at least one of O and P exceeds the set error value, M and N are fused with X and Y of the movement track of S1 in a Kalman filter, and the fused values are used to position the tunnel detection vehicle.
Preferably, the position update differential equation in S1 is:

$$\dot{L} = \frac{v_y}{R_M + h}, \qquad \dot{\lambda} = \frac{v_x}{(R_N + h)\cos L}$$

where L is the latitude of the tunnel detection vehicle on the earth's surface, v_y is its velocity component along the y_1 direction of coordinate system b, R_M is the meridian radius of curvature at the vehicle's position, and h is the altitude; λ is the longitude of the vehicle on the earth's surface, v_x is its velocity component along the x_1 direction of coordinate system b, and R_N is the prime-vertical radius of curvature at the vehicle's position;
In coordinate system b, the centroid of the tunnel detection vehicle is the origin, the positive x_1 axis points right along the vehicle's axle, and the positive y_1 axis points forward perpendicular to the x_1 axis.
Preferably, in S1 the feature point sets extracted by the front-end and rear-end monocular area-array cameras from the two identical images are denoted N_1 and N_2 respectively. For any feature point a in N_1, the feature points b and c in N_2 with the smallest and second-smallest Euclidean distances to a are found; these distances are d_1 and d_2 respectively. If the ratio d_1/d_2 is smaller than a threshold, feature points a and b are judged a matching pair, denoted (a, b). The matching point-pair set of the two images is obtained by the same process, mismatches in the set are removed, and the endpoint of the marking line is selected as the feature point.
Further, matching pairs in which one feature point is matched to more than 3 other feature points are found in the matching point-pair set, judged to be mismatches, and removed. The Euclidean distance ratio r_i of each remaining pair is computed, and the pairs are arranged by increasing r_i into a new matching point-pair set, which is divided evenly into four groups. Four pairs are picked arbitrarily from the first group: if none of the four is an inlier, the first group is discarded and its remaining pairs are not tested; if two of them are inliers, the remaining pairs of the first group are tested for inliers. The same operation is applied to the remaining groups, the group with the most inliers is selected as the optimal model, and the removal of mismatches from the matching point-pair set is complete.
Preferably, S1 transforms the marking-line endpoint extracted by the front-end monocular area-array camera and the bottom center point of that camera into the world coordinate system, denoted (X_1, Y_1) and (X_2, Y_2) respectively; the marking-line endpoint extracted by the rear-end monocular area-array camera and the bottom center point of that camera are likewise transformed, denoted (X_1', Y_1') and (X_2', Y_2'). The distance L_1 from the front-end camera to the marking-line endpoint and the distance L_2 from the rear-end camera to the marking-line endpoint are:

$$L_1 = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2}, \qquad L_2 = \sqrt{(X_1' - X_2')^2 + (Y_1' - Y_2')^2}$$

The yaw angles of the front-end and rear-end monocular area-array cameras are, in order:

$$\psi_1 = \arcsin\frac{x_1}{L_1}, \qquad \psi_2 = \arcsin\frac{x_2}{L_2}$$

where x_1 and x_2 are the perpendicular distances from the front-end and rear-end cameras to the marking line; the yaw angle of the tunnel detection vehicle is:

$$\psi = \psi_1 - \psi_2$$
Preferably, in S1 the obtained yaw angle and the encoder position reading are fused by the following recursion:

$$x(t_{k+1}) = x(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\sin\psi(\tau)\,d\tau, \qquad y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\cos\psi(\tau)\,d\tau$$

where τ ∈ (t_k, t_{k+1}); v(τ) is the speed along the vehicle's driving direction output by the encoder at time τ; v_x = v sin ψ and v_y = v cos ψ are the vehicle's speed components along the x and y directions of the earth coordinate system; x(t_{k+1}) and y(t_{k+1}) are the vehicle's position at time t_{k+1}; and ψ(τ) is the vehicle's yaw angle at time τ. The set of the vehicle's coordinate points at successive times is its movement track in the tunnel, specifically: s = {(x(t_0), y(t_0)), (x(t_1), y(t_1)), …, (x(t_k), y(t_k)), (x(t_{k+1}), y(t_{k+1}))}.
Further, when k = 0, (x(t_0), y(t_0)) is the position at the initial moment the tunnel detection vehicle enters the tunnel, provided by GPS.
Preferably, S2 converts the latitude and longitude of the tunnel detection vehicle on the earth's surface into values in the earth coordinate system (denoted coordinate system e) through:

$$C_b^e = C_n^e C_b^n, \qquad C_n^e = \begin{pmatrix} -\sin\lambda & -\sin L\cos\lambda & \cos L\cos\lambda \\ \cos\lambda & -\sin L\sin\lambda & \cos L\sin\lambda \\ 0 & \cos L & \sin L \end{pmatrix}$$

where C_b^n is determined by θ, γ, ψ, respectively the pitch angle, roll angle, and yaw angle of the tunnel detection vehicle, and λ and L are respectively the vehicle's longitude and latitude.
Further, the pitch angle, the roll angle and the yaw angle of the tunnel detection vehicle are obtained according to the following process:
Defining coordinate system n with the centroid of the tunnel detection vehicle as the origin, due east as the positive x_2 axis, due north as the positive y_2 axis, and vertically upward as the positive z_2 axis; defining a vector r_b = x i + y j + z k in coordinate system b, whose conversion into coordinate system n is denoted r_n; and defining the quaternion Q = q_0 + q_1 i + q_2 j + q_3 k, the quaternion algorithm gives:

$$r_n = Q \otimes r_b \otimes Q^*$$

Expanding further yields the attitude matrix from coordinate system b to coordinate system n:

$$C_b^n = \begin{pmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{pmatrix}$$

where q_0, q_1, q_2, q_3 are real numbers, i, j and k are direction vectors, and Q^* is the conjugate of Q.
The relation between Q̇ and the angular velocity is, in matrix form:

$$\begin{pmatrix} \dot q_0 \\ \dot q_1 \\ \dot q_2 \\ \dot q_3 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{pmatrix}\begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix}$$

where ω_nb^b = (ω_x, ω_y, ω_z)^T is the angular velocity of the tunnel detection vehicle in coordinate system b measured by the MEMS gyroscope, with components about the x_1, y_1 and z_1 axes, the positive z_1 axis being perpendicular to the x_1 and y_1 axes; ω_nb^b = ω_ib^b − C_n^b(ω_ie^n + ω_en^n), where ω_ib^b is the angular rate output by the MEMS gyroscope, ω_ie^n is the earth's rotation rate, ω_en^n is the angular velocity of the geographic coordinate system relative to the earth, and C_n^b is the transpose of C_b^n;
Solving this relation with the quaternion Picard algorithm gives:

$$Q(t_{k+1}) = \exp\!\left(\frac{\Delta\Theta}{2}\right) Q(t_k)$$

where ΔΘ is the 4×4 skew matrix of the gyro angle increments (Δθ_x, Δθ_y, Δθ_z) over the sampling interval and Δθ_0 = (Δθ_x^2 + Δθ_y^2 + Δθ_z^2)^{1/2}. A Taylor series expansion then gives the attitude matrix update:

$$Q(t_{k+1}) = \left[\cos\frac{\Delta\theta_0}{2}\, I + \frac{\sin(\Delta\theta_0/2)}{\Delta\theta_0}\,\Delta\Theta\right] Q(t_k)$$

Denoting the elements of the attitude matrix C_b^n by T with double subscripts:

$$C_b^n = \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix}$$

finally:

$$\theta = \arcsin T_{32}, \qquad \gamma = -\arctan\frac{T_{31}}{T_{33}}, \qquad \psi = \arctan\frac{T_{12}}{T_{22}}$$
Preferably, when S2 fuses M and N with X and Y of the movement track of S1 in a Kalman filter, the following process is used:
First the noise variance σ_1^2 of M and N and the noise variance σ_2^2 of X and Y in the movement track of S1 are solved. The relation between the current position Pos(k) of the tunnel detection vehicle, the position Pos_1(k) corresponding to M and N, and the position Pos_2(k) corresponding to X and Y of the movement track of S1 is:

$$Pos(k) = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\, Pos_1(k) + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\, Pos_2(k)$$

The fused value is obtained from the current position Pos(k) of the tunnel detection vehicle.
Compared with the prior art, the invention has the following beneficial technical effects:
In the tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning, GPS cannot be used to position the vehicle because GPS signals are barely received inside a tunnel. Instead, an accelerometer yields the vehicle's position update differential equation, from which its latitude and longitude on the earth's surface are solved, tracking the vehicle's position. The front-end and rear-end monocular area-array cameras successively photograph the same intermittent dashed line between lanes in the tunnel; the yaw angle of the tunnel detection vehicle is solved with the marking-line endpoint and the camera's bottom center point as feature points, fused with the encoder position reading, and combined with the position at the initial moment the vehicle enters the tunnel to obtain the vehicle's movement track in the earth coordinate system. The track is a point set of X and Y, where X is the vehicle's value along the x direction of the earth coordinate system and Y its value along the y direction, realizing real-time positioning. In theory the two positioning modes should produce completely coincident tracks, but the former accumulates error severely, while the latter loses accuracy because the monocular area-array camera is easily affected by illumination and by shaking of the vehicle. The latitude and longitude on the earth's surface are therefore converted into values in the earth coordinate system and compared with X and Y of the camera-derived track. When the differences are within the set error value, the positioning error is small and the converted latitude and longitude position the vehicle; otherwise the converted values are fused with X and Y of the camera-derived track by a Kalman filter and the fused values position the vehicle. High-precision integrated positioning is finally realized, completing the positioning of the tunnel detection vehicle during in-tunnel defect detection.
Drawings
Fig. 1 is a schematic diagram of the installation positions of the monocular area array camera and the LED lamp on the tunnel inspection vehicle.
FIG. 2 is a schematic diagram of the installation position of the strapdown inertial navigation system on the chassis of the tunnel inspection vehicle.
Fig. 3 is a schematic diagram of two monocular area array cameras shooting the same road section.
Fig. 4 is a schematic diagram of yaw angle resolution of the tunnel inspection vehicle according to the present invention.
Fig. 5 is a general flow chart of the present invention.
FIG. 6 is a flow chart of inertial navigation positioning according to the present invention.
FIG. 7 is a flow chart of image localization according to the present invention.
FIG. 8 is a fused positioning flow chart of the present invention.
In the figure: the system comprises a tunnel detection vehicle 1, a front-end monocular area array camera 2, a front-end LED lamp 3, a rear-end monocular area array camera 4, a rear-end LED lamp 5 and a strapdown inertial navigation system 6.
Detailed Description
The invention will now be described in further detail with reference to specific examples, which are intended to illustrate, but not to limit, the invention.
The invention discloses a linear positioning method for a tunnel detection vehicle based on a strapdown inertial navigation system (SINS) and image positioning. As shown in fig. 1, a front-end monocular area-array camera 2 and front-end LED lamp 3 are mounted at the front of the roof of tunnel detection vehicle 1, and a rear-end monocular area-array camera 4 and rear-end LED lamp 5 at the rear; as shown in fig. 2, a strapdown inertial navigation system 6 is mounted on the middle cross beam of the chassis of tunnel detection vehicle 1. The tunnel detection vehicle 1 is converted from a passenger-car chassis and keeps a stable speed during in-tunnel defect detection. The front-end monocular area-array camera 2 and rear-end monocular area-array camera 4 acquire images of the pavement marking lines. The front-end LED lamp 3 and rear-end LED lamp 5 provide high-brightness illumination, ensuring image acquisition inside the tunnel. The strapdown inertial navigation system 6 comprises a MEMS gyroscope, an accelerometer, and a microcomputer; the position of the tunnel detection vehicle 1 is obtained by solving the position update differential equation derived from the accelerometer, realizing positioning of the vehicle in the tunnel. The MEMS gyroscope updates the attitude of the tunnel detection vehicle 1 in real time to obtain the corresponding attitude angles, i.e. the pitch, roll, and yaw angles mentioned later. As shown in fig. 5, before the tunnel detection vehicle 1 enters the tunnel it is positioned by the vehicle-mounted GPS, and while images are collected in the tunnel it keeps a stable speed. After entering the tunnel, the monocular area-array cameras capture the marking lines in the tunnel to realize positioning; by comparing, in a track map, the image-positioning track based on the in-tunnel marking lines with the track based on the strapdown inertial navigation system 6, it is decided whether the two are to be fused through a Kalman filter, and the positioning of the tunnel detection vehicle 1 is output. The whole positioning system achieves high precision through the fusion of images and inertial navigation.
The algorithm part of the invention is described in detail below. It comprises three parts: strapdown inertial navigation system positioning, image positioning, and fusion positioning.
First, as shown in fig. 6, strapdown inertial navigation positioning is realized by attitude and position calculation.
Two coordinate systems are first established: coordinate system b and coordinate system n. Taking the centroid of the tunnel detection vehicle 1 as origin o, the rightward direction along the vehicle's axle as the positive x_1 axis, the forward direction perpendicular to the x_1 axis as the positive y_1 axis, and the direction perpendicular to the x_1-y_1 plane as the positive z_1 axis, the resulting coordinate system is defined as coordinate system b. Taking the centroid of the tunnel detection vehicle 1 as origin o, due east as the positive x_2 axis, due north as the positive y_2 axis, and vertically skyward as the positive z_2 axis, the resulting coordinate system is defined as coordinate system n.
Attitude and position calculation determines the change of the vehicle's current coordinate system b relative to coordinate system n. There is an attitude matrix from coordinate system b to coordinate system n: left-multiplying a vector expressed in coordinate system b by this matrix gives the same vector expressed in coordinate system n. The goal of the calculation is to compute the attitude matrix in real time from the data output by the MEMS gyroscope on the tunnel detection vehicle 1. The invention uses the quaternion update algorithm to calculate the attitude of the tunnel detection vehicle 1; it is the attitude algorithm most commonly used in current strapdown inertial navigation systems, the alternatives being the direction-cosine update algorithm and the equivalent rotation vector update algorithm.
When the attitude of the tunnel detection vehicle 1 changes, the origins of coordinate system b and coordinate system n and the corresponding coordinates change. The MEMS gyroscope mounted on the vehicle senses its angular motion in coordinate system b at fixed intervals, yielding the vehicle's angular velocity ω_nb^b in coordinate system b; the accelerometer senses the vehicle's linear velocity in coordinate system b. Both sets of measurements are fed back to the microcomputer in real time, which computes the attitude matrix C_b^n from coordinate system b to coordinate system n with the quaternion Picard algorithm, and from it the 3 attitude angles of the tunnel detection vehicle 1: the pitch angle θ about the x_1 axis, the roll angle γ about the y_1 axis, and the yaw angle ψ about the z_1 axis. The MEMS gyroscope output is typically the angular increment over a sampling interval, which should be used directly to determine the quaternion in order to avoid differential amplification of noise. The quaternion Picard algorithm is a common algorithm for updating the quaternion from angle increments.
A vector r_b = x i + y j + z k is defined in coordinate system b; its conversion into coordinate system n is denoted r_n. Defining the quaternion Q = q_0 + q_1 i + q_2 j + q_3 k, the quaternion algorithm gives r_n = Q ⊗ r_b ⊗ Q^*. Expanding this yields the attitude matrix, formula (2):

$$C_b^n = \begin{pmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1q_2-q_0q_3) & 2(q_1q_3+q_0q_2) \\ 2(q_1q_2+q_0q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2q_3-q_0q_1) \\ 2(q_1q_3-q_0q_2) & 2(q_2q_3+q_0q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{pmatrix} \tag{2}$$

where q_0, q_1, q_2, q_3 are real numbers, i, j and k are direction vectors, and Q^* is the conjugate of Q.
To obtain the attitude matrix C_b^n, the quaternion Q is solved first. The relation between Q̇ and the angular velocity ω_nb^b is expressed in matrix form as formula (1):

$$\begin{pmatrix} \dot q_0 \\ \dot q_1 \\ \dot q_2 \\ \dot q_3 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{pmatrix}\begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix} \tag{1}$$

where ω_x, ω_y, ω_z are the angular velocities of the tunnel detection vehicle 1 about the x_1, y_1, and z_1 axes measured by the MEMS gyroscope. ω_nb^b = ω_ib^b − C_n^b(ω_ie^n + ω_en^n), where ω_ib^b is the angular rate of the tunnel detection vehicle 1 output by the MEMS gyroscope, ω_ie^n is the earth's rotation rate, ω_en^n is the angular velocity of the geographic coordinate system relative to the earth, and C_n^b is the transpose of C_b^n; ω_nb^b obtained in this way is substituted into formula (1).
Formula (1) is solved with the quaternion Picard algorithm, giving the result:

$$Q(t_{k+1}) = \exp\!\left(\frac{\Delta\Theta}{2}\right) Q(t_k)$$

where ΔΘ is the 4×4 skew matrix formed from the gyro angle increments (Δθ_x, Δθ_y, Δθ_z) over the sampling interval and Δθ_0 = (Δθ_x^2 + Δθ_y^2 + Δθ_z^2)^{1/2}.
A Taylor series expansion of this result gives the quaternion

$$Q(t_{k+1}) = \left[\cos\frac{\Delta\theta_0}{2}\, I + \frac{\sin(\Delta\theta_0/2)}{\Delta\theta_0}\,\Delta\Theta\right] Q(t_k)$$

and hence the real-time attitude matrix C_b^n of formula (2).
Denoting the entries of the matrix in formula (2) by T with double subscripts, formula (2) can be written as formula (3):

$$C_b^n = \begin{pmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{pmatrix} \tag{3}$$

Since each of the 9 matrix entries in formula (3) can be computed, it further follows that:

$$\theta = \arcsin T_{32}, \qquad \gamma = -\arctan\frac{T_{31}}{T_{33}}, \qquad \psi = \arctan\frac{T_{12}}{T_{22}} \tag{4}$$

In formula (4), θ, γ, ψ are the pitch, roll, and yaw angles of the tunnel detection vehicle 1 mentioned earlier, the three attitude angles of the vehicle, used later in the fusion part for the conversion between the earth coordinate system and coordinate system b.
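For illustration, a minimal numpy sketch of this attitude chain follows, assuming the element convention of formula (4); the function and variable names are illustrative, and arctan2 replaces arctan to keep the correct quadrant:

```python
import numpy as np

def quat_update(q, dtheta):
    """One attitude step from the gyro angle increments (dtheta_x, dtheta_y,
    dtheta_z) over a sampling interval, using the Picard/Taylor closed form."""
    dtx, dty, dtz = dtheta
    dtheta0 = np.sqrt(dtx**2 + dty**2 + dtz**2)
    if dtheta0 < 1e-12:
        return q
    # 4x4 angular-increment (skew) matrix
    Theta = np.array([[0.0, -dtx, -dty, -dtz],
                      [dtx,  0.0,  dtz, -dty],
                      [dty, -dtz,  0.0,  dtx],
                      [dtz,  dty, -dtx,  0.0]])
    q_next = (np.cos(dtheta0 / 2) * np.eye(4)
              + np.sin(dtheta0 / 2) / dtheta0 * Theta) @ q
    return q_next / np.linalg.norm(q_next)   # keep the quaternion normalized

def quat_to_dcm(q):
    """Attitude matrix C_b^n of formula (2) from the quaternion (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0**2 - q1**2 - q2**2 + q3**2]])

def attitude_angles(T):
    """Pitch, roll, yaw per formula (4): theta = arcsin(T_32), etc."""
    theta = np.arcsin(T[2, 1])             # pitch
    gamma = np.arctan2(-T[2, 0], T[2, 2])  # roll
    psi = np.arctan2(T[0, 1], T[1, 1])     # yaw
    return theta, gamma, psi
```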
The position update differential equation of the tunnel detection vehicle 1 is:

$$\dot L = \frac{v_y}{R_M + h}, \qquad \dot\lambda = \frac{v_x}{(R_N + h)\cos L}, \qquad \dot h = v_z \tag{5}$$

In formula (5), v_x, v_y, v_z are the velocities along the three coordinate axes of coordinate system b, obtained by integrating once the accelerations a = [a_x, a_y, a_z]^T measured by the accelerometer; R_M and R_N are the meridian radius of curvature and the prime-vertical radius of curvature at the vehicle's position; L is the latitude of the tunnel detection vehicle 1 on the earth's surface, λ its longitude, and h the altitude. Integrating formula (5) once gives the latitude, longitude, and altitude of the position of the tunnel detection vehicle 1.
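As a worked illustration of formula (5), a minimal Euler-integration sketch follows; the WGS-84-like ellipsoid constants are assumed (the patent does not name one), and the accelerometer output is assumed to be already compensated for gravity and earth rotation:

```python
import numpy as np

RE, E2 = 6378137.0, 0.00669438  # assumed ellipsoid: semi-major axis, eccentricity^2

def radii(lat):
    """Meridian radius R_M and prime-vertical radius R_N at latitude lat (rad)."""
    s2 = np.sin(lat) ** 2
    return RE * (1 - E2) / (1 - E2 * s2) ** 1.5, RE / np.sqrt(1 - E2 * s2)

def position_step(lat, lon, h, v, a, dt):
    """One Euler step of formula (5): integrate acceleration once to velocity,
    then velocity to latitude, longitude, and altitude."""
    v = v + a * dt                               # v = (vx, vy, vz) in frame b
    RM, RN = radii(lat)
    dlat = v[1] / (RM + h) * dt                  # dL/dt = vy / (R_M + h)
    dlon = v[0] / ((RN + h) * np.cos(lat)) * dt  # dlambda/dt = vx / ((R_N + h) cos L)
    return lat + dlat, lon + dlon, h + v[2] * dt, v
```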
Second, image positioning, as shown in fig. 7
Image positioning is completed based on the tunnel pavement marking lines, whose images are acquired by the front-end monocular area-array camera 2 and the rear-end monocular area-array camera 4. In fig. 3, the two curved solid lines are the road boundaries and the dashed line in the middle is the road marking line; the curvature is exaggerated for illustration and is smaller in an actual tunnel. v is the advancing direction of the tunnel detection vehicle 1, and t_k marks the vehicle at time t_k; the later algorithm locates the vehicle's position by integrating over time periods. The acquired images are processed, and the coordinates of the extracted feature points are computed with a pinhole imaging model: the front-end camera 2 and rear-end camera 4 project spatial points onto a two-dimensional photo, the projection is described by each camera's pinhole model, and the relation between picture coordinates and spatial coordinates follows from it; the pinhole model is the camera imaging model in common use today. The yaw angle of the tunnel detection vehicle 1 is then solved from the attitude angles of the front-end and rear-end cameras. This yaw angle is distinct from the one obtained earlier: strapdown-inertial-navigation positioning and image positioning each require their own yaw angle. Finally, the yaw angle of the tunnel detection vehicle 1 is fused with the encoder position reading, and the linear positioning of the vehicle based on the tunnel pavement marking lines is computed.
(1) Preprocessing an image
The monocular area-array cameras shoot on an outdoor highway, where the environment interferes with imaging, so the images must be filtered. Median filtering is used: an odd number of samples is taken from the sampling window, and their median becomes the gray value of the current pixel:

$$g(x, y) = \mathrm{med}\{\, f(x - i,\; y - j) \,\}, \qquad (i, j) \in W \tag{6}$$

The filtered image is binarized with a threshold T: a pixel whose value is greater than T is taken as white; conversely, a pixel whose value is less than T is taken as black.
The Roberts operator extracts the image edges, detecting them by approximating the gradient magnitude with the differences between diagonally adjacent pixels:

$$g(x, y) = |f(x, y) - f(x+1, y+1)| + |f(x+1, y) - f(x, y+1)| \tag{7}$$

where f(x, y) is the input image, g(x, y) the output image, and x and y the pixel coordinates of the image.
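A minimal sketch of the preprocessing chain of formulas (6)-(7), assuming OpenCV for the median filter and binarization; the kernel size and threshold are illustrative values, not from the patent:

```python
import cv2
import numpy as np

def preprocess(gray, thresh=200):
    """Median filter, binarize, then Roberts edges, as in formulas (6)-(7);
    `gray` is assumed to be a uint8 grayscale frame."""
    filtered = cv2.medianBlur(gray, 5)   # median of an odd-sized sampling window
    _, binary = cv2.threshold(filtered, thresh, 255, cv2.THRESH_BINARY)
    f = binary.astype(np.float32)
    # Roberts cross: absolute differences along the two diagonals
    gx = np.abs(f[:-1, :-1] - f[1:, 1:])
    gy = np.abs(f[1:, :-1] - f[:-1, 1:])
    return np.clip(gx + gy, 0, 255).astype(np.uint8)
```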
(2) Extracting and matching image feature points
The lines between lanes in the tunnel are white dashed lines, so the feature points can be selected on them. Image positioning relies on the front-end monocular area-array camera 2 and rear-end monocular area-array camera 4 on the roof of the tunnel detection vehicle 1 each photographing the same segment of marking line, from which the vehicle's position is computed; the two images of the same segment must therefore be stereo-matched.
First, the Hessian matrix of a point in the image is computed at scale σ:

$$H(x, \sigma) = \begin{pmatrix} L_{xx}(x,\sigma) & L_{xy}(x,\sigma) \\ L_{yx}(x,\sigma) & L_{yy}(x,\sigma) \end{pmatrix} \tag{8}$$

where L_{xx}(x, σ) is the convolution of the image at the point with the Gaussian second-order derivative ∂²g(σ)/∂x², and L_{xy}(x, σ), L_{yx}(x, σ), and L_{yy}(x, σ) are the convolutions with ∂²g(σ)/∂x∂y, ∂²g(σ)/∂y∂x, and ∂²g(σ)/∂y² respectively.
Box filters replace the Gaussian second-order derivatives, the Hessian responses of the candidate feature points and their surrounding points are computed, and the integral image enables fast computation. The discriminant is:

Det(Hessian(x, σ)) = D_xx × D_yy − (0.9 D_xy)² (9)

where Det(Hessian(x, σ)) is the determinant of the Hessian matrix and D_xx, D_yy, D_xy are L_xx(x, σ), L_yy(x, σ), L_xy(x, σ) obtained with the box filters. If the determinant is positive and the point's response is the maximum among its neighboring pixels, the point is taken as a feature point. The weighting factor 0.9 balances the approximation error of the box filter.
The feature point sets of the two images extracted by the front-end monocular area-array camera 2 and the rear-end monocular area-array camera 4 are denoted N_1 and N_2. For any feature point a in N_1, the feature points b and c in N_2 with the smallest and second-smallest Euclidean distances to a are found; these distances are d_1 and d_2 respectively. If their ratio is smaller than the threshold T, feature points a and b are judged a matching pair, denoted (a, b); the judgment formula is:

$$\frac{d_1}{d_2} < T \tag{10}$$

The matching point-pair set of the two images is obtained by the same process.
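A minimal sketch of the ratio test of formula (10) over descriptor arrays; the 0.7 threshold is an illustrative value, and at least two descriptors are assumed in N_2:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.7):
    """Match feature descriptors of image 1 against image 2 with the
    d1/d2 ratio test of formula (10)."""
    pairs = []
    for a, d_a in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d_a, axis=1)  # Euclidean distances to N_2
        b, c = np.argsort(dists)[:2]                 # nearest and second nearest
        d1, d2 = dists[b], dists[c]
        if d2 > 0 and d1 / d2 < ratio:
            pairs.append((a, b))                     # accept (a, b) as a match
    return pairs
```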
(3) Improved RANSAC algorithm to eliminate mismatches
First, with the improved RANSAC algorithm of this work, matching pairs in which a feature point is matched to more than 3 other feature points are found in the matching point-pair set, judged to be mismatches, and removed. The Euclidean distance ratio r_i of each remaining pair is computed, and the pairs are arranged by increasing r_i into a new set, which is divided evenly into four groups so that each group holds no fewer than five matching pairs. Four pairs are picked arbitrarily from the first group to judge it as a candidate optimal model. If none of the four selected pairs is an inlier, the group is very unlikely to be the optimal model, so it is discarded and its remaining pairs are not tested; if two of the four are found to be inliers, the remaining pairs of the first group are tested for inliers. The same operation is applied to the remaining groups, and the group with the most inliers is selected as the optimal model.
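A schematic reading of this grouped screening, under the assumption that an inlier test (e.g. consistency with an estimated homography) is supplied by the caller; the handling of a group with exactly one probed inlier is left unspecified in the text and is skipped here:

```python
import numpy as np

def grouped_ransac(pairs, ratios, is_inlier):
    """Sort pairs by their distance ratio r_i, split into four groups, probe
    four pairs per group, and keep the group with the most inliers."""
    order = np.argsort(ratios)              # increasing r_i
    best, best_count = None, -1
    for g in np.array_split(order, 4):
        if len(g) < 5:                      # each group should hold >= 5 pairs
            continue
        flags = [is_inlier(pairs[i]) for i in g[:4]]   # probe four pairs
        if not any(flags):
            continue                        # all four are outliers: discard group
        if sum(flags) >= 2:                 # two probed inliers: test the rest
            count = sum(is_inlier(pairs[i]) for i in g)
            if count > best_count:
                best, best_count = [pairs[i] for i in g], count
        # exactly one probed inlier is inconclusive in the text; skipped here
    return best
```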
(4) Image positioning
The monocular area-array cameras are calibrated with Zhang Zhengyou's method to obtain their intrinsic and extrinsic parameters, giving the conversion between the pixel coordinate system of a feature point and the world coordinate system:

$$Z_c \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R & t \\ 0^T & 1 \end{pmatrix} \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix} = M_1 M_2 \tilde{X}_w = M \tilde{X}_w \tag{11}$$

where f_x = f/dx and f_y = f/dy are the scale factors of the image's horizontal and vertical axes, c_x and c_y are the coordinates of the image coordinate system's origin in the pixel coordinate system, Z_c is a constant, and u and v are the feature point's abscissa and ordinate in the pixel coordinate system. R and t are the extrinsic parameters; X_w, Y_w, and Z_w are the three-dimensional coordinates in the world coordinate system, and X̃_w is the corresponding homogeneous coordinate vector of the feature point. In M_1, the parameters f_x, f_y, c_x, and c_y depend only on the internal structure of the monocular area-array camera and are called its intrinsic parameters; M_2 is determined by the camera's orientation relative to the world coordinate system and is called the extrinsic parameter matrix. M_1 and M_2 are determined by the intrinsic and extrinsic parameters obtained in calibration, and M is the projection matrix.
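The patent does not spell out the back-projection step, so the following sketch assumes the usual approach with formula (11): invert the intrinsics, form a viewing ray, and intersect it with the road plane Z_w = 0:

```python
import numpy as np

def pixel_to_world(u, v, K, R, t, Zw=0.0):
    """Back-project pixel (u, v) to the world plane Z_w = Zw using the
    calibrated intrinsics K (3x3) and extrinsics (R, t) of formula (11)."""
    ray_c = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    origin = -R.T @ t                                 # camera center in world frame
    direction = R.T @ ray_c                           # ray direction in world frame
    s = (Zw - origin[2]) / direction[2]               # assumes a non-horizontal ray
    return origin + s * direction                     # (X_w, Y_w, Z_w)
```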
The endpoint of the marking line and the bottom center point of each monocular camera are selected as feature points. After coordinate transformation, the world coordinates of the marking-line endpoint and the camera bottom center point extracted by the front-end monocular area-array camera 2 are (X_1, Y_1) and (X_2, Y_2), and those extracted by the rear-end monocular area-array camera 4 are (X_1', Y_1') and (X_2', Y_2'). The distance from the front-end camera 2 on the tunnel detection vehicle 1 to the marking line is then:

$$L_1 = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2} \tag{12}$$

Similarly, the distance L_2 from the rear-end camera 4 on the tunnel detection vehicle 1 to the marking line is obtained.
For two-dimensional positioning of the tunnel detection vehicle 1, the marking line is photographed with the front-end roof camera 2 and its yaw angle is solved; the vehicle travels forward, the same road section is photographed again with the rear-end roof camera 4, and that camera's yaw angle is solved in turn, as shown in fig. 4:

$$\psi_1 = \arcsin\frac{x_1}{L_1}, \qquad \psi_2 = \arcsin\frac{x_2}{L_2} \tag{13}$$

where x_1 and x_2 are the perpendicular distances from the front-end camera 2 and rear-end camera 4 to the marking line, and ψ_1 and ψ_2 are the yaw angles of the front and rear ends of the tunnel detection vehicle 1. If the vehicle does not deflect on this section, the yaw angles of its front and rear ends are equal; if it does deflect, the difference of the two is the yaw angle of the vehicle body on this section:

$$\psi = \psi_1 - \psi_2 \tag{14}$$
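A minimal sketch of formulas (12)-(14), assuming the world coordinates of the marking-line endpoints and camera bottom centers are already available from the back-projection above:

```python
import numpy as np

def vehicle_yaw(line_f, cam_f, line_r, cam_r, x1, x2):
    """Distances L1, L2 from each camera to the marking-line endpoint,
    per-camera yaw via arcsin, vehicle yaw as their difference."""
    L1 = np.hypot(line_f[0] - cam_f[0], line_f[1] - cam_f[1])  # formula (12)
    L2 = np.hypot(line_r[0] - cam_r[0], line_r[1] - cam_r[1])
    psi1 = np.arcsin(x1 / L1)   # front-end camera yaw, formula (13)
    psi2 = np.arcsin(x2 / L2)   # rear-end camera yaw
    return psi1 - psi2          # vehicle yaw on this section, formula (14)
```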
The encoder mounted on the wheel output shaft of the tunnel detection vehicle 1 yields the vehicle's position reading; fusing it with the yaw angle gives the position of the tunnel detection vehicle 1 at the corresponding angle, realizing linear positioning of the vehicle based on the tunnel marking lines. The recursion is:

$$x(t_{k+1}) = x(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\sin\psi(\tau)\,d\tau, \qquad y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\cos\psi(\tau)\,d\tau \tag{15}$$

where τ ∈ (t_k, t_{k+1}); v(τ) is the speed along the driving direction of the tunnel detection vehicle 1 output by the encoder at time τ; v_x = v sin ψ and v_y = v cos ψ are the components of the vehicle's speed along the x_2 and y_2 directions of the earth coordinate system; x(t_{k+1}) and y(t_{k+1}) are the vehicle's position at time t_{k+1}; and ψ(τ) is the yaw angle estimated from the tunnel marking lines at time τ. When k = 0, (x(t_0), y(t_0)) is the position of the tunnel detection vehicle 1 at the initial moment of entering the tunnel, provided by GPS.
The movement track of the tunnel detection vehicle 1 in the tunnel is the set of its coordinate points at successive times, namely:

s = {(x(t_0), y(t_0)), (x(t_1), y(t_1)), …, (x(t_k), y(t_k)), (x(t_{k+1}), y(t_{k+1}))} (16)
In addition, wear or pavement design can leave some road sections without marking lines, in which case the lateral position of the vehicle cannot be detected by image positioning. On an expressway, the dashed lane-dividing line has a standard segment length of 6 meters with 9-meter gaps, so the line may be continuously lost over a short distance of up to 15 meters; within that distance the lateral position defaults to its value at the last moment, and if the gap exceeds that distance the technical scheme of the invention is not applicable.
Taking t_0, before the tunnel detection vehicle 1 enters the tunnel, as an example, the vehicle's coordinate position is (x(t_0), y(t_0)). If the vehicle does not deviate on the section, substituting into formula (15) with constant yaw ψ(τ) = ψ(t_0) gives:

$$x(t_1) = x(t_0) + \sin\psi(t_0)\int_{t_0}^{t_1} v(\tau)\,d\tau, \qquad y(t_1) = y(t_0) + \cos\psi(t_0)\int_{t_0}^{t_1} v(\tau)\,d\tau$$

If the tunnel detection vehicle 1 deviates between times t_1 and t_2, then:

$$x(t_2) = x(t_1) + \int_{t_1}^{t_2} v(\tau)\sin\psi(\tau)\,d\tau, \qquad y(t_2) = y(t_1) + \int_{t_1}^{t_2} v(\tau)\cos\psi(\tau)\,d\tau$$
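A discrete-time sketch of recursion (15), assuming evenly spaced encoder and yaw samples of period dt; the returned list is the track s of formula (16):

```python
import numpy as np

def dead_reckon(x0, y0, speeds, yaws, dt):
    """Accumulate the encoder speed v(tau) resolved through the yaw angle
    psi(tau); (x0, y0) is the GPS fix at the tunnel entrance."""
    track = [(x0, y0)]
    x, y = x0, y0
    for v, psi in zip(speeds, yaws):
        x += v * np.sin(psi) * dt   # vx = v sin(psi), earth-frame x
        y += v * np.cos(psi) * dt   # vy = v cos(psi), earth-frame y
        track.append((x, y))
    return track                    # the point set s of formula (16)
```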
Third, fusion positioning
Image positioning and strapdown inertial navigation positioning are two independent systems; before fusion, the coordinate systems must first be unified, i.e. coordinate system b and the earth coordinate system must be interconverted. The conversion relation is:

$$C_b^e = C_n^e C_b^n, \qquad C_n^e = \begin{pmatrix} -\sin\lambda & -\sin L\cos\lambda & \cos L\cos\lambda \\ \cos\lambda & -\sin L\sin\lambda & \cos L\sin\lambda \\ 0 & \cos L & \sin L \end{pmatrix}$$

where C_b^n is determined by θ, γ, ψ, respectively the pitch, roll, and yaw angles of the tunnel detection vehicle 1, and (λ, L) are respectively the vehicle's longitude and latitude.
The image-positioning track based on the tunnel marking lines and the positioning track based on the strapdown inertial navigation system (i.e. the converted latitude and longitude values) are displayed in a track map, which shows the path traveled by the tunnel detection vehicle 1 at each moment and the real-time coordinates of the corresponding points. In theory the two tracks should coincide completely, but strapdown inertial navigation accumulates error severely, while the monocular area-array camera is easily affected by illumination and the vibration of the tunnel detection vehicle 1 reduces the image-positioning accuracy. On this basis, integrated positioning is realized by fusing the image and strapdown inertial navigation results.
As shown in fig. 8, the real-time position data of the tunnel detection vehicle 1 in the track map are compared, and the set error value x decides whether fusion positioning is required. When the absolute difference of any coordinate value (x, y) of the vehicle between the two tracks exceeds the set error value x = 0.2 m, the strapdown inertial navigation positioning error at that moment is judged to be large, and the image positioning value is adopted as compensation for fusion positioning: the two are fused with a Kalman filter. First the noise variance σ_1² of the inertial navigation positioning (i.e. the converted latitude and longitude values) is solved; second, the noise variance σ_2² of x and y in the image-positioning movement track is solved; and the relation between the current position Pos(k) of the tunnel detection vehicle 1, the position Pos_1(k) corresponding to the converted latitude and longitude values, and the position Pos_2(k) corresponding to x and y of the image-positioning track is solved:

$$Pos(k) = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\, Pos_1(k) + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\, Pos_2(k)$$

from which the fused position parameters are obtained and the tunnel detection vehicle is positioned. If no coordinate difference of the tunnel detection vehicle 1 between the two tracks exceeds the 0.2 m error value, the inertial navigation positioning at that moment is judged accurate, and the inertial navigation positioning data are output to position the vehicle.
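A minimal sketch of this decision and fusion logic, assuming scalar noise variances per source; the 0.2 m threshold follows the text:

```python
import numpy as np

def fuse_position(pos_ins, pos_img, var_ins, var_img, err=0.2):
    """Output the inertial position when both coordinate differences are
    within the set error value, otherwise blend the two sources with
    inverse-variance weights."""
    dx = abs(pos_ins[0] - pos_img[0])
    dy = abs(pos_ins[1] - pos_img[1])
    if dx <= err and dy <= err:
        return np.asarray(pos_ins)            # inertial positioning judged accurate
    w_ins = var_img / (var_ins + var_img)     # weight grows as the rival's noise grows
    w_img = var_ins / (var_ins + var_img)
    return w_ins * np.asarray(pos_ins) + w_img * np.asarray(pos_img)
```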

Claims (10)

1. A tunnel detection vehicle positioning method based on a strapdown inertial navigation system and image positioning is characterized by comprising the following steps:
S1, first obtaining the position update differential equation of the tunnel detection vehicle using an accelerometer, then solving it to obtain the latitude and longitude of the tunnel detection vehicle on the earth's surface;
A monocular area-array camera at the front end of the tunnel detection vehicle photographs the intermittent dashed lines between lanes in the tunnel, and the monocular area-array camera at the rear end then photographs the same dashed lines. Taking the endpoint of the marking line and the bottom center point of the monocular area-array camera as feature points, the yaw angle of the tunnel detection vehicle is solved; the obtained yaw angle is fused with the position reading of an encoder and combined with the position at the initial moment the tunnel detection vehicle enters the tunnel, giving the movement track of the tunnel detection vehicle in the earth coordinate system. The track is a point set of X and Y, where X is the vehicle's value along the x direction of the earth coordinate system and Y its value along the y direction;
S2, converting the latitude of the tunnel detection vehicle on the earth's surface into a value in the earth coordinate system, denoted M, comparing M with X in the movement track of S1 and denoting the absolute difference O; converting the longitude likewise into a value denoted N, comparing N with Y in the movement track of S1 and denoting the absolute difference P. When both O and P are within a set error value, the tunnel detection vehicle is positioned using M and N; when at least one of O and P exceeds the set error value, M and N are fused with X and Y of the movement track of S1 in a Kalman filter, and the fused values are used to position the tunnel detection vehicle.
2. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 1, wherein the position update differential equation in S1 is as follows:

$$\dot{L} = \frac{v_y}{R_M + h}, \qquad \dot{\lambda} = \frac{v_x}{(R_N + h)\cos L}$$

where L is the latitude of the tunnel detection vehicle on the earth's surface, v_y is its velocity component along the y_1 direction of coordinate system b, R_M is the meridian radius of curvature at the vehicle's position, and h is the altitude; λ is the longitude of the vehicle on the earth's surface, v_x is its velocity component along the x_1 direction of coordinate system b, and R_N is the prime-vertical radius of curvature at the vehicle's position;
In coordinate system b, the centroid of the tunnel detection vehicle is the origin, the positive x_1 axis points right along the vehicle's axle, and the positive y_1 axis points forward perpendicular to the x_1 axis.
3. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 1, wherein in S1 the feature point sets extracted by the front-end and rear-end monocular area-array cameras from the two identical images are denoted N_1 and N_2 respectively; for any feature point a in N_1, the feature points b and c in N_2 with the smallest and second-smallest Euclidean distances to a are found, these distances being d_1 and d_2 respectively; if the ratio d_1/d_2 is smaller than a threshold, feature points a and b are judged a matching pair, denoted (a, b); the matching point-pair set of the two images is obtained by the same process, mismatches in the set are removed, and the endpoint of the marking line is selected as the feature point.
4. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 3, wherein matching pairs in which one feature point is matched to more than 3 other feature points are found in the matching point-pair set, judged to be mismatches, and removed; the Euclidean distance ratio r_i of each remaining pair is computed, and the pairs are arranged by increasing r_i into a new matching point-pair set, which is divided evenly into four groups; four pairs are picked arbitrarily from the first group: if none of the four is an inlier, the first group is discarded and its remaining pairs are not tested; if two of them are inliers, the remaining pairs of the first group are tested for inliers; the same operation is applied to the remaining groups, the group with the most inliers is selected as the optimal model, and the removal of mismatches from the matching point-pair set is complete.
5. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 1, wherein S1 transforms the marking-line endpoint extracted by the front-end monocular area-array camera and the bottom center point of that camera into the world coordinate system, denoted (X_1, Y_1) and (X_2, Y_2) respectively; the marking-line endpoint extracted by the rear-end monocular area-array camera and the bottom center point of that camera are likewise transformed, denoted (X_1', Y_1') and (X_2', Y_2'); the distance L_1 from the front-end camera to the marking-line endpoint and the distance L_2 from the rear-end camera to the marking-line endpoint are:

$$L_1 = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2}, \qquad L_2 = \sqrt{(X_1' - X_2')^2 + (Y_1' - Y_2')^2}$$

The yaw angles of the front-end and rear-end monocular area-array cameras are, in order:

$$\psi_1 = \arcsin\frac{x_1}{L_1}, \qquad \psi_2 = \arcsin\frac{x_2}{L_2}$$

where x_1 and x_2 are the perpendicular distances from the front-end and rear-end cameras to the marking line; the yaw angle of the tunnel detection vehicle is:

$$\psi = \psi_1 - \psi_2$$
6. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 1, wherein in S1 the obtained yaw angle and the encoder position reading are fused by the following recursion:

$$x(t_{k+1}) = x(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\sin\psi(\tau)\,d\tau, \qquad y(t_{k+1}) = y(t_k) + \int_{t_k}^{t_{k+1}} v(\tau)\cos\psi(\tau)\,d\tau$$

where τ ∈ (t_k, t_{k+1}); v(τ) is the speed along the vehicle's driving direction output by the encoder at time τ; v_x = v sin ψ and v_y = v cos ψ are the vehicle's speed components along the x and y directions of the earth coordinate system; x(t_{k+1}) and y(t_{k+1}) are the vehicle's position at time t_{k+1}; and ψ(τ) is the vehicle's yaw angle at time τ; the set of the vehicle's coordinate points at successive times is its movement track in the tunnel, specifically: s = {(x(t_0), y(t_0)), (x(t_1), y(t_1)), …, (x(t_k), y(t_k)), (x(t_{k+1}), y(t_{k+1}))}.
7. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 6, wherein when $k = 0$, $(x(t_0), y(t_0))$ is the position of the tunnel detection vehicle at the initial moment of entering the tunnel, and this initial position is provided by the GPS.
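As a sketch of the recursion in claims 6 and 7, the integrals can be approximated by stepping over the encoder samples; the constant sampling interval and the seed values are illustrative assumptions, not part of the claims.

```python
import math

def dead_reckon(x0, y0, speeds, yaws, dt):
    """Recursive track recovery sketched from claims 6 and 7.

    x0, y0 : initial position at tunnel entry (provided by GPS per claim 7)
    speeds : encoder speed v(tau) at each sample
    yaws   : vehicle yaw phi(tau) at each sample
    dt     : sampling interval in seconds (assumed constant)
    """
    track = [(x0, y0)]
    x, y = x0, y0
    for v, phi in zip(speeds, yaws):
        # Euler step approximating the integral over (t_k, t_k+1)
        x += v * math.cos(phi) * dt   # v_x component in the earth frame
        y += v * math.sin(phi) * dt   # v_y component in the earth frame
        track.append((x, y))
    return track   # the set s of coordinate points = motion track

# Illustrative run: constant 5 m/s, slight leftward drift, 10 Hz sampling.
s = dead_reckon(0.0, 0.0, [5.0] * 20, [0.01 * k for k in range(20)], 0.1)
print(s[-1])
```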
8. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 2, wherein in S2 the latitude and longitude of the tunnel detection vehicle on the earth surface are converted into values in the earth coordinate system by a conversion formula expressed in terms of the attitude angles and geographic coordinates of the vehicle, wherein the earth coordinate system is denoted coordinate system e; θ, γ and ψ are the pitch angle, roll angle and yaw angle of the tunnel detection vehicle, respectively; and λ and L are the longitude and latitude of the tunnel detection vehicle, respectively.
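Claim 8's conversion formula is not reproduced in this text, so the following sketch substitutes a conventional spherical-earth conversion of longitude λ and latitude L into earth-frame coordinates; the radius constant and function name are assumptions, not the claimed formula.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean spherical radius, an illustrative choice

def geodetic_to_earth(lam_deg, L_deg):
    """Convert longitude/latitude (degrees) to earth-frame x, y, z (meters)."""
    lam, L = math.radians(lam_deg), math.radians(L_deg)
    x = EARTH_RADIUS_M * math.cos(L) * math.cos(lam)
    y = EARTH_RADIUS_M * math.cos(L) * math.sin(lam)
    z = EARTH_RADIUS_M * math.sin(L)
    return x, y, z

print(geodetic_to_earth(108.95, 34.27))  # e.g. a point near Xi'an
```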
9. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 8, wherein the pitch angle, the roll angle and the yaw angle of the tunnel detection vehicle are obtained by the following procedure:

A coordinate system n is defined with the centroid of the tunnel detection vehicle as the origin, due east as the positive $x_2$ axis, due north as the positive $y_2$ axis, and the vertically upward direction as the positive $z_2$ axis; a vector $r_b = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$ is then defined in coordinate system b, its transformation into coordinate system n is denoted $r_n$, and a quaternion $Q = q_0 + q_1\mathbf{i} + q_2\mathbf{j} + q_3\mathbf{k}$ is defined; using the quaternion algorithm, $r_n$ can be expressed as:

$$r_n = Q \otimes r_b \otimes Q^*$$

from which the following formula is further obtained:

$$r_n = C_b^n\, r_b$$

The attitude matrix $C_b^n$ from coordinate system b to coordinate system n is expressed by the following formula:

$$C_b^n = \begin{bmatrix} q_0^2+q_1^2-q_2^2-q_3^2 & 2(q_1 q_2-q_0 q_3) & 2(q_1 q_3+q_0 q_2) \\ 2(q_1 q_2+q_0 q_3) & q_0^2-q_1^2+q_2^2-q_3^2 & 2(q_2 q_3-q_0 q_1) \\ 2(q_1 q_3-q_0 q_2) & 2(q_2 q_3+q_0 q_1) & q_0^2-q_1^2-q_2^2+q_3^2 \end{bmatrix}$$

where $q_0$, $q_1$, $q_2$, $q_3$ are real numbers, $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are direction vectors, and $Q^*$ is the conjugate of $Q$;

the relation between $\dot{Q}$ and $\omega_{nb}^b$ is expressed in matrix form as follows:

$$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$$

where $\omega_{nb}^b$ is the angular velocity of the tunnel detection vehicle in coordinate system b measured by the MEMS gyroscope, and $\omega_x$, $\omega_y$ and $\omega_z$ are its angular velocities about the $x_1$ axis, the $y_1$ axis and the $z_1$ axis, respectively, the direction vertically upward, perpendicular to the $x_1$ axis, being the positive $z_1$ axis; $\omega_{nb}^b = \omega_{ib}^b - (C_b^n)^T(\omega_{ie}^n + \omega_{en}^n)$, where $\omega_{ib}^b$ is the angular velocity of the tunnel detection vehicle output by the MEMS gyroscope, $\omega_{ie}^n$ is the rotation angular velocity of the earth, $\omega_{en}^n$ is the angular velocity of the geographic coordinate system relative to the earth, and $(C_b^n)^T$ is the transpose of $C_b^n$;

solving the relation between $\dot{Q}$ and $\omega_{nb}^b$ with the quaternion Picard algorithm gives the result of the following formula:

$$Q(t_{k+1}) = \left[\cos\frac{\Delta\theta_0}{2}\, I + \frac{\sin(\Delta\theta_0/2)}{\Delta\theta_0}\, \Delta\Theta\right] Q(t_k)$$

where $\Delta\Theta$ is the skew-symmetric matrix of the angular increments $\Delta\theta_x$, $\Delta\theta_y$, $\Delta\theta_z$ integrated over $(t_k, t_{k+1})$, and $\Delta\theta_0 = \sqrt{\Delta\theta_x^2 + \Delta\theta_y^2 + \Delta\theta_z^2}$; performing a Taylor series expansion further yields the update shown in the following formula:

$$Q(t_{k+1}) = \left[\left(1 - \frac{\Delta\theta_0^2}{8} + \frac{\Delta\theta_0^4}{384}\right) I + \left(\frac{1}{2} - \frac{\Delta\theta_0^2}{48}\right)\Delta\Theta\right] Q(t_k)$$

The attitude matrix $C_b^n$ is further represented by elements $T$ with double subscripts as follows:

$$C_b^n = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix}$$

and finally:

$$\theta = \arcsin T_{32}, \qquad \gamma = -\arctan\frac{T_{31}}{T_{33}}, \qquad \psi = \arctan\frac{T_{12}}{T_{22}}$$
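A compact sketch of the attitude chain in claim 9 under the formulas reconstructed above: one truncated-Taylor quaternion update from integrated gyro increments, then extraction of pitch, roll and yaw from the attitude matrix elements. Function names and the sample increment are illustrative.

```python
import numpy as np

def quat_update(q, dtheta):
    """One attitude step from integrated angular increments (radians).

    q      : quaternion [q0, q1, q2, q3]
    dtheta : integrated gyro increments [dtx, dty, dtz] over the step
    """
    dtx, dty, dtz = dtheta
    dt0 = np.linalg.norm(dtheta)                  # total angular increment
    big_theta = np.array([[0.0, -dtx, -dty, -dtz],
                          [dtx,  0.0,  dtz, -dty],
                          [dty, -dtz,  0.0,  dtx],
                          [dtz,  dty, -dtx,  0.0]])
    # Truncated Taylor series of cos(dt0/2) and sin(dt0/2)/dt0.
    a = 1 - dt0**2 / 8 + dt0**4 / 384
    b = 0.5 - dt0**2 / 48
    q_next = (a * np.eye(4) + b * big_theta) @ q
    return q_next / np.linalg.norm(q_next)        # renormalize

def attitude_angles(q):
    """Pitch, roll, yaw from the attitude matrix C_b^n (reconstructed form)."""
    q0, q1, q2, q3 = q
    T12 = 2 * (q1 * q2 - q0 * q3)
    T22 = q0**2 - q1**2 + q2**2 - q3**2
    T31 = 2 * (q1 * q3 - q0 * q2)
    T32 = 2 * (q2 * q3 + q0 * q1)
    T33 = q0**2 - q1**2 - q2**2 + q3**2
    theta = np.arcsin(np.clip(T32, -1.0, 1.0))    # pitch
    gamma = -np.arctan2(T31, T33)                 # roll
    psi = np.arctan2(T12, T22)                    # yaw
    return theta, gamma, psi

q = np.array([1.0, 0.0, 0.0, 0.0])                # level initial attitude
q = quat_update(q, np.array([0.0, 0.0, 0.01]))    # small yaw-rate increment
print(attitude_angles(q))
```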
10. The tunnel detection vehicle positioning method based on the strapdown inertial navigation system and image positioning according to claim 1, wherein when S2 fuses M and N with X and Y in the motion track of S1 by means of a Kalman filter, the following process is adopted:

First, the noise variance $\sigma_1^2$ of M and N and the noise variance $\sigma_2^2$ of X and Y in the motion track of S1 are solved; the relationship between the current position $Pos(k)$ of the tunnel detection vehicle, the position $Pos_1(k)$ corresponding to M and N, and the position $Pos_2(k)$ corresponding to X and Y in the motion track of S1 is:

$$Pos(k) = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\,Pos_1(k) + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,Pos_2(k)$$

and the fused value is obtained as the current position $Pos(k)$ of the tunnel detection vehicle.
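A minimal sketch of the minimum-variance fusion reconstructed above, assuming the two noise variances are already known; in the patent they would come from S2's filter models, and the names here are illustrative.

```python
def fuse_positions(pos1, pos2, var1, var2):
    """Minimum-variance fusion of two position estimates.

    pos1, var1 : position from the strapdown solution (M, N) and its noise variance
    pos2, var2 : position from the image/encoder track (X, Y) and its noise variance
    """
    w1 = var2 / (var1 + var2)   # a source's weight grows as the other gets noisier
    w2 = var1 / (var1 + var2)
    return tuple(w1 * a + w2 * b for a, b in zip(pos1, pos2))

# Example: the inertial fix is trusted twice as much as the image track.
print(fuse_positions((105.2, 33.8), (105.6, 33.5), var1=1.0, var2=2.0))
```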
CN202210736964.7A 2022-06-27 2022-06-27 Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning Active CN115127547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210736964.7A CN115127547B (en) 2022-06-27 2022-06-27 Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning


Publications (2)

Publication Number Publication Date
CN115127547A (en) 2022-09-30
CN115127547B (en) 2024-04-19

Family

ID=83378999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210736964.7A Active CN115127547B (en) 2022-06-27 2022-06-27 Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning

Country Status (1)

Country Link
CN (1) CN115127547B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106197407A * 2016-06-23 2016-12-07 Changsha University Subway positioning method and system based on inertial sensors
CN107806874A * 2017-10-23 2018-03-16 Northwestern Polytechnical University Vision-aided initial alignment method for inertial navigation in polar regions
CN109974697A * 2019-03-21 2019-07-05 The 707th Research Institute of China Shipbuilding Industry Corporation High-precision mapping method based on an inertial system
CN111065043A * 2019-10-25 2020-04-24 Chongqing University of Posts and Telecommunications System and method for fusion positioning of vehicles in a tunnel based on vehicle-road communication
WO2020087846A1 * 2018-10-31 2020-05-07 Southeast University Navigation method based on an iterated extended Kalman filter fusing inertial and monocular vision measurements
CN114061611A * 2021-11-17 2022-02-18 Tencent Technology (Shenzhen) Co., Ltd. Target object positioning method, apparatus, storage medium and computer program product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7162367B2 (en) * 1999-11-29 2007-01-09 American Gnc Corporation Self-contained/interruption-free positioning method and system thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A practical strapdown inertial navigation algorithm for aircraft; Chen Kexun; Qiu Wei; Journal of Taiyuan University of Technology; 2020-12-31 (05); 142-148 *
Combined positioning method for a roadheader based on total station and strapdown inertial navigation; Zhang Xuhui; Liu Boxing; Zhang Chao; Yang Wenjuan; Zhao Jianxun; Industry and Mine Automation; 2020-12-31 (09); 4-10 *
Design of a strapdown inertial navigation system simulator for unmanned vehicles; Li Chao; Wang Jihua; Guo Dong; Qu Jinyu; Computer Simulation; 2020-03-15 (03); 123-126 *


Similar Documents

Publication Publication Date Title
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN110631593B (en) Multi-sensor fusion positioning method for automatic driving scene
EP3631494B1 (en) Integrated sensor calibration in natural scenes
CN109631887B Inertial navigation high-precision positioning method based on binocular vision, accelerometer and gyroscope
CN107704821B (en) Vehicle pose calculation method for curve
JP5435306B2 (en) Image processing system and positioning system
JP5057184B2 (en) Image processing system and vehicle control system
CN101473195B (en) Positioning device
CN110332945B (en) Vehicle navigation method and device based on traffic road marking visual identification
CN111065043B (en) System and method for fusion positioning of vehicles in tunnel based on vehicle-road communication
CN108885106A Vehicle component control using maps
CN107229063A Navigation and positioning accuracy correction method for driverless cars based on fusion of GNSS and visual odometry
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN110307836B Accurate positioning method for curb-following cleaning by an unmanned cleaning vehicle
WO2020083103A1 (en) Vehicle positioning method based on deep neural network image recognition
JP4596566B2 (en) Self-vehicle information recognition device and self-vehicle information recognition method
CN111426320B Vehicle autonomous navigation method based on image matching/inertial navigation/odometer
JP5286653B2 (en) Stationary object map generator
CN112819711B (en) Monocular vision-based vehicle reverse positioning method utilizing road lane line
CN111649737A (en) Visual-inertial integrated navigation method for precise approach landing of airplane
KR20210034253A (en) Method and device to estimate location
CN110018503B (en) Vehicle positioning method and positioning system
CN114088114A (en) Vehicle pose calibration method and device and electronic equipment
JP4986883B2 (en) Orientation device, orientation method and orientation program
CN113405555B (en) Automatic driving positioning sensing method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant