CN113403942B - Label-assisted bridge detection unmanned aerial vehicle visual navigation method - Google Patents


Info

Publication number
CN113403942B
Authority
CN
China
Prior art keywords
coordinate system
unmanned aerial
aerial vehicle
matrix
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110767675.9A
Other languages
Chinese (zh)
Other versions
CN113403942A (en
Inventor
张夷斋
杨奇磊
黄攀峰
张帆
刘正雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202110767675.9A priority Critical patent/CN113403942B/en
Publication of CN113403942A publication Critical patent/CN113403942A/en
Application granted granted Critical
Publication of CN113403942B publication Critical patent/CN113403942B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01D CONSTRUCTION OF BRIDGES, ELEVATED ROADWAYS OR VIADUCTS; ASSEMBLY OF BRIDGES
    • E01D19/00 Structural or constructional details of bridges
    • E01D19/10 Railings; Protectors against smoke or gases, e.g. of locomotives; Maintenance travellers; Fastening of pipes or cables to bridges
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00 Type of UAV
    • B64U10/10 Rotorcrafts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06037 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Structural Engineering (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a label-assisted visual navigation method for a bridge-inspection unmanned aerial vehicle, which comprises the following steps: arranging a plurality of positioning two-dimensional code labels on the bridge to be detected at intervals along the flight route of the unmanned aerial vehicle; continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle during flight, and, when the camera observes a positioning two-dimensional code label, querying the coordinate value combination corresponding to that label; determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination; calculating first position information of the unmanned aerial vehicle by combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination; and replacing the second position information of the unmanned aerial vehicle obtained through VIO with the first position information, then continuing unmanned aerial vehicle navigation. The invention can eliminate the drift error generated by the VIO algorithm, improve positioning precision, and reduce the workload of back-end optimization.

Description

Label-assisted bridge detection unmanned aerial vehicle visual navigation method
Technical Field
The invention belongs to the technical field of large-span bridge detection, and particularly relates to a label-assisted Unmanned Aerial Vehicle (UAV) visual navigation method for bridge detection.
Background
With the rapid development of the national economy, the number of bridges in China has increased greatly, and bridge safety inspection has become a problem that must be addressed. When manual inspection is applied to bridges that are high above ground or deep water, wide, and structurally complex, it faces serious engineering problems: high inspection difficulty, low efficiency, large blind areas, and safety risks. The core difficulty of bridge inspection lies in the inaccessible areas under the bridge. A promising current approach is therefore to perform bridge inspection with an unmanned aerial vehicle.
At present, most unmanned aerial vehicles navigate by GPS satellite navigation, inertial navigation, or combined navigation, such as strapdown inertial navigation or laser radar fused with inertial navigation. For bridge inspection, the environment in the area under the bridge is complex and satellite signals are weak, so these navigation methods are unsuitable. Visual navigation methods are mainly based on SLAM (Simultaneous Localization and Mapping). To compensate for shortcomings of SLAM such as scale ambiguity, an IMU (Inertial Measurement Unit) is now commonly added on top of SLAM to improve robustness; this is the current mainstream navigation approach based on VIO (Visual-Inertial Odometry).
VIO research is relatively mature, but its algorithmic framework involves Kalman filtering, pre-integration, the Gauss-Newton method, and so on, so the algorithm is highly complex and resource-intensive.
Disclosure of Invention
The invention aims to provide a label-assisted Unmanned Aerial Vehicle (UAV) visual navigation method for bridge detection, which improves navigation precision, reduces calculated amount and improves large-span bridge detection efficiency.
The invention adopts the following technical scheme: a tag-assisted bridge detection unmanned aerial vehicle visual navigation method comprises the following steps:
a plurality of positioning two-dimensional code labels are distributed on the bridge to be detected at intervals along the flight path of the unmanned aerial vehicle;
continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label;
determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination;
combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; wherein, the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera;
and replacing the second position information of the unmanned aerial vehicle obtained by the VIO with the first position information, and continuing to perform unmanned aerial vehicle navigation.
Further, before the unmanned aerial vehicle takes off, coordinate values corresponding to the plurality of positioning two-dimensional code labels are combined and stored in storage equipment of the unmanned aerial vehicle;
and the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label.
Further, determining a first conversion relation matrix between the two-dimensional code tag and the camera according to the coordinate value combination comprises:
the coordinate value combination is brought into a preset relational expression, and eight equations can be obtained;
the relation is as follows:
\[
\begin{cases}
u = f f_x \dfrac{r_{11}x_w + r_{12}y_w + r_{13}z_w + t_1}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + u_0 \\[2ex]
v = f f_y \dfrac{r_{21}x_w + r_{22}y_w + r_{23}z_w + t_2}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + v_0
\end{cases}
\]

wherein f is the focal length of the camera; f_x = 1/d_x, where d_x is the real physical size of a unit pixel along the u axis of the image coordinate system; f_y = 1/d_y, where d_y is the real physical size of a unit pixel along the v axis of the image coordinate system; R_{3×3} is the rotation matrix from the world coordinate system to the camera coordinate system,

\[
R_{3\times 3} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix};
\]

(u_0, v_0) are the coordinates of the image center point in the image coordinate system; (u, v) are the coordinates of a point in the pixel coordinate system; (x_w, y_w, z_w) are the coordinates of a point in the world coordinate system; and T_{3×1} is the translation matrix from the world coordinate system to the camera coordinate system,

\[
T_{3\times 1} = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}.
\]

Solving the eight equations jointly yields the rotation matrix R_{3×3} and translation matrix T_{3×1}.

Combining the rotation matrix R_{3×3} and translation matrix T_{3×1} gives the first transformation relation matrix

\[
T_1 = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix},
\]

wherein T_1 is the first transformation relation matrix.
Further, the relation is obtained by the following method:
determining the conversion relation from the image coordinate system to the pixel coordinate system,

\[
u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0,
\]

and the corresponding transformation matrix

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\]

wherein (x, y) are the coordinates of a point in the image coordinate system;

determining the transformation matrix from the camera coordinate system to the image coordinate system,

\[
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix},
\]

wherein (x_c, y_c, z_c) are the coordinates of a point in the camera coordinate system and Z_c is a scale transformation factor;

determining the transformation matrix from the world coordinate system to the camera coordinate system,

\[
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} =
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]

determining the transformation matrix from the world coordinate system to the pixel coordinate system,

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]
And expanding and simplifying a conversion matrix from the world coordinate system to the pixel coordinate system to obtain a relational expression.
Further, expanding and simplifying the transformation matrix from the world coordinate system to the pixel coordinate system comprises:
expanding the conversion matrix from the world coordinate system to the pixel coordinate system to obtain:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f f_x & 0 & u_0 & 0 \\ 0 & f f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]

simplifying the expanded conversion matrix (eliminating the scale factor Z_c = r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3) to obtain:

\[
\begin{cases}
u = f f_x \dfrac{r_{11}x_w + r_{12}y_w + r_{13}z_w + t_1}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + u_0 \\[2ex]
v = f f_y \dfrac{r_{21}x_w + r_{22}y_w + r_{23}z_w + t_2}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + v_0
\end{cases}
\]
furthermore, the distance between two adjacent positioning two-dimensional code labels is 10-20 m.
Furthermore, the positioning two-dimensional code label is rectangular, with an area of 0.1-0.3 m².
The invention has the following beneficial effects: for the specific scenario of long-span bridge inspection, the invention provides a navigation method for a bridge-inspection unmanned aerial vehicle that corrects VIO drift error with label assistance. Labels are arranged in the bridge inspection area according to a specified method; within the label-observable area, fusing VIO-computed data with label-computed data in real time eliminates the drift error generated by the VIO algorithm, thereby improving the accuracy of the position data, improving positioning precision, and reducing the workload of back-end optimization (error processing).
Drawings
Fig. 1 is a schematic flow diagram of a tag-assisted bridge detection unmanned aerial vehicle visual navigation method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a transformation relationship between coordinate systems according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a two-dimensional positioning code label and coordinates of its corner points in the embodiment of the present invention;
fig. 4 is a schematic view of an application scenario according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
The embodiment of the invention discloses a label-assisted bridge detection unmanned aerial vehicle visual navigation method, which comprises the following steps of:
s10, arranging a plurality of positioning two-dimensional code labels on a bridge to be detected at intervals along the flight path of the unmanned aerial vehicle; s20, continuously observing the surface of the bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label; s30, determining a first conversion relation matrix between the two-dimensional code label and the camera according to the coordinate value combination; s40, combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera; and S50, replacing the second position information of the unmanned aerial vehicle obtained through the VIO with the first position information, and continuing to perform unmanned aerial vehicle navigation.
The method is a label-assisted navigation method for a bridge-inspection unmanned aerial vehicle that corrects VIO drift error in a specific scenario, namely long-span bridge inspection. Labels are arranged in the bridge inspection area according to a specified method; in the label-observable area, fusing VIO-computed data with label-computed data in real time eliminates the drift error generated by the VIO algorithm, thereby improving the accuracy of the position data, improving positioning precision, and reducing the workload of back-end optimization (error processing).
As shown in fig. 4, the method of this embodiment is mainly applied to inspecting inaccessible areas of a bridge: the unmanned aerial vehicle flies along the length direction of the bridge, acquires image information of the bridge through its onboard photographic or video equipment, and the state of the bridge is analyzed from that image information at a later stage.
In the embodiment of the present invention, a mathematical model of the VIO method must first be established, comprising a motion equation and an observation equation. Specifically, the pose of the autonomous mobile unmanned aerial vehicle is denoted by x; time is discretized, and the pose at each moment is recorded as x_1, x_2, …, x_k. The pose can then be expressed as x_k = [p_k, v_k, q_k, a_k, b_k], wherein p_k is the position of the autonomous mobile unmanned aerial vehicle, v_k represents its speed, q_k represents its attitude, a_k represents the acceleration, and b_k represents the bias of the gyroscope.
The connected positions and attitudes at all times form the trajectory of the autonomous mobile unmanned aerial vehicle, which can be expressed by the motion equation x_{k+1} = f(x_k, o, w_k), wherein o is the reading of the sensors (visual and inertial) during the motion of the unmanned aerial vehicle and w_k is the environmental noise.
Assuming there are n landmark points in the map, denoted y_1, y_2, …, y_n, the observation equation is z_{kj} = h(y_j, x_k, V_{kj}), wherein V_{kj} is the observation noise and z_{kj} is the observed data.
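The motion model above can be sketched in a few lines of Python. This is an illustrative sketch only: the names and the constant-velocity propagation are our assumptions, whereas a real VIO pipeline propagates the full state x_k = [p_k, v_k, q_k, a_k, b_k] with IMU pre-integration.

```python
import numpy as np

# Minimal stand-in for the VIO state x_k = [p_k, v_k, q_k, a_k, b_k]:
# position, velocity, attitude quaternion, acceleration term, gyro bias.
def make_state(p, v, q, a, b):
    return {"p": np.asarray(p, float), "v": np.asarray(v, float),
            "q": np.asarray(q, float), "a": np.asarray(a, float),
            "b": np.asarray(b, float)}

def propagate(x, dt):
    """Schematic motion equation x_{k+1} = f(x_k, o, w_k), with the sensor
    reading o and noise w_k omitted: a pure constant-velocity step."""
    return {**x, "p": x["p"] + x["v"] * dt}
```

Chaining `propagate` over successive time steps yields the dead-reckoned trajectory whose accumulated drift the labels later correct.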
After the mathematical model of the VIO method is established, the construction of each coordinate system and the transformations between them must also be explained. Define the pixel coordinate system o-uv, the image coordinate system o-xy, the camera coordinate system o-x_c y_c z_c, and the world coordinate system o-x_w y_w z_w, and complete the derivation of the coordinate transformation relations.
The pixel coordinate system is a two-dimensional rectangular coordinate system whose origin is at the upper-left corner of the image and whose two axes are parallel to the two perpendicular edges of the image. The origin of the image coordinate system is the intersection of the camera's optical axis with the image plane, i.e. the center of the image, and its two axes are parallel to the two axes of the pixel coordinate system. The camera coordinate system is a three-dimensional rectangular coordinate system whose origin is at the optical center of the lens; its x and y axes are parallel to two edges of the image plane, and its z axis is the optical axis of the lens, perpendicular to the image plane. The world coordinate system is the spatial coordinate system describing real positions in space.
The coordinate transformations between these coordinate systems are derived as follows.
First, determine the conversion relation from the image coordinate system to the pixel coordinate system,

\[
u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0,
\]

and the corresponding transformation matrix

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\]

wherein (x, y) are the coordinates of a point in the image coordinate system.

Secondly, determine the transformation matrix from the camera coordinate system to the image coordinate system according to the pinhole imaging principle combined with the principle of similar triangles,

\[
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix},
\]

wherein (x_c, y_c, z_c) are the coordinates of a point in the camera coordinate system and Z_c is a scale factor.

Thirdly, determine the transformation matrix from the world coordinate system to the camera coordinate system,

\[
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} =
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}.
\]

Through these three transformation matrices, the transformation matrix from the world coordinate system to the pixel coordinate system can be derived:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}.
\]

Expanding the above formula and simplifying the conversion matrix from the world coordinate system to the pixel coordinate system yields the relational expression. Specifically:

expanding the conversion matrix from the world coordinate system to the pixel coordinate system gives:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f f_x & 0 & u_0 & 0 \\ 0 & f f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]

simplifying the expanded conversion matrix (eliminating Z_c = r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3) gives:

\[
\begin{cases}
u = f f_x \dfrac{r_{11}x_w + r_{12}y_w + r_{13}z_w + t_1}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + u_0 \\[2ex]
v = f f_y \dfrac{r_{21}x_w + r_{22}y_w + r_{23}z_w + t_2}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + v_0
\end{cases}
\]

wherein f is the focal length of the camera; f_x = 1/d_x, where d_x is the real physical size of a unit pixel along the u axis of the image coordinate system; f_y = 1/d_y, where d_y is the real physical size of a unit pixel along the v axis of the image coordinate system; R_{3×3} is the rotation matrix from the world coordinate system to the camera coordinate system,

\[
R_{3\times 3} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix};
\]

(u_0, v_0) are the coordinates of the image center point in the image coordinate system; (u, v) are the coordinates of a point in the pixel coordinate system; (x_w, y_w, z_w) are the coordinates of a point in the world coordinate system; and T_{3×1} is the translation matrix from the world coordinate system to the camera coordinate system,

\[
T_{3\times 1} = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}.
\]
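The world-to-pixel chain above reduces to a few lines of numpy. A minimal sketch, assuming a camera intrinsic matrix K = [[f·f_x, 0, u_0], [0, f·f_y, v_0], [0, 0, 1]] and extrinsics (R, T) as defined in the text; the function name is ours.

```python
import numpy as np

def world_to_pixel(p_w, K, R, T):
    """Pinhole projection: world point -> camera frame -> pixel coordinates.
    Returns the pixel (u, v) and the depth Z_c used as the scale factor."""
    p_c = R @ np.asarray(p_w, float) + T      # world -> camera: p_c = R p_w + T
    u = K[0, 0] * p_c[0] / p_c[2] + K[0, 2]   # u = f*f_x * x_c / z_c + u_0
    v = K[1, 1] * p_c[1] / p_c[2] + K[1, 2]   # v = f*f_y * y_c / z_c + v_0
    return np.array([u, v]), p_c[2]
```

With R = I, T = (0, 0, 5), f·f_x = f·f_y = 500 and (u_0, v_0) = (320, 240), the world point (1, 2, 0) projects to pixel (420, 440) at depth Z_c = 5.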
in the embodiment of the invention, the positioning two-dimensional code label (namely, the label) has the following characteristics:
the label needs to have definite shape characteristics, and the shape of the label can be selected to be square or rectangular in the embodiment of the invention; the color can be distinguished from that of the bridge, and red, yellow, green and the like can be selected; for convenient identification, the size of the label is not too small, and for cost control, the size of the label is selected to be 0.1-0.3 m 2 . The number of labels is determined according to the detection length, and is generally l/s (l is the detection length, s is the correction distance, and s is generally 10-20 meters). The label adopts a two-dimensional code format, records coordinates of four vertexes of the label corresponding to the accurate installation position, and can be identified through two-dimensional code identification software.
In addition, before the unmanned aerial vehicle takes off, the coordinate value combinations corresponding to the plurality of positioning two-dimensional code tags need to be stored in the storage device of the unmanned aerial vehicle, so that the corresponding coordinate value information can be conveniently acquired after the tags are identified. And the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label.
In the embodiment of the invention, the label material needs to meet the characteristics of high temperature resistance, corrosion resistance, moisture resistance, weather resistance and the like, and is suitable for the special environment of the bottom of a bridge.
As for the installation method, the labels may be placed at positions on the bridge that are convenient for installation, one label every distance s.
In summary, the coordinate value combinations of the labels can be known. Then, after the tag is detected, a first conversion relation matrix between the two-dimensional code tag and the camera needs to be determined according to the coordinate value combination. The method comprises the following specific steps:
Substituting the coordinate value combination ((x_{wi}, y_{wi}, z_{wi}), i = 1, 2, 3, 4) into the preset relational expression gives:

\[
\begin{cases}
u_i = f f_x \dfrac{r_{11}x_{wi} + r_{12}y_{wi} + r_{13}z_{wi} + t_1}{r_{31}x_{wi} + r_{32}y_{wi} + r_{33}z_{wi} + t_3} + u_0 \\[2ex]
v_i = f f_y \dfrac{r_{21}x_{wi} + r_{22}y_{wi} + r_{23}z_{wi} + t_2}{r_{31}x_{wi} + r_{32}y_{wi} + r_{33}z_{wi} + t_3} + v_0
\end{cases}
\qquad i = 1, 2, 3, 4.
\]

Because the shape and size of the label are known, i.e. the positions of all corner points of the label are known, four constraint conditions, i.e. eight equations, are obtained. Solving the eight equations jointly yields the rotation matrix R_{3×3} and translation matrix T_{3×1}; combining the rotation matrix R_{3×3} and translation matrix T_{3×1} gives the first transformation relation matrix

\[
T_1 = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix},
\]

wherein T_1 is the first transformation relation matrix.
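The patent solves the eight corner equations jointly for R_{3×3} and T_{3×1} but does not name a solver. One standard way to do this for a planar tag, whose four corners give exactly four correspondences, is homography estimation followed by decomposition; the sketch below (function names ours; noise-free case; tag corners given in the tag plane z = 0) uses plain numpy, and libraries such as OpenCV's solvePnP solve the same problem.

```python
import numpy as np

def homography_dlt(obj_xy, img_uv):
    """Direct linear transform: homography H mapping tag-plane (z = 0)
    coordinates to pixels, from four point correspondences."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)          # null vector of A, up to scale

def pose_from_tag(obj_xy, img_uv, K):
    """Recover (R, T) from H = K [r1 r2 T] (planar-tag pinhole model)."""
    B = np.linalg.inv(K) @ homography_dlt(obj_xy, img_uv)
    if B[2, 2] < 0:                      # tag must lie in front of the camera
        B = -B
    lam = 1.0 / np.linalg.norm(B[:, 0])  # fix the homography scale
    r1, r2, T = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)          # re-project onto a valid rotation
    return U @ Vt, T
```

Because the tag is square and its corner coordinates are stored in the two-dimensional code, the four object points are known exactly, which is what makes this pose solvable from a single observation.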
According to the installation position of the camera on the unmanned aerial vehicle gimbal, the relative relation between the camera position and the unmanned aerial vehicle position can be obtained, recorded as T_2. The coordinate transformation from the label position to the unmanned aerial vehicle position can therefore be expressed as the matrix product T_1 · T_2. Since all the position information of the labels is known, the position can be resolved from this relation.
For a long-span bridge, keeping a label continuously observable is difficult, so this embodiment assumes that label observation is discontinuous. Suppose that at moment k the image sensor of the bridge-inspection unmanned aerial vehicle fully records a label, with all four corners visible, and the two-dimensional code on the label can be recognized. The unmanned aerial vehicle then acquires, through its two-dimensional code recognition module, the coordinate information of the four label corners stored in advance in the memory module, and resolves the position information according to the method above. Record the position resolved from the label as p_k, and the known VIO-based positioning result as p'_k. Substituting p_k for p'_k achieves real-time correction of the VIO navigation position error by the label.
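The substitution of p_k for p'_k can be sketched in one short loop. Illustrative only (1-D positions, names ours): VIO dead-reckons between labels, and whenever a label is fully observed at step k the absolute tag-derived fix replaces the drifting VIO estimate.

```python
def corrected_track(vio_deltas, tag_fixes, p0=0.0):
    """vio_deltas: per-step displacements estimated by VIO (drift included).
    tag_fixes: {step index k: absolute position p_k resolved from a label}.
    Returns the position history with p'_k replaced by p_k at label steps."""
    p, track = p0, []
    for k, d in enumerate(vio_deltas):
        p += d                    # VIO estimate p'_k
        if k in tag_fixes:
            p = tag_fixes[k]      # tag-derived p_k replaces p'_k
        track.append(p)
    return track
```

Drift accumulated before a label is discarded at that label, so the error after each label restarts from zero instead of compounding, which is what reduces the back-end optimization workload.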
In summary, the invention provides a label-assisted error correction method: labels are placed under the bridge in advance, and at the moment a label is observable the unmanned aerial vehicle can obtain the accurate position information of that label. Because this position information is accurate and free of accumulated error, it can correct the position error in positioning, improve positioning accuracy, and facilitate long-duration positioning.
The method is low in cost, well suited to the bridge inspection scenario, and effective in correcting the drift error of the VIO method. In addition, because the labels correct the VIO drift error, the computational load of the original VIO back-end optimization is reduced, which lowers the operating cost, saves hardware space, and can reduce the payload of the bridge-inspection unmanned aerial vehicle.

Claims (5)

1. A tag-assisted-based bridge detection unmanned aerial vehicle visual navigation method is characterized by comprising the following steps:
arranging a plurality of positioning two-dimensional code labels on the bridge to be detected at intervals along the flight path of the unmanned aerial vehicle;
before the unmanned aerial vehicle takes off, combining and storing coordinate values corresponding to the positioning two-dimensional code labels into storage equipment of the unmanned aerial vehicle; the coordinate value combination consists of coordinate values of four corner points of the positioning two-dimensional code label;
continuously observing the surface of a bridge to be detected through a camera on the unmanned aerial vehicle in the flying process of the unmanned aerial vehicle, and inquiring a corresponding coordinate value combination according to the positioning two-dimensional code label when the camera observes the positioning two-dimensional code label;
determining a first conversion relation matrix between the positioning two-dimensional code label and the camera according to the coordinate value combination;
combining the first conversion relation matrix, the second conversion relation matrix and the coordinate value combination to calculate first position information of the unmanned aerial vehicle; wherein the second conversion relation matrix is a conversion relation matrix between the unmanned aerial vehicle and the camera;
replacing second position information of the unmanned aerial vehicle obtained through VIO with the first position information, and continuing to carry out unmanned aerial vehicle navigation;
determining a first conversion relation matrix between the positioning two-dimensional code tag and the camera according to the coordinate value combination comprises:
bringing the coordinate value combination into a preset relational expression to obtain eight equations;
the relational expression is as follows:
\[
\begin{cases}
u = f f_x \dfrac{r_{11}x_w + r_{12}y_w + r_{13}z_w + t_1}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + u_0 \\[2ex]
v = f f_y \dfrac{r_{21}x_w + r_{22}y_w + r_{23}z_w + t_2}{r_{31}x_w + r_{32}y_w + r_{33}z_w + t_3} + v_0
\end{cases}
\]

wherein f is the focal length of the camera; f_x = 1/d_x, where d_x is the real physical size of a unit pixel along the u axis of the image coordinate system; f_y = 1/d_y, where d_y is the real physical size of a unit pixel along the v axis of the image coordinate system; R_{3×3} is the rotation matrix from the world coordinate system to the camera coordinate system,

\[
R_{3\times 3} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix};
\]

(u_0, v_0) are the coordinates of the image center point in the image coordinate system; (u, v) are the coordinates of a point in the pixel coordinate system; (x_w, y_w, z_w) are the coordinates of a point in the world coordinate system; and T_{3×1} is the translation matrix from the world coordinate system to the camera coordinate system,

\[
T_{3\times 1} = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix};
\]

solving the eight equations jointly to obtain the rotation matrix R_{3×3} and translation matrix T_{3×1};

combining the rotation matrix R_{3×3} and the translation matrix T_{3×1} to obtain the first transformation relation matrix

\[
T_1 = \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix},
\]

wherein T_1 is the first transformation relation matrix.
2. The label-assisted-based visual navigation method for bridge inspection unmanned aerial vehicle of claim 1, wherein the relation is obtained by:
determining the conversion relation from the image coordinate system to the pixel coordinate system,

\[
u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0,
\]

and the transformation matrix

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix},
\]

wherein (x, y) are the coordinates of a point in the image coordinate system;

determining the transformation matrix from the camera coordinate system to the image coordinate system,

\[
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix},
\]

wherein (x_c, y_c, z_c) are the coordinates of a point in the camera coordinate system and Z_c is a scale transformation factor;

determining the transformation matrix from the world coordinate system to the camera coordinate system,

\[
\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} =
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]

determining the transformation matrix from the world coordinate system to the pixel coordinate system,

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_{3\times 3} & T_{3\times 1} \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix};
\]
And expanding and simplifying a conversion matrix from the world coordinate system to the pixel coordinate system to obtain the relational expression.
3. The label-assisted visual navigation method for a bridge inspection unmanned aerial vehicle according to claim 2, wherein expanding and simplifying the transformation matrix from the world coordinate system to the pixel coordinate system comprises:

expanding the transformation matrix from the world coordinate system to the pixel coordinate system to obtain:

Zc·[u, v, 1]^T = [[f/dx, 0, u0, 0], [0, f/dy, v0, 0], [0, 0, 1, 0]] · [[R3×3, T3×1], [01×3, 1]] · [xw, yw, zw, 1]^T

simplifying the expanded transformation matrix from the world coordinate system to the pixel coordinate system to obtain:

Zc·[u, v, 1]^T = [[fx, 0, u0, 0], [0, fy, v0, 0], [0, 0, 1, 0]] · [[R3×3, T3×1], [01×3, 1]] · [xw, yw, zw, 1]^T

wherein fx = f/dx and fy = f/dy.
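The simplification in claim 3 just folds the focal length into the pixel-scale terms (fx = f/dx, fy = f/dy). This can be verified directly: the product of the 3×3 image-to-pixel matrix and the 3×4 projection matrix equals the single simplified intrinsic matrix. The numbers below are hypothetical, chosen only to make the check concrete.

```python
import numpy as np

f, dx, dy = 0.008, 1e-5, 1e-5     # hypothetical values
u0, v0 = 320.0, 240.0
fx, fy = f / dx, f / dy           # the simplified pixel-focal terms

K_px = np.array([[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]])
P = np.array([[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]])
# Simplified 3x4 intrinsic matrix from claim 3
K_simple = np.array([[fx, 0, u0, 0], [0, fy, v0, 0], [0, 0, 1, 0]])

same = np.allclose(K_px @ P, K_simple)   # the two forms agree
```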
4. The label-assisted visual navigation method for a bridge inspection unmanned aerial vehicle according to any one of claims 1 to 3, wherein the distance between two adjacent positioning two-dimensional code labels is 10-20 m.
5. The label-assisted visual navigation method for a bridge inspection unmanned aerial vehicle according to claim 4, wherein the positioning two-dimensional code label is rectangular and has an area of 0.1-0.3 m².
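The label size and spacing in claims 4 and 5 interact through apparent size on the image: with 10-20 m spacing the vehicle is at most roughly 10 m from the nearest label, and its on-image size in pixels scales as fx · side / distance. A back-of-envelope check, with a hypothetical fx of 800 pixels (not from the patent):

```python
def tag_pixels(side_m, dist_m, fx=800.0):
    """Approximate on-image side length (pixels) of a square label of
    physical side side_m viewed frontally at distance dist_m, for a
    camera with pixel focal length fx (hypothetical value)."""
    return fx * side_m / dist_m

# Smallest claimed label: 0.1 m^2 area -> side of about 0.32 m.
side = 0.1 ** 0.5
px = tag_pixels(side, 10.0)   # roughly 25 pixels at 10 m
```

At ~25 pixels across, a two-dimensional code is near the lower end of reliable detection, which is consistent with the claimed label sizes not being made much smaller.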
CN202110767675.9A 2021-07-07 2021-07-07 Label-assisted bridge detection unmanned aerial vehicle visual navigation method Active CN113403942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110767675.9A CN113403942B (en) 2021-07-07 2021-07-07 Label-assisted bridge detection unmanned aerial vehicle visual navigation method


Publications (2)

Publication Number Publication Date
CN113403942A CN113403942A (en) 2021-09-17
CN113403942B true CN113403942B (en) 2022-11-15

Family

ID=77685421


Country Status (1)

Country Link
CN (1) CN113403942B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237262B (en) * 2021-12-24 2024-01-19 陕西欧卡电子智能科技有限公司 Automatic berthing method and system for unmanned ship on water surface

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3397170B2 (en) * 1999-05-27 2003-04-14 株式会社デンソー Information code recording position detecting device and optical information reading device
KR20160022065A (en) * 2014-08-19 2016-02-29 한국과학기술원 System for Inspecting Inside of Bridge
CN106556341B (en) * 2016-10-08 2019-12-03 浙江国自机器人技术有限公司 A kind of shelf pose deviation detecting method and system based on characteristic information figure
CN106570820B (en) * 2016-10-18 2019-12-03 浙江工业大学 A kind of monocular vision three-dimensional feature extracting method based on quadrotor drone
CN106645205A (en) * 2017-02-24 2017-05-10 武汉大学 Unmanned aerial vehicle bridge bottom surface crack detection method and system
CN106969766A (en) * 2017-03-21 2017-07-21 北京品创智能科技有限公司 A kind of indoor autonomous navigation method based on monocular vision and Quick Response Code road sign
CN109060281B (en) * 2018-09-18 2022-01-18 山东理工大学 Integrated bridge detection system based on unmanned aerial vehicle
CN110533718A (en) * 2019-08-06 2019-12-03 杭州电子科技大学 A kind of navigation locating method of the auxiliary INS of monocular vision artificial landmark
CN110705433A (en) * 2019-09-26 2020-01-17 杭州鲁尔物联科技有限公司 Bridge deformation monitoring method, device and equipment based on visual perception
CN112581795B (en) * 2020-12-16 2022-04-29 东南大学 Video-based real-time early warning method and system for ship bridge and ship-to-ship collision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant