CN114610047A - QMM-MPC underwater robot vision docking control method with online depth estimation - Google Patents

QMM-MPC underwater robot vision docking control method with online depth estimation

Info

Publication number
CN114610047A
CN114610047A (application CN202210226486.5A)
Authority
CN
China
Prior art keywords
auv
docking
mpc
underwater robot
qmm
Prior art date
Legal status
Granted
Application number
CN202210226486.5A
Other languages
Chinese (zh)
Other versions
CN114610047B (en)
Inventor
岳伟 (Yue Wei)
季嘉诚 (Ji Jiacheng)
邹存名 (Zou Cunming)
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202210226486.5A
Publication of CN114610047A
Application granted
Publication of CN114610047B
Status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 Control of altitude or depth
    • G05D1/06 Rate of change of altitude or depth
    • G05D1/0692 Rate of change of altitude or depth specially adapted for under-water vehicles

Abstract

The invention provides a QMM-MPC underwater robot vision docking control method with online depth estimation, belonging to the field of underwater robot vision docking, and comprising the following steps: establishing a six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information; on the basis of this model, establishing an online depth estimator that estimates the real-time depth between the underwater robot and the docking station from the motion measurements of the AUV and the docking-station feature points on the image plane; and establishing an objective function and an optimization problem based on the model and the control target. Using the real-time depth information between the underwater robot and the docking station together with the improved QMM-MPC algorithm, the invention obtains an underwater robot vision docking control system that realizes docking of the underwater robot with the docking station, greatly reduces the computational burden, and is well suited to tasks with demanding real-time requirements, such as AUV underwater visual docking.

Description

QMM-MPC underwater robot vision docking control method with online depth estimation
Technical Field
The invention relates to the field of underwater robot vision docking, and in particular to a QMM-MPC underwater robot vision docking control method with online depth estimation.
Background
Over the past decades, autonomous underwater vehicles (AUVs) have been applied to bathymetric mapping, inspection, maintenance, mine exploration, environmental monitoring, intervention, and similar tasks, and have attracted growing attention from academic and industrial research teams owing to their potential to greatly improve operating efficiency in complex underwater environments. The defining characteristic of an AUV is its ability to cope autonomously with changes during task execution and to maintain stable control performance without human operation. Given the complex dynamics of the AUV, the complexity of its tasks, and the environmental risks involved, AUV control in complex environments has become one of the current research hotspots, and in recent years autonomous recovery of the AUV by an underwater recovery platform has become an important research direction. Various underwater recovery docking systems have been designed at home and abroad according to the characteristics of the AUV and the type of recovery platform; in any docking scheme, however, the high precision required in the final close-range docking operation remains a difficulty.
Most current research relies on visual servoing to estimate the pose of the AUV and complete the docking task. The underwater environment, however, is complex, visibility is low, and the depth information of a vision camera is severely affected by light refraction, absorption, and scattering. To address this, researchers have proposed image-based visual servoing, which depends entirely on the motion of features on the image plane; but this approach merely fixes the depth information at a constant value, so the problem of depth uncertainty is not actually solved and the underwater docking task can fail. In traditional image-based visual servoing the depth is mostly taken as a constant, and existing research shows that assuming constant depth leads to a high failure rate of the docking task. Moreover, because the traditional QMM-MPC controller designs only a single Lyapunov upper bound, it is too conservative for six-degree-of-freedom AUV underwater visual docking, a task with high precision and real-time requirements, and this too can cause the task to fail.
Disclosure of Invention
In view of the technical problem that the complex environment and uncertain depth of the existing AUV underwater docking task prevent the AUV from effectively executing the docking task, the invention provides the following technical solution:
a QMM-MPC underwater robot vision docking control method for online depth estimation comprises the following steps:
establishing a six-degree-of-freedom AUV visual servo model under a weak light working condition and under the uncertain influence of depth information;
on the basis of a six-degree-of-freedom AUV visual servo model, establishing an online depth estimator for estimating real-time depth information between the underwater robot and the docking station by combining the AUV and motion measurement data of the characteristic points of the docking station on an image plane, and merging the real-time depth estimated value into the model to obtain the AUV visual servo model with the online depth estimator;
establishing an objective function and an optimization problem according to a control target;
and designing a controller by combining an improved QMM-MPC algorithm, updating an objective function, establishing a total LMIS, solving to obtain a visual docking control system of the underwater robot, and docking the underwater robot and a docking station.
Further, establishing the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information comprises the following steps:
establishing a camera perspective model through the conversion relations among the world coordinate system, the AUV body coordinate system, the camera coordinate system, and the image-plane coordinate system;
and, based on the camera perspective model, establishing the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information.
Further, the expression of the AUV visual servo model with the online depth estimator is as follows:

e(k+1) = e(k) + T L(p(k)) v(k)

wherein:

e(k) = [e_1(k), …, e_i(k), …, e_N(k)]^T = [m_1e(k), n_1e(k), …, m_ie(k), n_ie(k), …, m_Ne(k), n_Ne(k)]^T

e_i(k) denotes the lateral difference m_ie(k) and the longitudinal difference n_ie(k) between the image-plane coordinates of the i-th feature point and its desired image-plane coordinates; T is the sampling time; v is the total velocity vector of the underwater robot, comprising the linear and angular velocities about each axis. The Jacobian matrix L_i(p(k)) of the i-th feature point in the visual servo error system is given as follows:

L_i(p(k)) = [ −f/Ẑ_i(k)   0   m_i(k)/Ẑ_i(k)   m_i(k)n_i(k)/f   −(f² + m_i(k)²)/f   n_i(k) ;
              0   −f/Ẑ_i(k)   n_i(k)/Ẑ_i(k)   (f² + n_i(k)²)/f   −m_i(k)n_i(k)/f   −m_i(k) ]

wherein f is the focal length of the camera, m_i(k) = m_ie(k) + m_i* and n_i(k) = n_ie(k) + n_i*, with (m_i*, n_i*) the desired coordinates of the i-th feature point on the image plane, and Ẑ_i(k) denotes the estimated depth value of the i-th feature point, with the expression:

Ẑ_i(k) = 1/θ̂_i(k),  θ̂_i(k) = (A^T A)^{−1} A^T b

wherein A = J_t v_1, b is the residual optical flow, and v_1 = [u, v, w]^T, v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV about the three coordinate axes, respectively. Finally, the estimated depth values are obtained by the least-squares solution.
Further, establishing the objective function and the optimization problem according to the control target comprises the following steps:
solving the convex polyhedron: from the resolution of the camera, the variation ranges of the image coordinates m_ie(k) and n_ie(k) can be determined as:
m_ie(k) ∈ [m_e,min, m_e,max],  n_ie(k) ∈ [n_e,min, n_e,max]
The variation boundaries are substituted into the Jacobian matrix L(p(k)), and L(p(k)) is decomposed into a convex combination of known vertex matrices, so that at any sampling time k, L(p(k)) lies in the convex polyhedron Ω spanned by the vertex matrices L_r (r = 1, 2, …, R), namely: L(p(k)) ∈ Ω = Co{L_1, L_2, …, L_R};
And the objective function is optimized using a min-max control strategy to obtain the optimization problem.
Further, the optimization problem is as follows:
min_{u(i|k), i≥0} max_{L(p(k+i))∈Ω} J(k)    (23)
wherein e(0|k) = e(k), J_1^∞(k) is the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively.
Further, for the vision-system modes formed by the different vertices of the convex polyhedron, a Lyapunov function is designed for each vertex; the specific process is as follows.
The Lyapunov function designed for each vertex of the convex body is:
V_r(e(i|k)) = e(i|k)^T P_r(k) e(i|k),  r = 1, …, R    (25)
wherein P_r(k) is the symmetric positive-definite matrix to be solved; for the closed-loop system to be asymptotically stable, V_r(e(i|k)) must satisfy:
V_r(e(i+1|k)) − V_r(e(i|k)) ≤ −[ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (26)
Summing the left-hand side of (26) from i = 1 to ∞ gives:
V_r(e(∞|k)) − V_r(e(1|k)) ≤ −Σ_{i=1}^{∞} [ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (27)
By the asymptotic stability of the closed-loop system, V_r(e(∞|k)) = 0, and simplifying the above yields the upper bound of J_1^∞(k) for the system modes formed by the different vertices:
J_1^∞(k) ≤ V_r(e(1|k)) ≤ γ    (28)
Substituting the state feedback controller u(i|k) = F(k)e(i|k) into the decreasing constraint (26) yields:
e(i|k)^T { (I + L(i|k)F(k))^T P(k) (I + L(i|k)F(k)) − P(k) + Q_e + F(k)^T Q_u F(k) } e(i|k) < 0    (29)
By the properties of linear systems, when every vertex of the polyhedron satisfies constraint (29), the system model at any future moment must satisfy constraint (29); (29) can be rewritten in the vertex form:
(I + L_r F(k))^T P_j(k) (I + L_r F(k)) − P_r(k) + Q_e + F(k)^T Q_u F(k) < 0,  r, j = 1, …, R    (30)
and the following performance requirement is attached:
V(0, k) = e(k|k)^T P(0|k) e(k|k) < γ    (31)
The following optimization problem is established according to (30) and (31), and the following linear matrix inequalities are designed:
min_{γ, Q_r, Y, G} γ    (32)
subject to the linear matrix inequalities (33) and (34) in the variables γ, Q_r, Q_j, Y, and G [equations (33)-(34) appear only as images in the source].
From (34), according to the Schur complement, Q_r > 0 and G + G^T > Q_r; the matrix (G − Q_r)^T Q_r^{−1} (G − Q_r) is nonnegative definite, therefore:
G^T Q_r^{−1} G ≥ G + G^T − Q_r    (35)
According to (35), formula (33) can be converted to the sufficient condition (36) [equation image not reproduced in the source].
Left-multiplying (36) by diag{Q_r G^{−T}, I, I} and right-multiplying it by diag{G^{−1} Q_j, I, I}, (33) is equivalent to the LMI (37) [equation image not reproduced in the source], wherein G = F^{−1} Y.
If the optimization problem (32) has a solution Q_r, r = 1, 2, …, R, and a pair of matrices Y, G, the upper bound γ of J_1^∞(k) is obtained; defining Q_m = min[Q_r], the upper bound of J_1^∞(k) is e(1|k)^T P_m(k) e(1|k), wherein P_m = Q_m/γ.
Further, the process of updating the objective function, establishing the total linear matrix inequality set, and solving to obtain the underwater robot vision docking control system is as follows:
From the current time k, e(0|k) = e(k) and u(0|k) = u(k), we obtain:
J(k) = e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + J_1^∞(k)    (38)
According to the kinetic-energy constraint, the camera field-of-view constraint, and the constraint conditions (30) and (38), the six-degree-of-freedom AUV visual servo quasi-min-max model predictive control problem is expressed as follows:
min_{u(k), F(k)} max_{L(p(k+i))∈Ω} J(k)    (39)
subject to the kinetic-energy constraint, the camera field-of-view constraint, and the decreasing constraint    (40)
The following LMIs are further obtained: the LMI forms H(k), C_r(k), and D_r(k) of the cost bound, the decreasing constraint, and the control-input constraint [equations (41)-(43), which appear only as images in the source].
In order to make the objective function easier to solve, the objective function of the optimization problem (39) is made to satisfy:
e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + e(1|k)^T P_m(k) e(1|k) ≤ γ    (44)
Substituting e(1|k) = e(k) + L(p(k))u(k) into (41) and applying the Schur complement yields the equivalent condition:
H(k) ≥ 0    (45)
Applying the Schur complement to the decreasing constraint and the control-input constraint in (40) yields the equivalent conditions:
C_r(k) ≥ 0,  D_r(k) ≥ 0    (46)
Combining (44)-(46), the optimization problem (39) can be rewritten as:
min_{γ, u(k), Q_r, Y, G} γ    (47)
subject to H(k) ≥ 0, C_r(k) ≥ 0, D_r(k) ≥ 0, and e_min − e(k) ≤ B(p(k))u(k) ≤ e_max − e(k)    (48)
wherein the last constraint in (48) is the visual visibility constraint. The optimization problem (39) is thus converted into a linear-matrix-inequality description that is more convenient to solve online. Assuming the problem is solvable, the optimal solution γ*, u(k)*, Q_m(k)*, and Y(k)* is obtained. The AUV visual servo controller is defined as u_mpc(k) = u(k)*, the terminal weight P(k) = γ*Q_m(k)*^{−1}, and F(k) = Y(k)*G(k)*^{−1}; the corresponding AUV closed-loop visual servo control system is as follows:
e(k+1) = e(k) + L(p(k)) u_mpc(k)    (49)
The invention provides a QMM-MPC underwater robot vision docking control method with online depth estimation, which has the following beneficial effects:
The invention adopts an image-based six-degree-of-freedom AUV visual servo model, so that the control depends entirely on the motion of the features on the image plane;
The online depth estimator designed by the invention depends only on the feature-extraction accuracy and is not affected by the mounting position of the camera or by the motion of the AUV;
The method docks the underwater robot with the docking station and guarantees accuracy by fusing the depth-estimation information; meanwhile, drawing on the min-max control strategy, a Lyapunov function is designed for each mode of the AUV visual servo parameter-dependent system model in the process of solving the upper bound, which reduces the conservatism of the controller;
The method converts the real-time optimization problem of AUV underwater docking into the online solution of linear matrix inequalities, which greatly reduces the computational burden and makes the method better suited to tasks with demanding real-time requirements, such as AUV underwater visual docking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a projection view of a visual docking system on an image plane;
FIG. 2 is a camera trajectory diagram with or without a depth estimator;
FIG. 3 shows (a) the image plane under the conventional QMM-MPC algorithm (without depth estimator) and (b) the image plane under the improved QMM-MPC algorithm of the present application;
FIG. 4 shows (a) the error-variation graph of the conventional QMM-MPC algorithm and (b) the error-variation graph of the improved QMM-MPC algorithm of the present application;
FIG. 5 shows (a) the velocity-planning graph of the conventional QMM-MPC algorithm and (b) the velocity-planning graph of the improved QMM-MPC algorithm of the present application.
Detailed Description
It should be noted that, in the case of conflict, the embodiments and features of the embodiments may be combined with each other, and the present invention will be described in detail with reference to the accompanying drawings in combination with the embodiments.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. Any specific values in all examples shown and discussed herein are to be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In the description of the present invention, it is to be understood that the orientations or positional relationships indicated by directional terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal" and "top, bottom" are generally based on the orientations or positional relationships shown in the drawings and are used only for convenience and simplicity of description; in the absence of any contrary indication, these directional terms do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore they should not be construed as limiting the scope of the present invention; the terms "inner" and "outer" refer to the inner and outer sides relative to the contour of the respective component itself.
Spatially relative terms, such as "above", "over", "on the surface of", "upper", and the like, may be used herein for ease of description to describe the spatial relationship of one device or feature to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and unless otherwise stated, the terms have no special meaning, and therefore, the scope of the present invention should not be construed as being limited.
An improved QMM-MPC control method with an online depth estimator comprises the following steps:
S1: establishing a six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information;
S2: on the basis of the six-degree-of-freedom AUV visual servo model, establishing an online depth estimator that estimates the real-time depth between the underwater robot and the docking station from the motion measurements of the AUV and the docking-station feature points on the image plane, and merging the real-time depth estimate into the model to obtain the AUV visual servo model with the online depth estimator;
S3: establishing an objective function and an optimization problem according to the control target;
S4: designing a controller with the improved QMM-MPC algorithm, with the aim of reducing the conservatism of the original QMM-MPC algorithm; updating the objective function, establishing the total LMIS, and solving to obtain the underwater robot vision docking control system, realizing docking of the underwater robot with the docking station.
FIG. 1 is a projection view of the visual docking system on the image plane;
Steps S1, S2, S3, and S4 are performed sequentially;
wherein the improved QMM-MPC algorithm refers to quasi-min-max model predictive control with multiple Lyapunov functions;
Further, establishing the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information comprises the following steps:
establishing a camera perspective model through the conversion relations among the world coordinate system, the AUV body coordinate system, the camera coordinate system, and the image-plane coordinate system. FIG. 2 is a camera trajectory diagram with and without the depth estimator; the comparison shows that although the algorithm without the depth estimator can also drive the AUV to the desired pose, its camera trajectory is not ideal: compared with the algorithm with the depth estimator, it produces many extra motions, which greatly wastes the AUV's energy.
Based on the camera perspective model, the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information is established.
Further, establishing the online depth estimator and merging the real-time depth estimate into the model to obtain the AUV visual servo model with the online depth estimator comprises the following steps:
designing the online depth estimator from the AUV and the motion measurements of the docking-station feature points on the image plane, and solving by least squares to obtain the depth estimate in real time;
discretizing the six-degree-of-freedom AUV visual servo model by Euler approximation to obtain the model in discrete time;
and merging the depth estimate into the discrete-time six-degree-of-freedom AUV visual servo model, thereby obtaining the AUV visual servo model with the online depth estimator, as illustrated by the sketch below.
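For concreteness, the Euler discretization step can be sketched as follows (an illustrative sketch, not part of the patent text; the array names and the sampling time T are assumptions):

import numpy as np

def euler_step(e: np.ndarray, L_hat: np.ndarray, v: np.ndarray, T: float) -> np.ndarray:
    """One Euler step of the continuous error dynamics de/dt = L(s, Z) v,
    giving the discrete model e(k+1) = e(k) + T * L_hat @ v of equation (16) below."""
    return e + T * (L_hat @ v)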
Further, an objective function and an optimization problem are established according to the control target:
solving the convex polyhedron: from the resolution of the camera, the variation ranges of the image coordinates m_ie(k) and n_ie(k) can be determined as:
m_ie(k) ∈ [m_e,min, m_e,max],  n_ie(k) ∈ [n_e,min, n_e,max]
The variation boundaries are substituted into the Jacobian matrix L(p(k)), which can be decomposed into a convex combination of known vertex matrices, so that at any sampling time k, L(p(k)) lies in the convex polyhedron Ω spanned by the vertex matrices L_r (r = 1, 2, …, R), namely: L(p(k)) ∈ Ω = Co{L_1, L_2, …, L_R};
The objective function is then established to obtain the optimization problem. The control goal is to drive the image-plane error to zero in a short time with minimal kinetic-energy input. Based on this control goal and further analysis, the visual feature parameter vector p(k) in the Jacobian matrix L(p(k)) of the visual servo error system is measurable at the current time k, but its value at future times cannot be predicted in advance; it can only be guaranteed that the kinetic-energy range lies in the convex polyhedron Ω. Therefore, objective functions are defined separately for the current time and for future times; since the future-time objective function carries parameter uncertainty, it must be optimized with a min-max control strategy, yielding the optimization problem.
Further, the improved QMM-MPC algorithm reduces the strong conservatism of the original QMM-MPC algorithm with respect to the system. To obtain the optimal performance index, the control performance in the worst case must be determined, that is, the upper bound of J_1^∞(k). In traditional QMM-MPC only one Lyapunov-function upper bound is designed, which is too conservative for AUV visual docking. The method instead considers the vision-system modes formed by the different vertices of the convex polyhedron, designs a Lyapunov function for each vertex, and finally solves for the optimal upper bound of the objective function.
Further, according to the obtained optimal upper bound of the objective function, the objective function is updated, the total LMIS is established, and the system parameters and control input are obtained by solving, yielding the underwater robot vision docking control system and realizing docking of the underwater robot with the docking station.
Further, based on the camera perspective model, the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information is established. Specifically:
Given the coordinates P_i = (X_i, Y_i, Z_i) of the i-th feature point in the world coordinate system {G}, the coordinate mapping relationship with its projection point p_i = (m_i, n_i) in the image coordinate system is as follows:
Z_i [m_i, n_i, 1]^T = M_I M_E [X_i, Y_i, Z_i, 1]^T    (1)
wherein Z_i denotes the depth of the i-th feature point, and M_I, M_E denote the intrinsic and extrinsic parameter matrices of the camera, respectively, defined as:
M_I = [ f/dx  0  m_0 ; 0  f/dy  n_0 ; 0  0  1 ]    (2)
M_E = [ R  T ]    (3)
wherein f represents the focal length of the camera, (m_0, n_0) represents the central pixel coordinates of the image plane, dx and dy are the sizes of each pixel along the x and y axes of the image physical coordinate system, and R, T represent the rotation matrix and translation vector from the camera coordinate system {C} to the world coordinate system {G}, respectively. The rotation matrix R is generated from the ZYX Euler angles and can be described by:
R = R_z(ψ) R_y(θ) R_x(φ)    (4)
wherein ψ, θ, and φ are the ZYX Euler angles: the camera first rotates about the Z axis by ψ, then about the Y axis by θ, and finally about the X axis by φ.
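For illustration only (the patent presents equation (4) as an image; this sketch assumes the standard ZYX composition described above), the rotation matrix can be assembled as:

import numpy as np

def rotation_zyx(psi: float, theta: float, phi: float) -> np.ndarray:
    """Rotation matrix R of equation (4): rotate by psi about Z, then theta about Y, then phi about X."""
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx  # ZYX order: Z first, then Y, then X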
Let the coordinates of the feature point in the world coordinate system be p_i = [X_i, Y_i, Z_i]^T ∈ R^3. Differentiating p_i gives:
ṗ_i = −v_1 − v_2 × p_i    (5)
wherein v_1 = [u, v, w]^T, v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV about the three coordinate axes, respectively;
From the perspective projection of the camera, the following is given:
m_i = f X_i / Z_i,  n_i = f Y_i / Z_i    (6)
wherein s_i = [m_i, n_i]^T are the current image coordinates of the i-th feature point on the image plane and s_i* = [m_i*, n_i*]^T are the desired image coordinates on the image plane. Taking the derivative of equation (6) and combining with equation (5), the visual servo model is obtained as follows:
ṡ_i = L_i(s_i, Z_i) v    (7)
wherein
L_i(s_i, Z_i) = [ −f/Z_i   0   m_i/Z_i   m_i n_i/f   −(f² + m_i²)/f   n_i ;
                  0   −f/Z_i   n_i/Z_i   (f² + n_i²)/f   −m_i n_i/f   −m_i ]
is the image Jacobian matrix between the camera velocity and the imaging rate; v = [v_1, v_2]^T represents the velocity vector of the AUV and Z_i the depth information of the i-th feature point. Considering that visual tracking requires N visual feature points, the current and desired image coordinates are defined as:
s = [s_1^T, …, s_N^T]^T,  s_d = [s_1*^T, …, s_N*^T]^T    (8)
The AUV visual servo model is obtained according to equation (7) as follows:
ṡ = L(s, Z) v    (9)
wherein Z = [Z_1, …, Z_i, …, Z_N]^T ∈ R^N is the depth vector corresponding to the feature points, and L(s, Z) represents the stacked Jacobian matrix:
L(s, Z) = [L_1^T(s_1, Z_1), …, L_N^T(s_N, Z_N)]^T    (10)
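As a hedged illustration of equations (7)-(10) (the per-feature matrix below is the classic point-feature interaction matrix from the visual-servoing literature; the patent's own equation images are unavailable, so the exact sign conventions are an assumption):

import numpy as np

def interaction_matrix(m: float, n: float, Z: float, f: float) -> np.ndarray:
    """2x6 image Jacobian L_i(s_i, Z_i) of one point feature (m, n) at depth Z;
    columns ordered as the velocity vector v = [u, v, w, p, q, r]."""
    return np.array([
        [-f / Z, 0.0, m / Z, m * n / f, -(f**2 + m**2) / f, n],
        [0.0, -f / Z, n / Z, (f**2 + n**2) / f, -m * n / f, -m],
    ])

def stacked_jacobian(feats, depths, f):
    """Stack the N per-feature blocks into the 2N x 6 matrix L(s, Z) of equation (10)."""
    return np.vstack([interaction_matrix(m, n, Z, f)
                      for (m, n), Z in zip(feats, depths)])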
further, an online depth estimator for estimating real-time depth information between the underwater robot and the docking station is established based on the six-degree-of-freedom AUV visual servo model and by combining motion measurement data of characteristic points of the AUV and the docking station on an image plane, and the AUV visual servo model with the online depth estimator is finally obtained. Specifically, the method comprises the following steps:
considering the visual servo model (9) of the ith feature point, it can be known that the first three columns in the jacobian matrix are related to the depth, and the equation (9) is rearranged to obtain:
Figure BDA0003539384690000125
wherein, JtAnd JωRespectively representing the influence of translational motion and rotational motion of the camera on the image feature vector, respectively expressed as follows:
Figure BDA0003539384690000131
the compact visual servoing system linear equation is obtained as follows:
Aθ=b (12)
wherein A ═ Jtv1,
Figure BDA0003539384690000132
b is the residual optical flow, i.e., the observed optical flow and the camera rotation, i.e., the optical flow differential that will be generated, solving (12) yields:
Figure BDA0003539384690000133
wherein the content of the first and second substances,
Figure BDA0003539384690000134
is a depth estimate, is a model predictive controllerDesigning depth estimation information providing a high approximation and a true depth value;
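A least-squares sketch of the estimator follows; it assumes theta = 1/Z_i and the J_t, J_omega blocks written above, so it illustrates equations (11)-(13) rather than the patent's exact implementation:

import numpy as np

def estimate_depth(m: float, n: float, f: float,
                   s_dot: np.ndarray, v1: np.ndarray, v2: np.ndarray) -> float:
    """Estimate the depth Z of one feature from its measured image velocity s_dot (2,),
    the AUV linear velocity v1 (3,) and angular velocity v2 (3,), via A * theta = b."""
    Jt = np.array([[-f, 0.0, m],
                   [0.0, -f, n]])                          # translational part (scaled by 1/Z)
    Jw = np.array([[m * n / f, -(f**2 + m**2) / f, n],
                   [(f**2 + n**2) / f, -m * n / f, -m]])   # rotational part
    A = (Jt @ v1).reshape(2, 1)                            # regressor for theta = 1/Z
    b = s_dot - Jw @ v2                                    # residual optical flow
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)          # theta_hat = (A^T A)^-1 A^T b
    return 1.0 / float(theta[0])                           # ill-posed if the AUV is not translating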
Considering the AUV visual servo model (9), the visual tracking error is further defined:
e = s − s_d    (14)
Differentiating equation (14) gives:
ė = L(s, Z) v    (15)
Considering the AUV visual servo error system (15), selecting T as the sampling time and applying Euler approximate discretization, the AUV visual servo model with the online depth estimator is obtained:
e(k+1) = e(k) + T L(p(k)) v(k)    (16)
wherein e(k) = [e_1(k), …, e_i(k), …, e_N(k)]^T = [m_1e(k), n_1e(k), …, m_ie(k), n_ie(k), …, m_Ne(k), n_Ne(k)]^T represents the lateral and longitudinal errors of each feature point on the image plane, and
L(p(k)) = [L_1^T(p(k)), …, L_N^T(p(k))]^T    (17)
wherein Ẑ(k) = [Ẑ_1(k), …, Ẑ_N(k)]^T is the vector of depth values of all the feature points estimated online; the Jacobian matrix of the i-th feature point can be expressed as follows:
L_i(p(k)) = [ −f/Ẑ_i(k)   0   m_i(k)/Ẑ_i(k)   m_i(k)n_i(k)/f   −(f² + m_i(k)²)/f   n_i(k) ;
              0   −f/Ẑ_i(k)   n_i(k)/Ẑ_i(k)   (f² + n_i(k)²)/f   −m_i(k)n_i(k)/f   −m_i(k) ]    (18)
wherein m_i(k) = m_ie(k) + m_i* and n_i(k) = n_ie(k) + n_i* are the current image coordinates recovered from the error and desired coordinates.
further, based on a six-degree-of-freedom AUV visual servo model and a control target, an objective function and an optimization problem are established:
as shown in the formula (18), the Jacobian matrix
Figure BDA00035393846900001311
Is about the variable mie(k) And nie(k) A function of, i.e.
Figure BDA00035393846900001312
With parameter vector p (k) ═ mie(k),nie(k)]Can be determined from the resolution of the cameraie(k) And nie(k) The variation ranges of (A) are respectively:
Figure BDA0003539384690000141
l (p (k)) can be decomposed into a convex combination of known vertex matrices such that at any sampling time k, L (p (k)) at vertex matrix Lr(R ═ 1,2, …, R) in a convex polyhedron Ω, namely:
L(p(k))∈Ω=Co{L1,L2,…,LR} (19)
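A sketch of the vertex enumeration (it reuses the stacked_jacobian helper from the earlier sketch, and assumes the vertices L_r are formed by taking each feature's image coordinates at their box extremes):

import itertools

def vertex_jacobians(bounds, depths, f):
    """Enumerate the vertex matrices L_r of the polytope Omega in equation (19):
    one vertex per combination of extreme image coordinates of the features.
    bounds: list of (m_min, m_max, n_min, n_max), one tuple per feature."""
    corners = [[(m, n) for m in (m_lo, m_hi) for n in (n_lo, n_hi)]
               for (m_lo, m_hi, n_lo, n_hi) in bounds]
    return [stacked_jacobian(combo, depths, f)   # 4^N vertices for N features
            for combo in itertools.product(*corners)]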
The error system (16) is considered to satisfy the following constraints:
u_min ≤ u(i|k) ≤ u_max    (20)
e_min ≤ e(i|k) ≤ e_max    (21)
Considering (18)-(21), the visual feature parameter vector p(k) in the Jacobian matrix L(p(k)) is measurable at the current time k, but its value at future times cannot be predicted in advance; it is only guaranteed that the kinetic-energy range lies in the convex polyhedron Ω. The predictive controller is therefore designed separately for the current time and for future times:
The objective function is defined as follows:
J(k) = Σ_{i=0}^{∞} [ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ] = e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + J_1^∞(k)    (22)
wherein u(0|k) = u(k) = [v_1, v_2], e(0|k) = e(k), J_1^∞(k) is the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively. Since the objective function at future times carries parameter uncertainty, it must be optimized with a min-max control strategy, which amounts to maximizing J_1^∞(k) over the uncertainty: the worst-case performance caused by the future-time uncertainty is taken as the upper bound, and minimizing this performance index then yields the optimal one. The optimization problem is thus obtained as follows:
min_{u(i|k), i≥0} max_{L(p(k+i))∈Ω} J(k)    (23)
wherein u(i|k), i = 1, 2, …, ∞, is the control input from time k+1 to infinity, e(0|k) = e(k), J_1^∞(k) is the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively.
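To make the inner "max" concrete, the following brute-force sketch evaluates a one-step surrogate of J(k) at every vertex model and keeps the worst value; it is only an illustration of the min-max idea, since the patent bounds the infinite tail with the per-vertex Lyapunov functions derived next:

import numpy as np

def worst_case_cost(e, u, L_vertices, Qe, Qu, T=1.0):
    """Worst one-step-ahead cost over the vertex models of Omega: a crude stand-in
    for the inner maximization in equation (23)."""
    worst = -np.inf
    for Lr in L_vertices:
        e_next = e + T * (Lr @ u)                # vertex-model prediction of e(1|k)
        cost = (e @ Qe @ e + u @ Qu @ u          # stage cost at time k
                + e_next @ Qe @ e_next)          # one-step surrogate of the tail
        worst = max(worst, float(cost))
    return worst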
Further, in order to stabilize the underwater robot vision docking control system, an improved quasi-min-max model predictive controller incorporating the depth-estimation information is designed in two main steps, as follows:
S41: In traditional QMM-MPC there is only one Lyapunov-function upper bound, which is too conservative for the AUV visual docking system, so the AUV cannot be assigned an appropriate velocity plan and the docking task fails. Here, for the vision-system modes formed by the different vertices of the convex polyhedron, a Lyapunov function is designed for each vertex, which reduces the strong conservatism of traditional QMM-MPC. The details are as follows (note that (i|k) denotes the prediction of time k+i made at time k):
For the above optimization problem (23), define the upper bound γ of J_1^∞(k) and the state feedback control law:
u(i|k) = F(k) e(i|k),  i = 1, 2, …, ∞    (24)
where F(k) is the feedback control gain.
The Lyapunov function V_r(e(i|k)) designed for each vertex is as follows:
V_r(e(i|k)) = e(i|k)^T P_r(k) e(i|k),  r = 1, …, R    (25)
wherein P_r(k) is the symmetric positive-definite matrix to be solved and R is the number of vertices. For the closed-loop system to be asymptotically stable, the Lyapunov function V_r(e(i|k)) must satisfy:
V_r(e(i+1|k)) − V_r(e(i|k)) ≤ −[ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (26)
Summing the left-hand side of (26) from i = 1 to ∞ gives:
V_r(e(∞|k)) − V_r(e(1|k)) ≤ −Σ_{i=1}^{∞} [ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (27)
Considering the asymptotic stability of the closed-loop system, V_r(e(∞|k)) = 0, and simplifying the above yields the upper bound of J_1^∞(k), the objective function value from time k+1 to infinity, for the system modes formed by the different vertices:
J_1^∞(k) ≤ V_r(e(1|k)) ≤ γ    (28)
Further, substituting the state feedback controller (24), u(i|k) = F(k)e(i|k), into the decreasing constraint (26) yields:
e(i|k)^T { (I + L(i|k)F(k))^T P(k) (I + L(i|k)F(k)) − P(k) + Q_e + F(k)^T Q_u F(k) } e(i|k) < 0    (29)
By the properties of linear systems, when every vertex of the polyhedron satisfies constraint (29), the system model at any future moment must satisfy constraint (29); (29) can be rewritten in the vertex form:
(I + L_r F(k))^T P_j(k) (I + L_r F(k)) − P_r(k) + Q_e + F(k)^T Q_u F(k) < 0,  r, j = 1, …, R    (30)
and the following performance requirement is attached:
V(0, k) = e(k|k)^T P(0|k) e(k|k) < γ    (31)
The following optimization problem is established according to (30) and (31), and the following linear matrix inequalities are designed:
min_{γ, Q_r, Y, G} γ    (32)
subject to the linear matrix inequalities (33) and (34) in the variables γ, Q_r, Q_j, Y, and G [equations (33)-(34) appear only as images in the source].
From (34), according to the Schur complement, Q_r > 0 and G + G^T > Q_r; further, the matrix (G − Q_r)^T Q_r^{−1} (G − Q_r) is nonnegative definite, therefore:
G^T Q_r^{−1} G ≥ G + G^T − Q_r    (35)
According to formula (35), formula (33) can be converted to:
Figure BDA0003539384690000164
pre-multiplying (36) by diag { QrG-TI, I }, right-handed diag { G }-1QjI, I }, and (33) may be equivalent to the following LMI:
Figure BDA0003539384690000165
wherein G ═ F-1Y,
Figure BDA0003539384690000166
If the optimization problem (32) has a solution Q_r, r = 1, 2, …, R, and a pair of matrices Y, G, the upper bound γ of J_1^∞(k) is obtained. Defining Q_m = min[Q_r], the upper bound of J_1^∞(k) is e(1|k)^T P_m(k) e(1|k), wherein P_m = Q_m/γ;
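Because the patent's LMIs (32)-(37) appear only as images, the sketch below solves the closest standard formulation instead: the extended robust-MPC LMIs with one Lyapunov matrix Q_r per vertex, a shared slack matrix G, and Y = F(k)G. It assumes cvxpy with an SDP-capable solver, and the gamma-scaling follows the standard formulation (Q_r = gamma * P_r^{-1}), which may differ from the patent's normalization:

import cvxpy as cp
import numpy as np

def solve_qmm_lmis(L_vertices, Qe, Qu, e1):
    """Minimize the cost bound gamma subject to per-vertex Lyapunov LMIs for the
    prediction model e(i+1|k) = e(i|k) + L_r u(i|k) with u = F e and Y = F G."""
    n, m = L_vertices[0].shape
    R = len(L_vertices)
    gamma = cp.Variable(nonneg=True)
    G = cp.Variable((n, n))
    Y = cp.Variable((m, n))
    Qs = [cp.Variable((n, n), symmetric=True) for _ in range(R)]
    Qe_h = np.linalg.cholesky(Qe).T              # Qe = Qe_h.T @ Qe_h
    Qu_h = np.linalg.cholesky(Qu).T
    cons = []
    for r, Lr in enumerate(L_vertices):
        Acl_G = G + Lr @ Y                       # (I + L_r F) G
        for j in range(R):                       # decrease must hold for every successor mode j
            M = cp.bmat([
                [G + G.T - Qs[r], Acl_G.T,          (Qe_h @ G).T,      (Qu_h @ Y).T],
                [Acl_G,           Qs[j],            np.zeros((n, n)),  np.zeros((n, m))],
                [Qe_h @ G,        np.zeros((n, n)), gamma * np.eye(n), np.zeros((n, m))],
                [Qu_h @ Y,        np.zeros((m, n)), np.zeros((m, n)),  gamma * np.eye(m)],
            ])
            cons.append((M + M.T) / 2 >> 0)      # M is symmetric by construction
        # cost bound e(1|k)^T P_r e(1|k) <= gamma with P_r = gamma * Q_r^{-1}
        N0 = cp.bmat([[np.eye(1), e1[None, :]],
                      [e1[:, None], Qs[r]]])
        cons.append((N0 + N0.T) / 2 >> 0)
    prob = cp.Problem(cp.Minimize(gamma), cons)
    prob.solve(solver=cp.SCS)
    return gamma.value, Y.value, G.value, [Q.value for Q in Qs]

The gain then follows as F = Y G^{-1} (the patent writes the relation as G = F^{-1} Y), and the smallest Q_r plays the role of Q_m in the terminal weight.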
S42: Update the objective function and establish the total linear matrix inequality set (LMIS), as follows:
Considering the current time k, e(0|k) = e(k), and u(0|k) = u(k), equation (22) gives:
J(k) = e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + J_1^∞(k)    (38)
According to (20), (21), (30), and (38), the AUV visual servo quasi-min-max model predictive control problem is expressed as follows:
min_{u(k), F(k)} max_{L(p(k+i))∈Ω} J(k)    (39)
subject to the kinetic-energy constraint, the camera field-of-view constraint, and the decreasing constraint    (40)
The following LMIs are further obtained: the LMI forms H(k), C_r(k), and D_r(k) of the cost bound, the decreasing constraint, and the control-input constraint [equations (41)-(43), which appear only as images in the source].
In order to make the objective function easier to solve, the objective function of the optimization problem (39) is made to satisfy:
e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + e(1|k)^T P_m(k) e(1|k) ≤ γ    (44)
Substituting e(1|k) = e(k) + L(p(k))u(k) into (41) and applying the Schur complement yields the equivalent condition:
H(k) ≥ 0    (45)
Applying the Schur complement to the decreasing constraint and the control-input constraint in (40) yields the equivalent conditions:
C_r(k) ≥ 0,  D_r(k) ≥ 0    (46)
Combining (44)-(46), the optimization problem (39) can be rewritten as:
min_{γ, u(k), Q_r, Y, G} γ    (47)
subject to H(k) ≥ 0, C_r(k) ≥ 0, D_r(k) ≥ 0, and e_min − e(k) ≤ B(p(k))u(k) ≤ e_max − e(k)    (48)
wherein the last constraint in (48) is the visual visibility constraint. The optimization problem (39) is thus converted into a linear-matrix-inequality description that is convenient to solve online. Assuming the problem is solvable, the optimal solution γ*, u(k)*, Q_m(k)*, and Y(k)* is obtained. The AUV visual servo controller is defined as u_mpc(k) = u(k)*, the terminal weight P(k) = γ*Q_m(k)*^{−1}, and F(k) = Y(k)*G(k)*^{−1}; the corresponding AUV closed-loop visual servo control system is as follows:
e(k+1) = e(k) + L(p(k)) u_mpc(k)    (49)
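Putting the pieces together, a receding-horizon loop of the closed-loop system (49) can be sketched as follows (scaffolding only: it reuses the earlier helper sketches, and the controller argument stands for whatever routine solves the QMM-MPC problem above):

import numpy as np

def docking_loop(e0, feats, depths, f, T, n_steps, controller):
    """Closed-loop sketch of equation (49): rebuild the Jacobian with the online
    depth estimates, compute u_mpc(k), apply it, and propagate the image error."""
    e = np.asarray(e0, dtype=float)
    for k in range(n_steps):
        L_hat = stacked_jacobian(feats, depths, f)   # Jacobian with estimated depths
        u_mpc = controller(e, L_hat)                 # e.g. built around solve_qmm_lmis
        e = e + T * (L_hat @ u_mpc)                  # error update, equations (16)/(49)
        # in a real system, feats and depths would be refreshed here from camera
        # measurements and estimate_depth(...) before the next iteration
    return e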
the above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
FIG. 3 shows (a) the image plane under the conventional QMM-MPC algorithm (without depth estimator) and (b) the image plane under the improved QMM-MPC algorithm of the present application. Comparing (a) and (b), the image trajectory of the conventional algorithm is more tortuous, exhibits severe retreat, and appears jagged, whereas the trajectory of the present application is smoother.
FIG. 4 shows (a) the error-variation graph of the conventional QMM-MPC algorithm and (b) the error-variation graph of the improved QMM-MPC algorithm of the present application. Comparing (a) and (b) shows that the present method achieves better control performance than the traditional algorithm in terms of convergence and response time.
FIG. 5 shows (a) the velocity-planning graph of the conventional QMM-MPC algorithm and (b) the velocity-planning graph of the improved QMM-MPC algorithm of the present application. Comparing (a) and (b), the velocity values of the present method are larger than those of the traditional algorithm while still satisfying the constraints, so the AUV reaches the desired position faster, greatly reducing the strong conservatism of the traditional algorithm.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A QMM-MPC underwater robot vision docking control method with online depth estimation, characterized by comprising the following steps:
establishing a six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information;
on the basis of the six-degree-of-freedom AUV visual servo model, establishing an online depth estimator that estimates the real-time depth between the underwater robot and the docking station from the motion measurements of the AUV and the docking-station feature points on the image plane, and merging the real-time depth estimate into the model to obtain an AUV visual servo model with an online depth estimator;
establishing an objective function and an optimization problem according to the control target;
and designing a controller with the improved QMM-MPC algorithm, updating the objective function, establishing the total LMIS, and solving to obtain the underwater robot vision docking control system, realizing docking of the underwater robot with the docking station.
2. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 1, wherein establishing the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information comprises the following steps:
establishing a camera perspective model through the conversion relations among the world coordinate system, the AUV body coordinate system, the camera coordinate system, and the image-plane coordinate system;
and, based on the camera perspective model, establishing the six-degree-of-freedom AUV visual servo model under low-light conditions and under the uncertain influence of depth information.
3. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 1, wherein the expression of the AUV visual servo model with the online depth estimator is as follows:

e(k+1) = e(k) + T L(p(k)) v(k)

wherein:

e(k) = [e_1(k), …, e_i(k), …, e_N(k)]^T = [m_1e(k), n_1e(k), …, m_Ne(k), n_Ne(k)]^T

e_i(k) denotes the lateral difference m_ie(k) and the longitudinal difference n_ie(k) between the image-plane coordinates of the i-th feature point and its desired image-plane coordinates; T is the sampling time; v is the total velocity vector of the underwater robot, comprising the linear and angular velocities about each axis; and the Jacobian matrix L_i(p(k)) of the i-th feature point in the visual servo error system is given as follows:

L_i(p(k)) = [ −f/Ẑ_i(k)   0   m_i(k)/Ẑ_i(k)   m_i(k)n_i(k)/f   −(f² + m_i(k)²)/f   n_i(k) ;
              0   −f/Ẑ_i(k)   n_i(k)/Ẑ_i(k)   (f² + n_i(k)²)/f   −m_i(k)n_i(k)/f   −m_i(k) ]

wherein f is the focal length of the camera, m_i(k) = m_ie(k) + m_i* and n_i(k) = n_ie(k) + n_i*, with (m_i*, n_i*) the desired coordinates of the i-th feature point on the image plane, and Ẑ_i(k) denotes the estimated depth value of the i-th feature point, with the expression:

Ẑ_i(k) = 1/θ̂_i(k),  θ̂_i(k) = (A^T A)^{−1} A^T b

wherein A = J_t v_1, b is the residual optical flow, and v_1 = [u, v, w]^T, v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV about the three coordinate axes, respectively; finally, the estimated depth values are obtained by the least-squares solution.
4. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 1, wherein establishing the objective function and the optimization problem according to the control target comprises the following steps:
solving the convex polyhedron: from the resolution of the camera, the variation ranges of the image coordinates m_ie(k) and n_ie(k) are determined as m_ie(k) ∈ [m_e,min, m_e,max] and n_ie(k) ∈ [n_e,min, n_e,max];
substituting the variation boundaries into the Jacobian matrix L(p(k)) and decomposing L(p(k)) into a convex combination of known vertex matrices, so that at any sampling time k, L(p(k)) lies in the convex polyhedron Ω spanned by the vertex matrices L_r (r = 1, 2, …, R), namely: L(p(k)) ∈ Ω = Co{L_1, L_2, …, L_R};
and optimizing the objective function using a min-max control strategy to obtain the optimization problem.
5. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 1, wherein the optimization problem is as follows:
min_{u(i|k), i≥0} max_{L(p(k+i))∈Ω} J(k)    (23)
wherein e(0|k) = e(k), J_1^∞(k) is the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively.
6. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 4, wherein, for the vision-system modes formed by the different vertices of the convex polyhedron, a Lyapunov function is designed for each vertex, as follows:
The Lyapunov function designed for each vertex of the convex body is:
V_r(e(i|k)) = e(i|k)^T P_r(k) e(i|k),  r = 1, …, R    (25)
wherein P_r(k) is the symmetric positive-definite matrix to be solved; for the closed-loop system to be asymptotically stable, V_r(e(i|k)) must satisfy:
V_r(e(i+1|k)) − V_r(e(i|k)) ≤ −[ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (26)
Summing the left-hand side of (26) from i = 1 to ∞ gives:
V_r(e(∞|k)) − V_r(e(1|k)) ≤ −Σ_{i=1}^{∞} [ e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k) ]    (27)
By the asymptotic stability of the closed-loop system, V_r(e(∞|k)) = 0, and simplifying the above yields the upper bound of J_1^∞(k) for the system modes formed by the different vertices:
J_1^∞(k) ≤ V_r(e(1|k)) ≤ γ    (28)
Substituting the state feedback controller u(i|k) = F(k)e(i|k) into the decreasing constraint (26) yields:
e(i|k)^T { (I + L(i|k)F(k))^T P(k) (I + L(i|k)F(k)) − P(k) + Q_e + F(k)^T Q_u F(k) } e(i|k) < 0    (29)
By the properties of linear systems, when every vertex of the polyhedron satisfies constraint (29), the system model at any future moment must satisfy constraint (29); (29) can be rewritten in the vertex form:
(I + L_r F(k))^T P_j(k) (I + L_r F(k)) − P_r(k) + Q_e + F(k)^T Q_u F(k) < 0,  r, j = 1, …, R    (30)
and the following performance requirement is attached:
V(0, k) = e(k|k)^T P(0|k) e(k|k) < γ    (31)
The following optimization problem is established according to (30) and (31), and the following linear matrix inequalities are designed:
min_{γ, Q_r, Y, G} γ    (32)
subject to the linear matrix inequalities (33) and (34) in the variables γ, Q_r, Q_j, Y, and G [equations (33)-(34) appear only as images in the source];
From (34), according to the Schur complement, Q_r > 0 and G + G^T > Q_r; the matrix (G − Q_r)^T Q_r^{−1} (G − Q_r) is nonnegative definite, therefore:
G^T Q_r^{−1} G ≥ G + G^T − Q_r    (35)
According to (35), formula (33) can be converted to the sufficient condition (36) [equation image not reproduced in the source];
Left-multiplying (36) by diag{Q_r G^{−T}, I, I} and right-multiplying it by diag{G^{−1} Q_j, I, I}, (33) is equivalent to the LMI (37) [equation image not reproduced in the source], wherein G = F^{−1} Y;
If the optimization problem (32) has a solution Q_r, r = 1, 2, …, R, and a pair of matrices Y, G, the upper bound γ of J_1^∞(k) is obtained; defining Q_m = min[Q_r], the upper bound of J_1^∞(k) is e(1|k)^T P_m(k) e(1|k), wherein P_m = Q_m/γ.
7. The QMM-MPC underwater robot vision docking control method with online depth estimation as claimed in claim 4, wherein the process of updating the objective function, establishing the total linear matrix inequality set, and solving to obtain the underwater robot vision docking control system is as follows:
From the current time k, e(0|k) = e(k) and u(0|k) = u(k), we obtain:
J(k) = e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + J_1^∞(k)    (38)
According to the kinetic-energy constraint, the camera field-of-view constraint, and the constraint conditions (30) and (38), the six-degree-of-freedom AUV visual servo quasi-min-max model predictive control problem is expressed as follows:
min_{u(k), F(k)} max_{L(p(k+i))∈Ω} J(k)    (39)
subject to the kinetic-energy constraint, the camera field-of-view constraint, and the decreasing constraint    (40)
The following LMIs are further obtained: the LMI forms H(k), C_r(k), and D_r(k) of the cost bound, the decreasing constraint, and the control-input constraint [equations (41)-(43), which appear only as images in the source];
In order to make the objective function easier to solve, the objective function of the optimization problem (39) is made to satisfy:
e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + e(1|k)^T P_m(k) e(1|k) ≤ γ    (44)
Substituting e(1|k) = e(k) + L(p(k))u(k) into (41) and applying the Schur complement yields the equivalent condition:
H(k) ≥ 0    (45)
Applying the Schur complement to the decreasing constraint and the control-input constraint in (40) yields the equivalent conditions:
C_r(k) ≥ 0,  D_r(k) ≥ 0    (46)
Combining (44)-(46), the optimization problem (39) can be rewritten as:
min_{γ, u(k), Q_r, Y, G} γ    (47)
subject to H(k) ≥ 0, C_r(k) ≥ 0, D_r(k) ≥ 0, and e_min − e(k) ≤ B(p(k))u(k) ≤ e_max − e(k)    (48)
wherein e_min − e(k) ≤ B(p(k))u(k) ≤ e_max − e(k) is the visual visibility constraint; the optimization problem (39) is thus transformed into a linear-matrix-inequality description that is more convenient to solve online; assuming the problem is solvable, the optimal solution γ*, u(k)*, Q_m(k)*, and Y(k)* is obtained; defining the AUV visual servo controller as u_mpc(k) = u(k)*, the terminal weight P(k) = γ*Q_m(k)*^{−1}, and F(k) = Y(k)*G(k)*^{−1}, the corresponding AUV closed-loop visual servo control system is:
e(k+1) = e(k) + L(p(k)) u_mpc(k)    (49)
CN202210226486.5A 2022-03-09 2022-03-09 QMM-MPC underwater robot vision docking control method with online depth estimation Active CN114610047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210226486.5A CN114610047B (en) QMM-MPC underwater robot vision docking control method with online depth estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210226486.5A CN114610047B (en) QMM-MPC underwater robot vision docking control method with online depth estimation

Publications (2)

Publication Number Publication Date
CN114610047A true CN114610047A (en) 2022-06-10
CN114610047B CN114610047B (en) 2024-05-28

Family

ID=81860906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210226486.5A Active CN114610047B (en) QMM-MPC underwater robot vision docking control method with online depth estimation

Country Status (1)

Country Link
CN (1) CN114610047B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614560A (en) * 2018-05-31 2018-10-02 浙江工业大学 A kind of mobile robot visual servo guaranteed cost tracking and controlling method
CN108839026A (en) * 2018-07-19 2018-11-20 浙江工业大学 A kind of mobile robot visual servo tracking forecast Control Algorithm
CN110298144A (en) * 2019-07-30 2019-10-01 大连海事大学 The output adjusting method of handover network flight control system based on alternative events triggering
US20210252700A1 (en) * 2020-02-18 2021-08-19 Harbin Institute Of Technology Hybrid visual servoing method based on fusion of distance space and image feature space
CN112256001A (en) * 2020-09-29 2021-01-22 华南理工大学 Visual servo control method for mobile robot under visual angle constraint
CN113031590A (en) * 2021-02-06 2021-06-25 浙江同筑科技有限公司 Mobile robot vision servo control method based on Lyapunov function
CN113034399A (en) * 2021-04-01 2021-06-25 江苏科技大学 Binocular vision based autonomous underwater robot recovery and guide pseudo light source removing method
CN114115285A (en) * 2021-11-29 2022-03-01 大连海事大学 Multi-agent search emotion target path planning method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yue Wei; Guo Ge: "Control of autonomous vehicle platoons under the influence of communication networks", Control Theory & Applications, no. 07, 15 July 2011 (2011-07-15) *

Also Published As

Publication number Publication date
CN114610047B (en) 2024-05-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant