CN114610047B - QMM-MPC underwater robot vision docking control method for online depth estimation


Publication number
CN114610047B
CN114610047B CN202210226486.5A
Authority
CN
China
Prior art keywords
auv
docking
underwater robot
mpc
visual
Prior art date
Legal status
Active
Application number
CN202210226486.5A
Other languages
Chinese (zh)
Other versions
CN114610047A (en
Inventor
岳伟
季嘉诚
邹存名
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202210226486.5A
Publication of CN114610047A
Application granted
Publication of CN114610047B


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 — Control of altitude or depth
    • G05D1/06 — Rate of change of altitude or depth
    • G05D1/0692 — Rate of change of altitude or depth specially adapted for under-water vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a QMM-MPC underwater robot vision docking control method with online depth estimation, belonging to the field of underwater robot vision docking, comprising the following steps: establishing a six-degree-of-freedom AUV visual servo model under the weak-light working condition and under the influence of uncertain depth information; based on the six-degree-of-freedom AUV visual servo model, and combining the motion measurement data of the AUV and the docking-station feature points on the image plane, establishing an online depth estimator for estimating the real-time depth information between the underwater robot and the docking station; based on the six-degree-of-freedom AUV visual servo model and the control target, establishing an objective function and an optimization problem; and, based on the real-time depth information between the underwater robot and the docking station, combining the improved QMM-MPC algorithm to obtain the underwater robot vision docking control system, thereby realizing docking of the underwater robot with the docking station.

Description

QMM-MPC underwater robot vision docking control method for online depth estimation
Technical Field
The invention relates to the field of underwater robot vision docking, in particular to a QMM-MPC underwater robot vision docking control method for online depth estimation.
Background
Over the past few decades, autonomous underwater vehicles (Autonomous Underwater Vehicle, AUV) have developed rapidly in depth sounding, inspection, maintenance, mine exploration, environmental monitoring, surveillance, intervention and the like; their potential to greatly improve operational efficiency in complex underwater environments has attracted a great deal of attention from a growing number of academic and industrial research teams. The main characteristic of an AUV is that it can automatically cope with changes while executing a task and maintain stable control performance without human operation. Considering the complex dynamic characteristics, task complexity and environmental hazards faced by AUVs, the control of AUVs in complex environments has become one of the hot research fields. Consequently, in recent years, autonomous recovery of AUVs via an underwater recovery platform has become an important research direction. At present, various underwater recovery docking systems have been designed at home and abroad for different AUV characteristics and recovery-platform types. However, regardless of the docking mode, the high-precision requirement of the final close-range docking operation has always been a research difficulty.
Most current research relies on visual servoing to estimate the pose of the AUV to complete the docking task, but the complex underwater environment has low visibility, and light refraction, absorption and scattering seriously affect the depth information of the vision camera. Aiming at this problem, researchers have proposed image-based visual servoing, which depends entirely on the motion of the features on the image plane; however, this technique sets the depth information to a constant value, so the problem of uncertain depth is not thoroughly solved, and the underwater docking task is likely to fail. Traditional image-based visual servoing mostly fixes the depth value as a constant, and existing research shows that assuming constant depth information leads to a higher failure rate of the docking task. Moreover, because the traditional QMM-MPC controller designs only one Lyapunov upper bound, it is too conservative for tasks such as six-degree-of-freedom AUV underwater vision docking that require high precision and high real-time performance, thereby causing task failure.
Disclosure of Invention
To address the technical problems, such as the complex environment and depth uncertainty, that prevent the AUV from effectively executing the docking task in existing AUV underwater docking, the invention provides the following technical scheme:
a QMM-MPC underwater robot vision docking control method for online depth estimation comprises the following steps:
establishing a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information;
Based on the AUV visual servo model with six degrees of freedom, combining with motion measurement data of AUV and docking station characteristic points on an image plane, establishing an online depth estimator for estimating real-time depth information between the underwater robot and the docking station, and combining real-time depth estimation values into the model to obtain the AUV visual servo model with the online depth estimator;
Establishing an objective function and an optimization problem according to a control target;
and designing a controller by combining the improved QMM-MPC algorithm, updating the objective function, establishing the total set of linear matrix inequalities (LMIs), and solving to obtain the underwater robot vision docking control system, realizing docking of the underwater robot with the docking station.
Further, the step of establishing the six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information comprises the following steps:
establishing a camera perspective model through the conversion relation among a world coordinate system, an AUV local coordinate system, a camera coordinate system and an image plane coordinate system;
based on the camera perspective model, a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information is established.
Further, the AUV visual servoing model expression with online depth estimator is as follows:
Wherein:
e_i(k) denotes the lateral difference m_ie(k) and the longitudinal difference n_ie(k) between the image-plane coordinates of the i-th feature point and the desired image-plane coordinates; T is the sampling time; v is the total velocity vector of the underwater robot, comprising the linear and angular velocities about each axis; the Jacobian matrix of the i-th feature point in the visual servo error system is given as follows:
wherein f is the focal length of the camera, s_i* denotes the expected coordinates of the i-th feature point on the image plane, and Ẑ_i denotes the estimated depth value of the i-th feature point, expressed as follows:
wherein v_1 = [u, v, w]^T and v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV about the three axes, respectively. Finally, the estimated depth value can be obtained by the least-squares solution.
Further, the method is characterized in that the objective function and the optimization problem are established according to the control target, and the method comprises the following steps:
obtaining the convex polyhedron: the variation ranges of the image coordinates m_ie(k) and n_ie(k) can be determined according to the camera resolution. Bringing these variation boundaries into the Jacobian matrix L(p(k)), L(p(k)) is decomposed into a convex combination of known vertex matrices, such that at any sampling instant k, L(p(k)) varies in the convex polyhedron Ω composed of the vertex matrices L_r (r = 1, 2, …, R), namely: L(p(k)) ∈ Ω = Co{L_1, L_2, …, L_R};
And optimizing the objective function by using a min-max control strategy to obtain an optimization problem.
Further, the optimization problem is as follows:
wherein e(0|k) = e(k); the second term denotes the objective-function value from time k+1 to infinity; Q_e and Q_u are the state and control-input weighting matrices, respectively.
Further, for the visual system modes formed by the different vertices of the convex polyhedron, a design is made for each vertex; the specific process is as follows:
A Lyapunov function is designed for each vertex of the convex polyhedron as follows:
V_r(e(i|k)) = e(i|k)^T P_r(k) e(i|k),  r = 1, …, R  (25)
wherein P_r(k) is the symmetric positive-definite matrix to be solved; if the closed-loop system is asymptotically stable, V_r(e(i|k)) must satisfy:
V_r(e(i+1|k)) − V_r(e(i|k)) ≤ −[e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k)]  (26)
Summing the left-hand side of equation (26) from i = 1 to ∞:
By the asymptotic stability of the closed-loop system, V_r(e(∞|k)) = 0, and the above simplifies, in the system modes composed of the different vertices, to an upper bound of the objective function:
Substituting the state-feedback controller u(i|k) = F(k) e(i|k) into the decreasing constraint (26) gives:
e(i|k)^T {(I + L(i|k)F(k))^T P(k)(I + L(i|k)F(k)) − P(k) + Q_e + F(k)^T Q_u F(k)} e(i|k) < 0  (29)
According to the properties of a linear system, when every polyhedron vertex satisfies constraint (29), the system model at any future instant must satisfy constraint (29); (29) can be rewritten as follows:
and the following performance requirement is added:
V(0, k) = e(k|k)^T P(0|k) e(k|k) < γ  (31)
The following optimization problem is established according to (9) and (10), and the following linear matrix inequality is designed:
Wherein,
According to the Schur complement, Q_r > 0 and G + G^T > Q_r; the matrix obtained from equation (13) is non-negative definite, and therefore:
According to equation (35), equation (33) can be converted into:
Multiplying (36) on the left by diag{Q_r G^{-T}, I, I, I} and on the right by diag{G^{-1} Q_j, I, I, I}, (33) can be equivalently written as the following LMI:
wherein G = F^{-1} Y; said (16) is the optimization problem;
The optimization problem (32) has solutions Q_r, r = 1, 2, …, R, and a pair of matrices Y, G. To calculate the upper bound, define Q_m = min[Q_r]; the obtainable upper bound is e(1|k)^T P_m(k) e(1|k), where P_m = γ Q_m^{-1}.
Further, the objective function is updated and the total group of linear matrix inequalities is established; the process of solving to obtain the underwater robot vision docking control system is as follows:
From the current time k, with e(0|k) = e(k) and u(0|k) = u(k), it is possible to obtain:
based on the kinetic energy constraints, the camera field of view constraints, and the constraints of (10) and (17), the six-degree-of-freedom AUV visual servoing quasi-maximum-minimum model predictive control problem is expressed as follows:
Further, the following LMIs can be obtained:
In order to make the objective function easier to solve, the objective function of the optimization problem (18) is made to satisfy:
e(k)^T Q_e e(k) + u(k)^T Q_u u(k) + e(1|k)^T P_m(k) e(1|k) ≤ γ  (44)
Substituting e(1|k) = e(k) + L(p(k)) u(k) into (44), the equivalent condition can be obtained from the Schur complement:
H(k) ≥ 0  (45)
Applying the Schur complement lemma to the decreasing constraint and the control-input constraint in (40) yields the equivalent conditions:
C_r(k) ≥ 0, D_r(k) ≥ 0  (46)
Combining (44)–(46), the optimization problem (39) can be rewritten as:
wherein e_min − e(k) ≤ B(p(k)) u(k) ≤ e_max − e(k) in (46) is the visual-visibility constraint. The optimization problem (39) is thus converted into a linear-matrix-inequality description that is convenient to solve online. Provided the problem is solved to obtain the optimal solution γ*, u(k)*, Q_m(k)* and Y(k)*, the AUV visual servo controller is defined as u_mpc(k) = u(k)*, the terminal weight is P(k) = γ Q_m(k)*^{-1}, F(k) = Y(k)* G(k)*^{-1}, and the corresponding AUV closed-loop visual servo control system is:
e(k+1) = e(k) + L(p(k)) u_mpc(k)  (49)
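The closed-loop error update (49) can be sketched numerically. The snippet below simulates e(k+1) = e(k) + T·L(p(k))·u(k) for two feature points, with the classical IBVS point-feature interaction matrix standing in for the patent's Jacobian and a simple pseudo-inverse feedback law u = −λ·L⁺·e standing in for the QMM-MPC controller; the matrix form, feature coordinates, depth, and gains are illustrative assumptions, not the patent's values:

```python
import numpy as np

def interaction_matrix(m, n, Z, f):
    # Classical IBVS point-feature interaction matrix (assumed form),
    # relating image-coordinate rates to camera linear/angular velocity.
    return np.array([
        [-f / Z, 0.0, m / Z, m * n / f, -(f**2 + m**2) / f, n],
        [0.0, -f / Z, n / Z, (f**2 + n**2) / f, -m * n / f, -m],
    ])

def stacked_jacobian(coords, Z, f):
    # Stack the 2x6 blocks of all feature points into a 2N x 6 matrix.
    return np.vstack([interaction_matrix(m, n, Z, f) for m, n in coords])

# Illustrative setup: two feature points, constant depth Z, focal length f.
f, Z, T, lam = 800.0, 2.0, 0.1, 0.5
desired = np.array([[50.0, 40.0], [-60.0, 30.0]])
current = desired + np.array([[20.0, -10.0], [-15.0, 12.0]])

e = (current - desired).ravel()
e0 = np.linalg.norm(e)
for _ in range(100):
    coords = desired + e.reshape(-1, 2)      # current image coordinates
    L = stacked_jacobian(coords, Z, f)
    u = -lam * np.linalg.pinv(L) @ e         # stand-in feedback law
    e = e + T * L @ u                        # error update, cf. eq. (49)

print(np.linalg.norm(e) / e0)                # error norm shrinks toward zero
```

With this stand-in law the image-plane error contracts geometrically, illustrating the role equation (49) plays as the plant model that the QMM-MPC controller drives to zero.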
The invention provides a QMM-MPC underwater robot visual docking control method with online depth estimation. An online depth estimator is designed from the motion data of the AUV and the docking-station feature points on the image plane, and the estimated depth value is incorporated as information into the QMM-MPC algorithm; meanwhile, upper bounds of multiple Lyapunov functions are designed to reduce the strong conservatism of the traditional QMM-MPC. Through the above steps, the underwater robot visual docking control system realizes the docking of the underwater robot with the docking station.
The invention adopts an image-based six-degree-of-freedom AUV visual servo model, so that the method depends entirely on the motion of the features on the image plane;
The online depth estimator designed by the invention is related only to the feature-extraction precision and is not influenced by the camera installation position or the AUV motion;
According to the invention, in docking the underwater robot with the docking station, precision is guaranteed by fusing the depth-estimation information; meanwhile, drawing on the multi-Lyapunov min-max control strategy, in the process of solving the upper bound, a Lyapunov function is designed for each of the several modes of the AUV visual servo parametric system model, so that the conservatism of the controller is reduced;
The invention converts the AUV underwater docking real-time optimization problem into the online solution of linear matrix inequalities, which greatly reduces the computational burden and is better suited to work tasks with high real-time requirements, such as AUV underwater vision docking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort to a person skilled in the art.
FIG. 1 is a projection view of a visual docking system on an image plane;
FIG. 2 is a camera trajectory diagram with or without a depth estimator;
FIG. 3: (a) image plane of the conventional QMM-MPC algorithm (without depth estimator); (b) image plane of the improved QMM-MPC algorithm of the present application;
FIG. 4: (a) error map of the conventional QMM-MPC algorithm; (b) error map of the improved QMM-MPC algorithm of the present application;
FIG. 5: (a) velocity map of the conventional QMM-MPC algorithm; (b) velocity map of the improved QMM-MPC algorithm of the present application.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features in the embodiments may be combined with each other, and the present invention will be described in detail below with reference to the drawings and the embodiments.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise. Meanwhile, it should be clear that the dimensions of the respective parts shown in the drawings are not drawn in actual scale for convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In the description of the present invention, it should be understood that the azimuth or positional relationships indicated by the azimuth terms such as "front, rear, upper, lower, left, right", "lateral, vertical, horizontal", and "top, bottom", etc., are generally based on the azimuth or positional relationships shown in the drawings, merely to facilitate description of the present invention and simplify the description, and these azimuth terms do not indicate and imply that the apparatus or elements referred to must have a specific azimuth or be constructed and operated in a specific azimuth, and thus should not be construed as limiting the scope of protection of the present invention: the orientation word "inner and outer" refers to inner and outer relative to the contour of the respective component itself.
Spatially relative terms, such as "above", "over", "on the upper surface of", "on top of", and the like, may be used herein for ease of description to describe the spatial position of one device or feature relative to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "over" other devices or structures would then be oriented "below" or "beneath" the other devices or structures. Thus, the exemplary term "above" may include both the orientations "above" and "below". The device may also be positioned in other different ways (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein are interpreted accordingly.
In addition, the terms "first", "second", etc. are used to define the components, and are only for convenience of distinguishing the corresponding components, and the terms have no special meaning unless otherwise stated, and therefore should not be construed as limiting the scope of the present invention.
An improved QMM-MPC control method with an online depth estimator, comprising the steps of:
S1: establishing a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information;
S2, based on the AUV visual servo model with six degrees of freedom, combining with motion measurement data of the AUV and docking station characteristic points on an image plane, establishing an online depth estimator for estimating real-time depth information between the underwater robot and the docking station, and combining the real-time depth estimation value into the model to obtain the AUV visual servo model with the online depth estimator;
s3, establishing an objective function and optimizing the problem according to the control target;
S4, designing a controller by combining the improved QMM-MPC algorithm, the aim being to reduce the conservatism of the original QMM-MPC algorithm; updating the objective function, establishing the total set of LMIs, and solving to obtain the underwater robot vision docking control system, realizing the docking of the underwater robot with the docking station.
FIG. 1 is a projection view of a visual docking system on an image plane;
steps S1, S2, S3 and S4 are sequentially carried out;
wherein: the improved QMM-MPC algorithm (Quasi-min-max model predictive control, QMM-MPC for short) is also referred to as quasi-min-max model predictive control with multiple Lyapunov functions;
Further, the step of establishing the six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information comprises the following steps:
Establishing a camera perspective model through the conversion relations among the world coordinate system, the AUV local coordinate system, the camera coordinate system and the image-plane coordinate system; FIG. 2 is a camera trajectory diagram with and without the depth estimator; as shown in the figure, although the algorithm without the depth estimator can bring the AUV to the expected pose, the camera trajectory is far from ideal; compared with the algorithm with the depth estimator, it generates much extra motion, greatly wasting AUV energy.
Based on the camera perspective model, a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information is established.
Further, establishing an online depth estimator, and incorporating the real-time depth estimation value into the model to obtain an AUV visual servo model with the online depth estimator, comprising:
According to the motion measurement data of the AUV and the docking-station feature points on the image plane, an online depth estimator is designed, and the depth estimate can be obtained in real time by solving with the least-squares method.
And obtaining the six-degree-of-freedom AUV visual servo model under discrete time by adopting the Euler approximate discrete six-degree-of-freedom AUV visual servo model.
The depth estimation value is integrated into the six-degree-of-freedom AUV visual servo model under discrete time, so that the AUV visual servo model with the online depth estimator is obtained.
Further, establishing an objective function and optimizing the problem according to the control target;
obtaining the convex polyhedron: the variation ranges of the image coordinates m_ie(k) and n_ie(k) can be determined according to the camera resolution. Bringing these variation boundaries into the Jacobian matrix L(p(k)), L(p(k)) can be decomposed into a convex combination of known vertex matrices, such that at any sampling instant k, L(p(k)) varies in the convex polyhedron Ω composed of the vertex matrices L_r (r = 1, 2, …, R), namely: L(p(k)) ∈ Ω = Co{L_1, L_2, …, L_R};
Establishing the objective function to obtain the optimization problem: the control target is to drive the image-plane error to zero in a short time with minimal kinetic-energy input. Based on this control target and further analysis, the visual characteristic parameter vector p(k) in the Jacobian matrix L(p(k)) of the visual servo error system can be measured at the current instant k, whereas its values at future instants cannot be estimated in advance; it can only be guaranteed that its range lies within the convex polyhedron Ω. Therefore, objective functions are defined for the current instant and the future instants respectively; owing to the parameter uncertainty at future instants, the objective function must be optimized with the min-max control strategy, thus obtaining the optimization problem.
Further, the improved QMM-MPC algorithm is combined to reduce the strong conservatism of the original QMM-MPC algorithm. To obtain the optimal performance index, the worst-case control performance of the system must be determined. In traditional QMM-MPC, only a single upper bound of the Lyapunov function is usually designed, which is too conservative for the AUV visual docking system. The present application adopts the visual-system modes formed by the different vertices of the convex polyhedron and designs a Lyapunov function for each vertex. Finally, the optimal upper bound of the objective function is solved.
Further, updating an objective function according to the obtained upper bound of the optimal objective function, establishing a total LMIS, solving to obtain system parameters and control input so as to obtain the visual docking control system of the underwater robot, and realizing docking of the underwater robot and a docking station.
Further, based on the camera perspective model, a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information is established. Specifically, the method comprises the following steps:
The coordinate mapping between the coordinate P_i = (X_i, Y_i, Z_i) of the i-th feature point in the world coordinate system {G} and its projected point p_i = (m_i, n_i) in the image coordinate system is known as follows:
wherein Z_i denotes the depth of the i-th feature point, and M_I, M_E denote the internal-parameter and external-parameter matrices of the camera, respectively, defined as follows:
wherein f denotes the focal length of the camera, (m_0, n_0) denotes the central pixel coordinate of the image plane, dx and dy are the dimensions of each pixel on the x and y axes of the physical image coordinate system, and R, T denote the rotation matrix and translation vector from the camera coordinate system {C} to the world coordinate system {G}, respectively. The rotation matrix R is generated from the ZYX Euler angles and can be described by the following formula:
wherein the ZYX Euler angles φ, θ and ψ mean that the camera rotates by angle φ about the Z axis, by angle θ about the Y axis, and then by angle ψ about the X axis.
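The coordinate mapping (1)–(4) can be sketched concretely. The snippet below builds the ZYX Euler-angle rotation matrix and projects a world point to pixel coordinates with a pinhole model; the intrinsic values and the frame convention (R, T taken here as mapping world to camera coordinates) are illustrative assumptions:

```python
import numpy as np

def rot_zyx(phi, theta, psi):
    # R = Rz(phi) @ Ry(theta) @ Rx(psi): rotate by phi about Z,
    # then by theta about Y, then by psi about X (ZYX Euler angles).
    cz, sz = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(psi), np.sin(psi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def project(P_world, R, T, f, m0, n0, dx, dy):
    # Pinhole projection of a world point to pixel coordinates (m, n);
    # Z is the depth of the feature point in the camera frame.
    P_cam = R @ P_world + T
    Z = P_cam[2]
    m = m0 + f * P_cam[0] / (dx * Z)
    n = n0 + f * P_cam[1] / (dy * Z)
    return m, n, Z

R = rot_zyx(0.1, -0.2, 0.3)
# Rotation matrices are orthonormal with unit determinant:
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
# A point on the optical axis projects to the image centre (m0, n0):
m, n, Z = project(np.array([0.0, 0.0, 2.0]), np.eye(3), np.zeros(3),
                  0.008, 320.0, 240.0, 1e-5, 1e-5)
print(m, n, Z)   # -> 320.0 240.0 2.0
```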
Let the coordinates of the feature point in the world coordinate system be P_i = [X_i, Y_i, Z_i]^T, and differentiate P_i:
wherein v_1 = [u, v, w]^T and v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV about the three coordinate axes, respectively;
from the perspective projection of the camera, the following equation is given:
wherein s_i = [m_i, n_i]^T is the current image coordinate of the i-th feature point on the image plane, and s_i* is the desired image coordinate on the image plane. Differentiating equation (6) and combining it with (5), the visual servo model is obtained as follows:
wherein
is the image Jacobian matrix between the camera velocity and the image-feature rate; v = [v_1, v_2]^T denotes the velocity vector of the AUV, and Z_i denotes the depth information of the i-th feature point. Considering that visual tracking requires N visual feature points, the current image coordinates and the desired image coordinates can be defined as:
The AUV visual servo model obtained from equation (7) is as follows:
where Z = [Z_1, …, Z_i, …, Z_N]^T is the depth vector corresponding to the feature points, and L(s, Z) denotes the stacked Jacobian matrix:
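The stacked model (9)–(10) can be sketched as follows; the per-feature block uses the classical IBVS interaction-matrix form as an assumed stand-in for the patent's exact Jacobian, with each feature carrying its own depth Z_i:

```python
import numpy as np

def point_jacobian(m, n, Z, f):
    # Classical IBVS interaction matrix of one point feature (assumed form).
    return np.array([
        [-f / Z, 0.0, m / Z, m * n / f, -(f**2 + m**2) / f, n],
        [0.0, -f / Z, n / Z, (f**2 + n**2) / f, -m * n / f, -m],
    ])

def stacked_jacobian(s, Z, f):
    # L(s, Z): stack the N per-feature 2x6 blocks into a 2N x 6 matrix,
    # each feature with its own depth Z_i, as in eq. (10).
    return np.vstack([point_jacobian(m, n, Zi, f)
                      for (m, n), Zi in zip(s, Z)])

f = 800.0
s = [(0.0, 0.0), (120.0, -80.0), (-60.0, 45.0)]   # illustrative coordinates
Z = [2.0, 2.5, 1.8]                                # illustrative depths
L = stacked_jacobian(s, Z, f)
print(L.shape)                                     # (6, 6): 2N x 6 for N = 3

# s_dot = L v: translation along the optical axis (w) leaves a feature
# sitting at the principal point (m = n = 0) stationary on the image plane.
v = np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.0])
s_dot = L @ v
print(s_dot[:2])                                   # -> [0. 0.]
```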
Further, based on the six-degree-of-freedom AUV visual servo model, combining the motion measurement data of the AUV and the docking station characteristic points on the image plane, establishing an online depth estimator for estimating real-time depth information between the underwater robot and the docking station, and finally obtaining the AUV visual servo model with the online depth estimator. Specifically:
Considering the visual servo model (9) of the i-th feature point, the first three columns of the Jacobian matrix are known to be depth-dependent; rearranging equation (9) yields:
wherein J_t and J_ω denote the effects of the translational and rotational motion of the camera on the image feature vector, respectively, expressed as follows:
the linear equation for the compact visual servoing system is obtained as follows:
Aθ=b (12)
wherein A = J_t v_1, and b is the residual optical flow, i.e., the difference between the observed optical flow and the optical flow induced by the camera rotation; solving (12) yields:
wherein the estimate θ̂ provides depth-estimation information that closely approximates the real depth value for the design of the model predictive controller;
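The estimator (11)–(13) can be sketched as below: the observed optical flow is split into a depth-scaled translational part J_t·v_1/Z and a rotational part J_ω·v_2, and the unknown is recovered by least squares. The interaction-matrix form and the interpretation of the unknown θ as the inverse depth 1/Z are assumptions consistent with standard IBVS, not quoted from the patent:

```python
import numpy as np

def flow_parts(m, n, f):
    # J_t: effect of camera translation (to be scaled by 1/Z);
    # J_w: effect of camera rotation on the image feature.
    Jt = np.array([[-f, 0.0, m],
                   [0.0, -f, n]])
    Jw = np.array([[m * n / f, -(f**2 + m**2) / f, n],
                   [(f**2 + n**2) / f, -m * n / f, -m]])
    return Jt, Jw

f, Z_true = 800.0, 2.5
m, n = 60.0, -40.0
v1 = np.array([0.1, -0.05, 0.2])     # linear velocity u, v, w
v2 = np.array([0.01, 0.02, -0.01])   # angular velocity p, q, r

Jt, Jw = flow_parts(m, n, f)
s_dot = Jt @ v1 / Z_true + Jw @ v2   # simulated measured optical flow

# A*theta = b with theta = 1/Z (assumed): b is the residual flow after
# removing the rotation-induced component, cf. eq. (12).
A = (Jt @ v1).reshape(-1, 1)
b = s_dot - Jw @ v2
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
Z_hat = 1.0 / theta[0]
print(Z_hat)   # recovers the true depth, 2.5
```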
considering the AUV visual servo model (9), further define the visual tracking error:
Differentiating equation (14) gives:
Considering the AUV visual servo error system (15), selecting T as the sampling time, and discretizing the system by the Euler approximation, the AUV visual servo model with the online depth estimator is obtained:
wherein e(k) = [e_1(k), …, e_i(k), …, e_N(k)]^T = [m_1e(k), n_1e(k), …, m_ie(k), n_ie(k), …, m_Ne(k), n_Ne(k)]^T denotes the lateral and longitudinal errors of each feature point in the image plane,
and wherein Ẑ(k) is the vector of online-estimated depth values of all feature points; the Jacobian matrix of the i-th feature point can be expressed as follows:
Further, based on the six-degree-of-freedom AUV visual servo model and the control target, an objective function and an optimization problem are established:
From equation (18), the Jacobian matrix L(p(k)) is a function of the variables m_ie(k) and n_ie(k); that is, it varies with the parameter vector p(k) = [m_ie(k), n_ie(k)]. The variation ranges of the image coordinates m_ie(k) and n_ie(k) can be determined from the camera resolution. Substituting these variation boundaries, L(p(k)) can be decomposed into a convex combination of known vertex matrices, so that at any sampling instant k, L(p(k)) varies within a convex polyhedron Ω composed of the vertex matrices L_r (r = 1, 2, …, R), i.e.:
L(p(k))∈Ω=Co{L1,L2,…,LR} (19)
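The vertex matrices L_r of (19) can be enumerated by evaluating the Jacobian at every combination of the image-coordinate bounds. This sketch reuses the standard IBVS point model as an assumption; the bound values are illustrative, not taken from the patent:

```python
import numpy as np
from itertools import product

def vertex_matrices(f, Z_hat, m_bounds, n_bounds):
    """Vertex matrices L_r of the convex polyhedron Omega: the Jacobian
    evaluated at every corner of the image-coordinate box (4 per feature)."""
    verts = []
    for m, n in product(m_bounds, n_bounds):
        verts.append(np.array([
            [-f / Z_hat, 0.0, m / Z_hat, m * n / f, -(f**2 + m**2) / f, n],
            [0.0, -f / Z_hat, n / Z_hat, (f**2 + n**2) / f, -m * n / f, -m],
        ]))
    return verts

# Coordinate bounds derived (hypothetically) from the camera resolution:
L_r = vertex_matrices(f=1.0, Z_hat=2.0, m_bounds=(-0.5, 0.5), n_bounds=(-0.4, 0.4))
print(len(L_r))  # 4 vertex matrices for a single feature point
```

Any L(p(k)) with coordinates inside the box is then a convex combination of these vertices, which is what lets the min-max controller reason only about the finitely many L_r.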
consider that the error system (16) satisfies the following constraint:
The visual feature parameter vector p(k) in the Jacobian matrix L(p(k)) is measurable and known at the current time k, whereas its values at future times cannot be estimated in advance; only its range of variation is guaranteed to lie within the convex polyhedron Ω. Therefore, predictive controllers are designed separately for the current time and for future times:
The objective function is defined as follows:
where u(0|k) = u(k) = [v_1, v_2], e(0|k) = e(k), the second term is the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively. Because the objective function at future times contains parameter uncertainty, it must be optimized with a min-max control strategy, which is equivalent to optimizing the worst-case future cost: the worst-case performance arising from the future-time uncertainty is bounded by γ, and near-optimal performance is obtained by minimizing this bound. The optimization problem is thus obtained as follows:
where e(0|k) = e(k); the remaining terms denote the control inputs from time k+1 to infinity and the corresponding objective function value, and Q_e and Q_u are the state and control-input weighting matrices, respectively.
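The min-max idea can be illustrated with a toy numerical sketch: for each candidate input, the one-step cost is evaluated at every vertex model of Ω and the worst case is minimized. A coarse grid search stands in for the LMI optimization, and all matrices and dimensions here are illustrative assumptions, not the patent's:

```python
import numpy as np

def worst_case_cost(e, u, vertices, Qe, Qu):
    """Worst case of the one-step cost e'^T Qe e' + u^T Qu u over all
    vertex models e' = e + L_r u of the polytopic system."""
    return max((e + L @ u) @ Qe @ (e + L @ u) + u @ Qu @ u for L in vertices)

def minmax_input(e, vertices, Qe, Qu, candidates):
    """min-max strategy: pick the input minimizing the worst-case cost."""
    return min(candidates, key=lambda u: worst_case_cost(e, u, vertices, Qe, Qu))

vertices = [np.eye(2), np.diag([1.2, 0.8])]   # two hypothetical vertex models of Omega
e = np.array([1.0, -1.0])
Qe, Qu = np.eye(2), 0.1 * np.eye(2)
grid = np.linspace(-1.0, 1.0, 21)
candidates = [np.array([a, b]) for a in grid for b in grid]
u_star = minmax_input(e, vertices, Qe, Qu, candidates)
# The min-max input beats doing nothing even in the worst vertex model:
print(worst_case_cost(e, u_star, vertices, Qe, Qu)
      < worst_case_cost(e, np.zeros(2), vertices, Qe, Qu))  # True
```

The LMI formulation in the following sections replaces this brute-force search with a convex program whose bound γ holds for the whole infinite horizon, not just one step.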
Further, in order to stabilize the underwater robot visual docking control system, an improved quasi-min-max model predictive controller incorporating the depth estimation information is designed, in two main steps, as follows:
S41: in the traditional QMM-MPC there is only one Lyapunov-function upper bound, which makes the AUV visual docking system strongly conservative, so that a suitable velocity cannot be planned for the AUV and the docking task fails. Here, for the visual-system modes formed by the different vertices of the convex polyhedron, a Lyapunov function is designed for each vertex, reducing the strong conservatism of the traditional QMM-MPC. The details are as follows (i|k denotes the prediction of time k+i made at time k):
For the optimization problem (23), define the state feedback control law:
u(i|k)=F(k)e(i|k),i=1,2,…,∞ (24)
Where F (k) is the feedback control gain.
The lyapunov function V r (e (i|k)) is designed for each vertex as follows:
Vr(e(i|k))=e(i|k)TPr(k)e(i|k),r=1,…,R (25)
where P_r(k) is a symmetric positive-definite matrix to be solved and R is the number of vertices. For the closed-loop system to be asymptotically stable, the Lyapunov function V_r(e(i|k)) must satisfy
V_r(e(i+1|k)) - V_r(e(i|k)) ≤ -[e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k)] (26)
Summing inequality (26) from i = 1 to +∞:
By the asymptotic stability of the closed-loop system, V_r(e(+∞|k)) = 0, and the sum simplifies so that, in the system modes formed by the different vertices, V_r(e(1|k)) is an upper bound on the objective function value from time k+1 to infinity, where Q_e and Q_u are the state and control-input weighting matrices, respectively:
Further, substituting the state feedback controller u(i|k) = F(k)e(i|k) of (24) into the decremental constraint (26) gives:
e(i|k)^T{(I+L(i|k)F(k))^T P(k)(I+L(i|k)F(k)) - P(k) + Q_e + F(k)^T Q_u F(k)}e(i|k) < 0 (29)
According to the properties of linear systems, when every vertex of the polyhedron satisfies constraint (29), the system model at any future time also satisfies (29); (29) can be rewritten as follows:
and the following performance requirements are added:
V(0,k)=e(k|k)TP(0|k)e(k|k)<γ (31)
The following optimization problem is established according to (30) and (31), and the following linear matrix inequality is designed:
where Q_r = γP_r^{-1}. According to the Schur complement lemma, Q_r > 0 and G + G^T > Q_r are obtained from (34); furthermore, the matrix (G - Q_r)^T Q_r^{-1} (G - Q_r) is positive semidefinite, and thus G^T Q_r^{-1} G ≥ G + G^T - Q_r (35).
According to equation (35), equation (33) may be converted into:
Pre-multiplying (36) by diag{Q_r G^{-T}, I, I, I} and post-multiplying by diag{G^{-1} Q_j, I, I, I}, (33) can be equivalently expressed as the following LMI:
where G = F^{-1} Y.
Suppose the optimization problem (32) has a solution Q_r, r = 1, 2, …, R and a pair of matrices Y, G. To compute the upper bound on the future cost, define Q_m = min[Q_r]; the resulting upper bound is e(1|k)^T P_m(k) e(1|k), where P_m = γQ_m^{-1};
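The per-vertex decremental condition (26) under the feedback law (24) can be checked numerically. The gain below is a simple pseudoinverse-based choice for illustration, not the solution of the LMI problem (32); the vertex models and weights are likewise hypothetical:

```python
import numpy as np

# Numerical check of the decremental constraint (26): under u = F e, the closed
# loop at vertex r is e+ = (I + L_r F) e, and V_r(e+) - V_r(e) must not exceed
# -(e^T Qe e + u^T Qu u).

rng = np.random.default_rng(0)
L_verts = [np.eye(2),
           np.array([[1.1, 0.1], [0.0, 0.9]])]   # hypothetical vertex models
Qe, Qu = 0.1 * np.eye(2), 0.01 * np.eye(2)
F = -0.8 * np.linalg.pinv(L_verts[0])            # illustrative feedback gain, u = F e
P = np.eye(2)                                    # candidate Lyapunov weight

def decrement_ok(L, e):
    u = F @ e
    e_next = e + L @ u                           # e(i+1|k) = (I + L F) e(i|k)
    dV = e_next @ P @ e_next - e @ P @ e
    return dV <= -(e @ Qe @ e + u @ Qu @ u) + 1e-12

samples = rng.standard_normal((100, 2))
print(all(decrement_ok(L, e) for L in L_verts for e in samples))  # True: (26) holds here
```

In the patent this check is not done by sampling: the LMIs certify (26) for every state and every vertex simultaneously, with a separate P_r per vertex to reduce conservatism.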
S42: update the objective function and establish the overall set of linear matrix inequalities (LMIs), as follows:
Considering the current instant k, with e(0|k) = e(k) and u(0|k) = u(k), rearranging (22) gives:
According to (20), (21), (30) and (38), the AUV visual servo quasi-min-max model predictive control problem is expressed as follows:
Further, the following LMIs can be obtained:
To make the objective function easier to solve, the objective function of the optimization problem (39) is required to satisfy:
e(k)TQee(k)+u(k)TQuu(k)+e(1|k)TPm(k)e(1|k)≤γ (44)
Substituting e(1|k) = e(k) + L(p(k))u(k) into (41), the Schur complement lemma yields the equivalent condition:
H(k)≥0 (45)
Applying the Schur complement lemma to the decremental constraint and the control-input constraint in (40) yields the equivalent conditions:
Cr(k)≥0,Dr(k)≥0 (46)
Combining (44) - (46), the optimization problem (39) can be rewritten as:
where the last constraint in (46b) is the visual visibility constraint. The optimization problem (39) is thereby converted into a linear matrix inequality description that is convenient to solve online. Assuming the problem is solvable, the optimal solution γ*, u(k)*, Q_m(k)* and Y(k)* is obtained. The AUV visual servo controller is then defined as u_mpc(k) = u(k)*, with terminal weight P(k) = γQ_m(k)*^{-1} and F(k) = Y(k)*G(k)*^{-1}; the corresponding AUV closed-loop visual servo control system is:
e(k+1)=e(k)+L(p(k))umpc(k) (49)
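The closed loop (49) can be simulated with a stand-in controller. Here u_mpc is replaced by a damped pseudoinverse IBVS law, a common baseline; this illustrates only the closed-loop update, not the patented QMM-MPC solution, and the interaction matrix is again the standard point model (an assumption):

```python
import numpy as np

f, Z_hat, lam, T = 1.0, 2.0, 0.4, 1.0
e = np.array([0.3, -0.2])                       # initial feature error (one point)

def jacobian(m, n):
    # Standard IBVS point interaction matrix with estimated depth (assumption).
    return np.array([
        [-f / Z_hat, 0.0, m / Z_hat, m * n / f, -(f**2 + m**2) / f, n],
        [0.0, -f / Z_hat, n / Z_hat, (f**2 + n**2) / f, -m * n / f, -m],
    ])

norms = [np.linalg.norm(e)]
for _ in range(30):
    L = jacobian(*e)                            # p(k): current error coordinates
    u = -lam * np.linalg.pinv(L) @ e            # surrogate for u_mpc(k)
    e = e + T * (L @ u)                         # e(k+1) = e(k) + L(p(k)) u_mpc(k)
    norms.append(np.linalg.norm(e))

print(norms[-1] < 1e-3 * norms[0])  # True: the feature error contracts toward zero
```

Because L has full row rank here, L·pinv(L) = I and each step contracts the error by the factor (1 - λT); the QMM-MPC controller would instead choose u to respect the input and visibility constraints while minimizing the worst-case bound γ.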
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
FIG. 3: (a) image-plane trajectory of the conventional QMM-MPC algorithm (without depth estimator); (b) image-plane trajectory of the improved QMM-MPC algorithm of the present application. Comparing (a) and (b), the trajectory of the conventional QMM-MPC algorithm (without depth estimator) is more tortuous, backtracks severely, and exhibits jagged motion.
FIG. 4: (a) error curves of the conventional QMM-MPC algorithm; (b) error curves of the improved QMM-MPC algorithm of the present application. Comparing (a) and (b), the present application achieves better control performance than the traditional algorithm in terms of convergence and response time.
FIG. 5: (a) velocity curves of the conventional QMM-MPC algorithm; (b) velocity planning of the improved QMM-MPC algorithm of the present application. Compared with the traditional algorithm, the planned velocity values are larger while still satisfying the constraints, so the AUV reaches the desired position faster, greatly reducing the strong conservatism of the traditional algorithm.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (3)

1. The QMM-MPC underwater robot vision docking control method for online depth estimation is characterized by comprising the following steps of:
establishing a six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information;
the six-degree-of-freedom AUV visual servo model under the weak light working condition and under the influence of uncertain depth information is established, and the method comprises the following steps:
establishing a camera perspective model through the conversion relation among a world coordinate system, an AUV local coordinate system, a camera coordinate system and an image plane coordinate system;
Based on a camera perspective model, establishing a six-degree-of-freedom AUV visual servo model under a weak light working condition and under the influence of uncertain depth information;
Based on the AUV visual servo model with six degrees of freedom, combining with motion measurement data of AUV and docking station characteristic points on an image plane, establishing an online depth estimator for estimating real-time depth information between the underwater robot and the docking station, and combining real-time depth estimation values into the model to obtain the AUV visual servo model with the online depth estimator;
the AUV visual servo model expression with the online depth estimator is as follows:
Wherein:
e(k)=[e1(k),…,ei(k),…,eN(k)]T=[m1e(k),n1e(k),…,mNe(k),nNe(k)]T
e_i(k) comprises the lateral difference m_ie(k) and the longitudinal difference n_ie(k) between the image-plane coordinates of the i-th feature point and the desired image-plane coordinates; T is the sampling time; v is the total velocity vector of the underwater robot, comprising the linear velocity and angular velocity about each axis; the Jacobian matrix of the i-th feature point in the visual servo error system is given as follows:
where f is the focal length of the camera, the remaining symbols denote the desired coordinates of the i-th feature point on the image plane and the estimated depth value of the i-th feature point, which is expressed as follows:
where v_1 = [u, v, w]^T and v_2 = [p, q, r]^T denote the linear and angular velocities of the AUV along the three coordinate axes, respectively; finally, the estimated depth value is obtained by a least-squares solution;
Establishing an objective function and an optimization problem according to a control target;
according to the control target, an objective function and an optimization problem are established, and the method comprises the following steps:
obtaining the convex polyhedron: the variation ranges of the image coordinates m_ie(k) and n_ie(k) are determined from the camera resolution; substituting these variation boundaries into the Jacobian matrix L(p(k)), L(p(k)) is decomposed into a convex combination of known vertex matrices, so that at any sampling instant k, L(p(k)) varies within a convex polyhedron Ω composed of the vertex matrices L_r (r = 1, 2, …, R), namely:
L(p(k))∈Ω=Co{L1,L2,…,LR};
optimizing an objective function by using a min-max control strategy to obtain an optimization problem;
the optimization problem is as follows:
where e(0|k) = e(k), the second term denotes the objective function value from time k+1 to infinity, and Q_e and Q_u are the state and control-input weighting matrices, respectively;
and designing the controller by combining the improved QMM-MPC algorithm: updating the objective function, establishing the overall set of LMIs, and solving to obtain the underwater robot visual docking control system, thereby realizing docking of the underwater robot with the docking station.
2. The QMM-MPC underwater robot visual docking control method for online depth estimation according to claim 1, wherein the specific process of designing a Lyapunov function for each vertex, for the visual-system modes formed by the different vertices of the convex polyhedron, is as follows:
a Lyapunov function is designed for each vertex of the convex polyhedron as follows:
Vr(e(i|k))=e(i|k)TPr(k)e(i|k),r=1,…,R (25)
wherein, P r (k) is a symmetric positive definite matrix to be solved, if the closed loop system is asymptotically stabilized, V r (e (i|k)) needs to satisfy:
V_r(e(i+1|k)) - V_r(e(i|k)) ≤ -[e(i|k)^T Q_e e(i|k) + u(i|k)^T Q_u u(i|k)] (26)
Summing inequality (26) from i = 1 to +∞:
By the asymptotic stability of the closed-loop system, V_r(e(+∞|k)) = 0, and the sum simplifies so that, in the system modes formed by the different vertices, V_r(e(1|k)) is an upper bound on the objective function value from time k+1 to infinity:
Substituting the state feedback controller u(i|k) = F(k)e(i|k) into the decremental constraint (26) gives:
e(i|k)^T{(I+L(i|k)F(k))^T P(k)(I+L(i|k)F(k)) - P(k) + Q_e + F(k)^T Q_u F(k)}e(i|k) < 0 (29)
according to the properties of linear systems, when every vertex of the polyhedron satisfies constraint (29), the system model at any future time also satisfies (29); (29) can be rewritten as follows:
and the following performance requirements are added:
V(0,k)=e(k|k)TP(0|k)e(k|k)<γ (31)
the following optimization problem is established and the following linear matrix inequality is designed:
wherein Q_r = γP_r^{-1};
according to the Schur complement lemma, Q_r > 0 and G + G^T > Q_r are obtained, and the matrix (G - Q_r)^T Q_r^{-1} (G - Q_r) is positive semidefinite; therefore G^T Q_r^{-1} G ≥ G + G^T - Q_r (35);
according to equation (35), equation (33) may be converted into:
Pre-multiplying (36) by diag{Q_r G^{-T}, I, I, I} and post-multiplying by diag{G^{-1} Q_j, I, I, I}, (33) can be equivalently expressed as the following LMI:
wherein G = F^{-1} Y; this constitutes the optimization problem of the AUV visual servo model;
suppose the linear matrix inequalities of the optimization problem have a solution Q_r, r = 1, 2, …, R and a pair of matrices Y, G; to compute the upper bound on the future cost, define Q_m = min[Q_r]; the resulting upper bound is e(1|k)^T P_m(k) e(1|k), where P_m = γQ_m^{-1}.
3. The QMM-MPC underwater robot visual docking control method of claim 1, wherein the process of updating the objective function, establishing the overall set of linear matrix inequalities, and solving to obtain the underwater robot visual docking control system is as follows:
From the current time k, e (0|k) =e (k) and u (0|k) =u (k), it is possible to obtain:
based on the dynamics constraints, the camera field-of-view constraints, and the constraints of (10) and (17), the six-degree-of-freedom AUV visual servo quasi-min-max model predictive control problem is expressed as follows:
Further, the following LMIs can be obtained:
to make the objective function easier to solve, the objective function of the optimization problem (18) is required to satisfy:
e(k)TQee(k)+u(k)TQuu(k)+e(1|k)TPm(k)e(1|k)≤γ (44)
substituting e(1|k) = e(k) + L(p(k))u(k) into (41), the Schur complement lemma yields the equivalent condition:
H(k)≥0 (45)
applying the Schur complement lemma to the decremental constraint and the control-input constraint in (40) yields the equivalent conditions:
Cr(k)≥0,Dr(k)≥0 (46)
Combining (44) - (46), the optimization problem (39) can be rewritten as:
wherein e_min - e(k) ≤ B(p(k))u(k) ≤ e_max - e(k) in (46) is the visual visibility constraint; the optimization problem (39) is converted into a linear matrix inequality description that is more convenient to solve online; if solving the problem yields the optimal solution γ*, u(k)*, Q_m(k)* and Y(k)*, the AUV visual servo controller is defined as u_mpc(k) = u(k)*, with terminal weight P(k) = γQ_m(k)*^{-1} and F(k) = Y(k)*G(k)*^{-1}, and the corresponding AUV closed-loop visual servo control system is:
e(k+1)=e(k)+L(p(k))umpc(k) (49)。
CN202210226486.5A 2022-03-09 2022-03-09 QMM-MPC underwater robot vision docking control method for online depth estimation Active CN114610047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210226486.5A CN114610047B (en) 2022-03-09 2022-03-09 QMM-MPC underwater robot vision docking control method for online depth estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210226486.5A CN114610047B (en) 2022-03-09 2022-03-09 QMM-MPC underwater robot vision docking control method for online depth estimation

Publications (2)

Publication Number Publication Date
CN114610047A CN114610047A (en) 2022-06-10
CN114610047B true CN114610047B (en) 2024-05-28

Family

ID=81860906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210226486.5A Active CN114610047B (en) 2022-03-09 2022-03-09 QMM-MPC underwater robot vision docking control method for online depth estimation

Country Status (1)

Country Link
CN (1) CN114610047B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614560A (en) * 2018-05-31 2018-10-02 浙江工业大学 A kind of mobile robot visual servo guaranteed cost tracking and controlling method
CN108839026A (en) * 2018-07-19 2018-11-20 浙江工业大学 A kind of mobile robot visual servo tracking forecast Control Algorithm
CN110298144A (en) * 2019-07-30 2019-10-01 大连海事大学 The output adjusting method of handover network flight control system based on alternative events triggering
CN112256001A (en) * 2020-09-29 2021-01-22 华南理工大学 Visual servo control method for mobile robot under visual angle constraint
CN113034399A (en) * 2021-04-01 2021-06-25 江苏科技大学 Binocular vision based autonomous underwater robot recovery and guide pseudo light source removing method
CN113031590A (en) * 2021-02-06 2021-06-25 浙江同筑科技有限公司 Mobile robot vision servo control method based on Lyapunov function
CN114115285A (en) * 2021-11-29 2022-03-01 大连海事大学 Multi-agent search emotion target path planning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111360820B (en) * 2020-02-18 2021-01-22 哈尔滨工业大学 Distance space and image feature space fused hybrid visual servo method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108614560A (en) * 2018-05-31 2018-10-02 浙江工业大学 A kind of mobile robot visual servo guaranteed cost tracking and controlling method
CN108839026A (en) * 2018-07-19 2018-11-20 浙江工业大学 A kind of mobile robot visual servo tracking forecast Control Algorithm
CN110298144A (en) * 2019-07-30 2019-10-01 大连海事大学 The output adjusting method of handover network flight control system based on alternative events triggering
CN112256001A (en) * 2020-09-29 2021-01-22 华南理工大学 Visual servo control method for mobile robot under visual angle constraint
CN113031590A (en) * 2021-02-06 2021-06-25 浙江同筑科技有限公司 Mobile robot vision servo control method based on Lyapunov function
CN113034399A (en) * 2021-04-01 2021-06-25 江苏科技大学 Binocular vision based autonomous underwater robot recovery and guide pseudo light source removing method
CN114115285A (en) * 2021-11-29 2022-03-01 大连海事大学 Multi-agent search emotion target path planning method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Control of autonomous vehicle platoons under the influence of communication networks; Yue Wei; Guo Ge; Control Theory & Applications; 2011-07-15 (Issue 07); full text *

Also Published As

Publication number Publication date
CN114610047A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
Zhang et al. 2D Lidar‐Based SLAM and Path Planning for Indoor Rescue Using Mobile Robots
Liang et al. Leader-following formation tracking control of mobile robots without direct position measurements
Murali et al. Perception-aware trajectory generation for aggressive quadrotor flight using differential flatness
Shen et al. Autonomous multi-floor indoor navigation with a computationally constrained MAV
Li et al. Visual servo regulation of wheeled mobile robots with simultaneous depth identification
Spica et al. Active structure from motion: Application to point, sphere, and cylinder
Spasojevic et al. Perception-aware time optimal path parameterization for quadrotors
Gur fil et al. Partial aircraft state estimation from visual motion using the subspace constraints approach
CN111522351B (en) Three-dimensional formation and obstacle avoidance method for underwater robot
CN110561420B (en) Arm profile constraint flexible robot track planning method and device
CN111746523A (en) Vehicle parking path planning method and device, vehicle and storage medium
Cristofalo et al. Vision-based control for fast 3-d reconstruction with an aerial robot
Toro-Arcila et al. Visual path following with obstacle avoidance for quadcopters in indoor environments
Copot et al. Image-based and fractional-order control for mechatronic systems
CN114610047B (en) QMM-MPC underwater robot vision docking control method for online depth estimation
Giordano et al. 3D structure identification from image moments
Rioux et al. Cooperative vision-based object transportation by two humanoid robots in a cluttered environment
Wu et al. Experiments on high-performance maneuvers control for a work-class 3000-m remote operated vehicle
Ma et al. Moving to OOP: An active observation approach for a novel composite visual servoing configuration
Zhou et al. Visual servo control of underwater vehicles based on image moments
CN114815797A (en) Probability map fusion-based multi-unmanned-vessel task processing method and device
Lin et al. Autonomous Landing of a VTOL UAV on a Ship based on Tau Theory
Wang et al. Research on SLAM road sign observation based on particle filter
Heshmati-Alamdari Cooperative and Interaction Control for Underwater Robotic Vehicles
Zhang et al. An unscented Kalman filter-based visual pose estimation method for underwater vehicles

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant