CN110705105B - Modeling method and system for inverse dynamics model of robot - Google Patents

Publication number: CN110705105B (application CN201910948416.9A; first published as CN110705105A)
Authority: CN (China)
Prior art keywords: layer, data, input, output, memory unit
Legal status: Active
Inventors: 邵振洲, 渠瀛, 陈曦, 孙鹏飞, 关永, 施智平, 王东方
Assignee (current and original): Capital Normal University
Application filed by Capital Normal University

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent


Abstract

The invention discloses a modeling method and system for a robot inverse dynamics model. The method comprises: building a recurrent neural network comprising an input layer, a hidden layer and an output layer, wherein the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer reset gate, and the inner layer comprises an inner forgetting gate and an inner reset gate; and training an inverse dynamics model of the mechanical arm with the recurrent neural network. The input data of the recurrent neural network is the motion state of the mechanical arm (motion position, motion velocity and motion acceleration), and the output data is the torque that drives the mechanical arm to produce that motion state. The inner-layer memory unit data output by the inner layer of the hidden layer updates the outer-layer memory unit data of the outer layer, so that time-series memory information is retained for a longer time and the accuracy of the torque predicted by the inverse dynamics model is improved.

Description

Modeling method and system for inverse dynamics model of robot
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a modeling method and a system of a robot inverse dynamics model.
Background
In modern society, robots play a very important role in our lives: they can replace people in simple, mechanical, and even dangerous high-altitude work, and a crucial part of this is robot control. The inverse dynamics model of a robot calculates, from the motion state of the robot's mechanical arm (motion position, motion velocity and motion acceleration), the corresponding torque for controlling the arm's motion, so as to achieve ideal control. However, current physical modeling of the robot inverse dynamics model suffers from errors caused by friction, centripetal force, Coriolis force and the like, so the established inverse dynamics model cannot adapt to the current motion state of the mechanical arm and cannot meet the accuracy required in practical applications.
In recent years, with the development of artificial intelligence, neural-network-based methods have addressed the above problems effectively. A neural network has extremely strong nonlinear mapping capability: a model trained on a large amount of data fits the actual motion state accurately without explicitly modeling the uncertain factors, which improves the accuracy of model prediction.
Existing neural-network-based methods for modeling robot inverse dynamics include the Echo State Network (ESN) and the Long Short-Term Memory network (LSTM). Both are variants of the Recurrent Neural Network (RNN), which uses its own memory mechanism to combine the mechanical arm's past motion states with its current motion state and thereby simulate a complex robot system. However, neither can retain memory information for a long time, and thus neither provides a sufficiently accurate time-series prediction model.
Disclosure of Invention
The invention aims to provide a modeling method and a system of a robot inverse dynamics model, which can retain longer-time memory information so as to improve the prediction precision of the inverse dynamics model.
In order to achieve the purpose, the invention provides the following scheme:
a method of modeling an inverse dynamics model of a robot, the method comprising:
building a recurrent neural network, wherein the recurrent neural network comprises an input layer, a hidden layer and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer resetting gate, and the inner layer comprises an inner forgetting gate and an inner resetting gate;
determining input data and output data of the recurrent neural network, wherein the input data of the recurrent neural network is the motion state of the mechanical arm, the output data of the recurrent neural network is torque for controlling the mechanical arm to generate the motion state, and the motion state comprises a motion position, a motion speed and a motion acceleration;
The input data are input into the recurrent neural network according to the movement time of the mechanical arm; inputting data output from the input layer to an outer layer of the hidden layer;
the outer forgetting gate obtains the outer-layer memory unit data at the current moment from the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and inputs the outer-layer memory unit data at the current moment to the inner layer;
the inner forgetting gate obtains the inner-layer memory unit data at the current moment from the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment;
the inner reset gate updates the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment;
the updated outer-layer memory unit data at the current moment passes through a skip connection at the outer reset gate to obtain the output data of the hidden layer; the output data of the hidden layer is output after a linear transformation in the output layer;
comparing the output data of the hidden layer with the collected real output data using a mean-square-error training formula to obtain a comparison error; and adjusting the parameters of the linear transformation, the inverse dynamics model of the mechanical arm being obtained when the comparison error is smaller than a set threshold.
Optionally, the calculation formula of the outer-layer memory unit data at the current moment is: $c_t = f_t \odot c_{t-1} + (1 - f_t) \odot X_t$, where $c_t$ denotes the outer-layer memory unit data at the current moment, $c_{t-1}$ the outer-layer memory unit data at the previous moment, and $X_t = W x_t$ the weighted data input to the outer layer at the current moment, with $W$ the outer-layer weight; $f_t$ denotes the proportion of the data input to the outer layer at the current moment that is applied to the outer forgetting gate; $t$ denotes the current moment and $t-1$ the previous moment;
$f_t = \sigma(W_f x_t + b_f)$, where $x_t$ denotes the data input to the outer layer at the current moment, $W_f$ the outer forgetting gate weight, $b_f$ the outer forgetting gate bias, and $\sigma$ the sigmoid activation function.
Optionally, the calculation formula of the inner-layer memory unit data at the current moment is:
$\hat{c}_t = \hat{f}_t \odot \hat{c}_{t-1} + (1 - \hat{f}_t) \odot \hat{X}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{c}_{t-1}$ the inner-layer memory unit data at the previous moment, $\hat{X}_t = \hat{W} \hat{x}_t$ the weighted data input to the inner layer at the current moment, $\hat{x}_t = c_t$ the data input to the inner layer at the current moment (the outer-layer memory unit data), $\hat{W}$ the inner-layer weight, and $\hat{f}_t$ the proportion of the data input to the inner layer at the current moment that is applied to the inner forgetting gate;
$\hat{f}_t = \sigma(\hat{W}_f \hat{x}_t + \hat{b}_f)$, where $\hat{W}_f$ denotes the inner forgetting gate weight and $\hat{b}_f$ the inner forgetting gate bias.
Optionally, the update formula of the outer-layer memory unit data at the current moment is:
$c_t = \hat{r}_t \odot g(\hat{c}_t) + (1 - \hat{r}_t) \odot \hat{x}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{r}_t$ the proportion of data output from the inner reset gate at the current moment, and $g$ the tanh activation function;
$\hat{r}_t = \sigma(\hat{W}_r \hat{x}_t + \hat{b}_r)$, where $\hat{W}_r$ denotes the inner reset gate weight and $\hat{b}_r$ the inner reset gate bias.
Optionally, the output formula of the hidden layer is: $h_t = r_t \odot g(c_t) + (1 - r_t) \odot x_t$, where $h_t$ denotes the output data of the hidden layer at the current moment, $r_t$ the proportion of data output from the outer reset gate at the current moment, and $g$ the tanh activation function;
$r_t = \sigma(W_r x_t + b_r)$, where $W_r$ denotes the outer reset gate weight and $b_r$ the outer reset gate bias.
The invention also provides a modeling system of the robot inverse dynamics model, which comprises the following components:
the recurrent neural network building module is used for building a recurrent neural network, the recurrent neural network comprises an input layer, a hidden layer and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer resetting gate, and the inner layer comprises an inner forgetting gate and an inner resetting gate;
the input data and output data determining module is used for determining input data and output data of the recurrent neural network, the input data of the recurrent neural network is the motion state of the mechanical arm, the output data of the recurrent neural network is torque for controlling the mechanical arm to generate the motion state, and the motion state comprises a motion position, a motion speed and a motion acceleration;
The data input module is used for inputting the input data into the recurrent neural network according to the motion time of the mechanical arm; inputting data output from the input layer to an outer layer of the hidden layer;
the outer-layer memory unit data acquisition module is used for the outer forgetting gate to obtain the outer-layer memory unit data at the current moment from the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and to input the outer-layer memory unit data at the current moment to the inner layer;
the inner-layer memory unit data acquisition module is used for the inner forgetting gate to obtain the inner-layer memory unit data at the current moment from the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment;
the outer-layer memory unit data updating module is used for the inner reset gate to update the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment;
the output module is used to obtain the output data of the hidden layer from the updated outer-layer memory unit data at the current moment through the skip connection at the outer reset gate; the output data of the hidden layer is output after a linear transformation in the output layer;
the inverse dynamics model acquisition module is used for comparing the output data of the hidden layer with the collected real output data using a mean-square-error training formula to obtain a comparison error, and for adjusting the parameters of the linear transformation; when the comparison error is smaller than a set threshold, the inverse dynamics model of the mechanical arm is obtained.
Optionally, the calculation formula of the outer-layer memory unit data at the current moment is: $c_t = f_t \odot c_{t-1} + (1 - f_t) \odot X_t$, where $c_t$ denotes the outer-layer memory unit data at the current moment, $c_{t-1}$ the outer-layer memory unit data at the previous moment, and $X_t = W x_t$ the weighted data input to the outer layer at the current moment, with $W$ the outer-layer weight; $f_t$ denotes the proportion of the data input to the outer layer at the current moment that is applied to the outer forgetting gate;
$f_t = \sigma(W_f x_t + b_f)$, where $x_t$ denotes the data input to the outer layer at the current moment, $W_f$ the outer forgetting gate weight, $b_f$ the outer forgetting gate bias, and $\sigma$ the sigmoid activation function.
Optionally, the calculation formula of the inner-layer memory unit data at the current moment is:
$\hat{c}_t = \hat{f}_t \odot \hat{c}_{t-1} + (1 - \hat{f}_t) \odot \hat{X}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{c}_{t-1}$ the inner-layer memory unit data at the previous moment, $\hat{X}_t = \hat{W} \hat{x}_t$ the weighted data input to the inner layer at the current moment, $\hat{x}_t = c_t$ the data input to the inner layer at the current moment (the outer-layer memory unit data), $\hat{W}$ the inner-layer weight, and $\hat{f}_t$ the proportion of the data input to the inner layer at the current moment that is applied to the inner forgetting gate;
$\hat{f}_t = \sigma(\hat{W}_f \hat{x}_t + \hat{b}_f)$, where $\hat{W}_f$ denotes the inner forgetting gate weight and $\hat{b}_f$ the inner forgetting gate bias.
Optionally, the update formula of the outer-layer memory unit data at the current moment is:
$c_t = \hat{r}_t \odot g(\hat{c}_t) + (1 - \hat{r}_t) \odot \hat{x}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{r}_t$ the proportion of data output from the inner reset gate at the current moment, and $g$ the tanh activation function;
$\hat{r}_t = \sigma(\hat{W}_r \hat{x}_t + \hat{b}_r)$, where $\hat{W}_r$ denotes the inner reset gate weight and $\hat{b}_r$ the inner reset gate bias.
Optionally, the output formula of the hidden layer is: $h_t = r_t \odot g(c_t) + (1 - r_t) \odot x_t$, where $h_t$ denotes the output data of the hidden layer at the current moment, $r_t$ the proportion of data output from the outer reset gate at the current moment, and $g$ the tanh activation function;
$r_t = \sigma(W_r x_t + b_r)$, where $W_r$ denotes the outer reset gate weight and $b_r$ the outer reset gate bias.
According to the content of the invention provided above, the invention achieves the following technical effects:
The invention discloses a modeling method and system for a robot inverse dynamics model. The method comprises: building a recurrent neural network comprising an input layer, a hidden layer and an output layer, wherein the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer reset gate, and the inner layer comprises an inner forgetting gate and an inner reset gate; and training an inverse dynamics model of the mechanical arm with the recurrent neural network. The input data of the recurrent neural network is the motion state of the mechanical arm (motion position, motion velocity and motion acceleration), and the output data is the torque that drives the mechanical arm to produce that motion state. The outer forgetting gate obtains the outer-layer memory unit data at the current moment from the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and inputs the outer-layer memory unit data at the current moment to the inner layer; the inner forgetting gate obtains the inner-layer memory unit data at the current moment from the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment; and the inner reset gate updates the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment. As a result, time-series memory information is retained for a longer time and the accuracy of the data output by the inverse dynamics model is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a schematic flow chart of a modeling method of an inverse dynamics model of a robot according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a modeling system of an inverse dynamics model of a robot according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a recurrent neural network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a hidden layer in a recurrent neural network according to an embodiment of the present invention;
FIG. 5 is a comparison graph of the output torque of joint one predicted by the inverse dynamics model and the actual torque, according to an embodiment of the present invention;
FIG. 6 is a comparison graph of the output torque of joint two predicted by the inverse dynamics model and the actual torque, according to an embodiment of the present invention;
FIG. 7 is a comparison graph of the output torque of joint three predicted by the inverse dynamics model and the actual torque, according to an embodiment of the present invention;
FIG. 8 is a comparison graph of the output torque of joint four predicted by the inverse dynamics model and the actual torque, according to an embodiment of the present invention;
FIG. 9 is a comparison graph of the output torque of joint five predicted by the inverse dynamics model and the actual torque, according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The invention aims to provide a modeling method and a system of a robot inverse dynamics model, which can retain longer-time memory information so as to improve the prediction precision of the inverse dynamics model.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Fig. 1 is a schematic flow chart of a modeling method of an inverse robot dynamics model according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
step 101: and constructing a recurrent neural network, wherein the recurrent neural network comprises an input layer, a hidden layer and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting door and an outer reset door, and the inner layer comprises an inner forgetting door and an inner reset door.
Step 101 builds a cascade-memory recurrent neural network for optimizing the target learning function. The recurrent neural network is built by nesting basic Simple Recurrent Units (SRUs).
Step 102: determining input data and output data of the recurrent neural network, wherein the input data of the recurrent neural network is the motion state of the mechanical arm, the output data of the recurrent neural network is torque for controlling the mechanical arm to generate the motion state, and the motion state comprises a motion position, a motion speed and a motion acceleration.
Step 102 comprises collecting a data set of the robot's inverse dynamics, where the data set pairs the motion state of the mechanical arm with the torque that drives that motion, and is divided into input (motion state) and output (torque). The resulting data set contains a 15-dimensional input space (motion position, motion velocity and motion acceleration) and a 5-dimensional torque vector as the output, or unknown, and the collected data set is split into a training set and a test set.
The robot's mechanical arm used in the experiments has joints with 7 degrees of freedom, but only the first five joints are needed for the motion, so the state of the 5 joints lies in $\mathbb{R}^{3d \times 1}$, where $d = 5$ is the number of joints; since the input for each joint comprises three vectors (position, velocity, acceleration), the motion state input at each moment is a 15 x 1 vector. The experiment uses supervised training, so the input data include both sample data and labels. Each row of the input data has 20 columns: the first 15 columns represent the position, velocity and acceleration inputs for each of the 5 degrees of freedom, and the last 5 columns represent the torque label for each joint corresponding to the input.
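As an illustrative sketch (not part of the patent text), the 20-column row layout described above can be split into motion-state inputs and torque labels as follows; the array contents and the per-joint column grouping are assumptions:

```python
import numpy as np

# Hypothetical batch of n rows, 20 columns each:
# columns 0-14 = motion state (position, velocity, acceleration
# for 5 joints), columns 15-19 = torque label per joint.
n = 8
rng = np.random.default_rng(0)
data = rng.standard_normal((n, 20))

X = data[:, :15]   # 15-dimensional motion-state input
Y = data[:, 15:]   # 5-dimensional torque label

# Group the state into 3 vectors (position, velocity, acceleration)
# of 5 joints each -- this grouping order is an assumption.
states = X.reshape(n, 3, 5)
```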
An inverse dynamics module consisting of the motion state of the mechanical arm and the corresponding torque is constructed:
$\tau = F(q, \dot{q}, \ddot{q})$
where $q$ is the motion position, $\dot{q}$ the motion velocity, $\ddot{q}$ the motion acceleration, $\tau$ the torque corresponding to the motion of the mechanical arm, and $F$ the inverse dynamics model. A target learning function of the input quantity $x$ is set according to the inverse dynamics model, expressing the model in the form $y = f(x)$, where the output $y$ corresponds to the torque $\tau$ of the mechanical arm, the input $x$ corresponds to the motion state of the mechanical arm, and $f$ is the target learning function. Step 102 thus determines the inverse dynamics model of the mechanical arm to be trained with the recurrent neural network.
The constructed recurrent neural network is shown in FIG. 3 and comprises an input layer 1, a hidden layer 2 and an output layer 3, where $q_t$ is the motion position at the current moment, $\dot{q}_t$ the motion velocity at the current moment, $\ddot{q}_t$ the motion acceleration at the current moment, and $\tau_t$ the torque corresponding to the mechanical arm's motion at the current moment. The data flow within the hidden layer of the recurrent neural network is shown in FIG. 4. The forgetting gate and the reset gate play an important role in the hidden layer, and the gating mechanisms of the forgetting and reset gates are the same on the inner and outer layers. The basic structure of the hidden layer is divided into an inner layer and an outer layer built from combined SRU units; unlike a traditional stacked network, the outer-layer memory is used as the inner-layer input and the inner-layer output updates the outer-layer memory, forming an outer-inner-outer structure.
Step 103: the input data are input into the recurrent neural network according to the motion time of the mechanical arm; data output from the input layer is input to an outer layer of the hidden layer.
In step 103, time refers to the time steps of the mechanical arm's motion; the outer layer screens the data in the hidden layer and, at the same time, preprocesses the data input at the current moment.
Step 104: the outer forgetting gate obtains the outer-layer memory unit data at the current moment from the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and inputs the outer-layer memory unit data at the current moment to the inner layer.
The calculation formula of the outer-layer memory unit data at the current moment in step 104 is: $c_t = f_t \odot c_{t-1} + (1 - f_t) \odot X_t$, where $c_t$ denotes the outer-layer memory unit data at the current moment, $c_{t-1}$ the outer-layer memory unit data at the previous moment, and $X_t = W x_t$ the weighted data input to the outer layer at the current moment, with $W$ the outer-layer weight; $f_t$ denotes the proportion of the data input to the outer layer at the current moment that is applied to the outer forgetting gate; $t$ denotes the current moment and $t-1$ the previous moment. $f_t = \sigma(W_f x_t + b_f)$, where $x_t$ denotes the data input to the outer layer at the current moment, $W_f$ the outer forgetting gate weight, $b_f$ the outer forgetting gate bias, and $\sigma$ the sigmoid activation function.
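The outer-layer update of step 104 can be sketched in NumPy as follows. This is a minimal illustration, not the patented implementation; the dimensions and random weights are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def outer_step(x_t, c_prev, W, W_f, b_f):
    """Outer forgetting gate: c_t = f_t * c_{t-1} + (1 - f_t) * (W x_t)."""
    f_t = sigmoid(W_f @ x_t + b_f)                 # gate proportion, in (0, 1)
    c_t = f_t * c_prev + (1.0 - f_t) * (W @ x_t)   # blend old memory and new input
    return c_t, f_t

rng = np.random.default_rng(1)
x_t = rng.standard_normal(15)     # 15-dim motion-state input
c_prev = np.zeros(8)              # previous outer memory (8-dim, assumed)
W = rng.standard_normal((8, 15))
W_f = rng.standard_normal((8, 15))
b_f = np.zeros(8)
c_t, f_t = outer_step(x_t, c_prev, W, W_f, b_f)
```

With a zero previous memory, the new memory reduces to the gated input, which makes the gating easy to check.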
Step 105: the inner forgetting gate obtains the inner-layer memory unit data at the current moment from the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment.
In step 105, the calculation formula of the inner-layer memory unit data at the current moment is:
$\hat{c}_t = \hat{f}_t \odot \hat{c}_{t-1} + (1 - \hat{f}_t) \odot \hat{X}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{c}_{t-1}$ the inner-layer memory unit data at the previous moment, $\hat{X}_t = \hat{W} \hat{x}_t$ the weighted data input to the inner layer at the current moment, $\hat{x}_t = c_t$ the data input to the inner layer at the current moment (the outer-layer memory unit data), $\hat{W}$ the inner-layer weight, and $\hat{f}_t$ the proportion of the data input to the inner layer at the current moment that is applied to the inner forgetting gate;
$\hat{f}_t = \sigma(\hat{W}_f \hat{x}_t + \hat{b}_f)$, where $\hat{W}_f$ denotes the inner forgetting gate weight and $\hat{b}_f$ the inner forgetting gate bias.
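A corresponding sketch of the inner-layer update in step 105, where the inner layer receives the outer memory $c_t$ as its input; dimensions and weights are assumed for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inner_step(c_t, c_hat_prev, W_hat, W_hat_f, b_hat_f):
    """Inner forgetting gate operating on the outer memory c_t:
    c_hat_t = f_hat * c_hat_{t-1} + (1 - f_hat) * (W_hat c_t)."""
    f_hat = sigmoid(W_hat_f @ c_t + b_hat_f)
    c_hat_t = f_hat * c_hat_prev + (1.0 - f_hat) * (W_hat @ c_t)
    return c_hat_t, f_hat

rng = np.random.default_rng(2)
c_t = rng.standard_normal(8)      # outer memory fed to the inner layer
c_hat_prev = np.zeros(8)          # previous inner memory
W_hat = rng.standard_normal((8, 8))
W_hat_f = rng.standard_normal((8, 8))
b_hat_f = np.zeros(8)
c_hat_t, f_hat = inner_step(c_t, c_hat_prev, W_hat, W_hat_f, b_hat_f)
```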
Step 106: and the inner layer reset gate updates the outer layer memory unit data at the current moment according to the inner layer memory unit data at the current moment.
In step 106, the update formula of the outer-layer memory unit data at the current moment is:
$c_t = \hat{r}_t \odot g(\hat{c}_t) + (1 - \hat{r}_t) \odot \hat{x}_t$
where $\hat{c}_t$ denotes the inner-layer memory unit data at the current moment, $\hat{r}_t$ the proportion of data output from the inner reset gate at the current moment, and $g$ the tanh activation function;
$\hat{r}_t = \sigma(\hat{W}_r \hat{x}_t + \hat{b}_r)$, where $\hat{W}_r$ denotes the inner reset gate weight and $\hat{b}_r$ the inner reset gate bias.
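The inner reset gate's update of the outer memory in step 106 can be sketched as follows; this is a hedged illustration in which the weight shapes are assumptions. Each updated element is a gated blend of the squashed inner memory and the old outer memory:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inner_reset_update(c_t, c_hat_t, W_hat_r, b_hat_r):
    """Update the outer memory from the inner memory:
    c_t <- r_hat * tanh(c_hat_t) + (1 - r_hat) * c_t."""
    r_hat = sigmoid(W_hat_r @ c_t + b_hat_r)
    return r_hat * np.tanh(c_hat_t) + (1.0 - r_hat) * c_t

rng = np.random.default_rng(3)
c_t = rng.standard_normal(8)       # outer memory before the update
c_hat_t = rng.standard_normal(8)   # inner memory at the current moment
W_hat_r = rng.standard_normal((8, 8))
b_hat_r = np.zeros(8)
c_t_new = inner_reset_update(c_t, c_hat_t, W_hat_r, b_hat_r)
```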
Step 107: the updated outer-layer memory unit data at the current moment passes through the skip connection at the outer reset gate to obtain the output data of the hidden layer; the output data of the hidden layer is output after a linear transformation in the output layer.
The output formula of the hidden layer in step 107 is: $h_t = r_t \odot g(c_t) + (1 - r_t) \odot x_t$, where $h_t$ denotes the output data of the hidden layer at the current moment, $r_t$ the proportion of data output from the outer reset gate at the current moment, and $g$ the tanh activation function. Under the effect of the skip-connection term $(1 - r_t) \odot x_t$, the prediction accuracy of the inverse dynamics model is improved. $r_t = \sigma(W_r x_t + b_r)$, where $W_r$ denotes the outer reset gate weight and $b_r$ the outer reset gate bias.
The formula by which the output data of the hidden layer is linearly transformed in the output layer to produce the predicted joint torque is: $y_t = W_h h_t + b_h$, where $W_h$ denotes the output weight and $b_h$ the output bias.
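The skip-connection output of step 107 and the output layer's linear transformation can be sketched together. Note that the skip connection requires the input and memory dimensions to match, which is an assumption of this sketch, as are the random weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_output(x_t, c_t, W_r, b_r):
    """h_t = r_t * tanh(c_t) + (1 - r_t) * x_t (skip connection)."""
    r_t = sigmoid(W_r @ x_t + b_r)
    return r_t * np.tanh(c_t) + (1.0 - r_t) * x_t

rng = np.random.default_rng(4)
x_t = rng.standard_normal(8)      # input (same dim as memory, assumed)
c_t = rng.standard_normal(8)      # updated outer memory
W_r = rng.standard_normal((8, 8))
b_r = np.zeros(8)
h_t = hidden_output(x_t, c_t, W_r, b_r)

# Output layer: linear transformation to the 5 joint torques.
W_h = rng.standard_normal((5, 8))
b_h = np.zeros(5)
y_t = W_h @ h_t + b_h             # predicted torque vector
```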
Step 108: the output data of the hidden layer is compared with the collected real output data using a mean-square-error training formula to obtain a comparison error; the parameters of the linear transformation are adjusted, and the inverse dynamics model of the mechanical arm is obtained when the comparison error is smaller than a set threshold.
In step 108, the mean square error training formula is: MSE = (1/(d·n)) Σj=1..d Σt=1..n (ŷt[j] - yt[j])², where d represents the number of joints, n represents the number of data samples (the total number of samples in the input data set), j indexes the joints starting from 1, t indexes the time steps starting from 1, ŷt[j] represents the collected real result, i.e. the real input torque, and yt[j] represents the torque predicted by the dynamics model; the final test result is obtained from this error.
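As a check on the training formula, the mean square error over d joints and n samples can be computed directly; the helper name `mse_loss` and the (n, d) array layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean square error over d joints and n time steps.

    y_pred, y_true : arrays of shape (n, d) holding predicted and
    measured joint torques. Implements
    MSE = 1/(d*n) * sum_j sum_t (y_hat_t[j] - y_t[j])**2.
    """
    n, d = y_true.shape
    return float(np.sum((y_true - y_pred) ** 2) / (d * n))
```

Identical prediction and measurement give an MSE of exactly zero, which is why training drives this quantity below the set threshold.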
The MSE is optimized using the Adam optimizer, which adjusts the weight and bias in the formula yt = Wh ht + bh so as to reduce the MSE; after 100 training iterations, the obtained optimal model is applied to the inverse dynamics and the data of the test set are checked. Figs. 5-9 compare the output torque predicted by the inverse dynamics model with the real torque for the first to fifth joints, and it can be seen from Figs. 5-9 that the predicted results match the real results. Adam is neither an acronym nor a person's name; its name is derived from adaptive moment estimation.
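The adaptive moment estimation update just described can be sketched in a few lines; `adam_step` is a hypothetical helper using the standard Adam hyperparameters, not code from the patent, and the toy objective f(θ) = θ² stands in for the MSE.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam (adaptive moment estimation) update.

    Keeps exponential moving averages of the gradient (m, first moment)
    and its square (v, second moment), with bias correction for step t.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimizing f(theta) = theta**2 for 100 iterations, mirroring the
# 100 training cycles mentioned above:
theta, m, v = np.array(3.0), 0.0, 0.0
for t in range(1, 101):
    grad = 2 * theta                   # df/dtheta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
```

The per-parameter step scaling by the second-moment estimate is what makes Adam robust to the very different gradient magnitudes of the gate weights and biases.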
The invention relates to a robot inverse dynamics modeling method based on a cascaded memory neural network. The mutual access between the inner and outer memory units nested in the network preserves memory information over longer time sequences, improves the precision of torque prediction, and accurately models a complex robot system.
The invention can process input data in parallel. By nesting SRU recurrent units, it alleviates the serial bottleneck of traditional recurrent neural networks, in which the current input must wait for the output of the previous time step. This gives it a large advantage in processing time on sequence-related problems: the training time for the input signals is greatly reduced, training on the collected samples completes within a few seconds, and efficiency is improved together with the practical precision of the model.
Fig. 2 is a schematic structural diagram of a modeling system of an inverse robot dynamics model according to an embodiment of the present invention, and as shown in fig. 2, the system includes:
the recurrent neural network building module 201 is used for building a recurrent neural network; the recurrent neural network comprises an input layer, a hidden layer, and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer-layer forgetting gate and an outer-layer reset gate, and the inner layer comprises an inner-layer forgetting gate and an inner-layer reset gate.
The input and output data determination module 202 is configured to determine input data and output data of the recurrent neural network, where the input data of the recurrent neural network is a motion state of the mechanical arm, and the output data of the recurrent neural network is a torque for controlling the mechanical arm to generate the motion state, where the motion state includes a motion position, a motion speed, and a motion acceleration.
The input and output data determination module 202 further includes collecting a data set of the inverse dynamics of the robot, the data set recording the motion state of the mechanical arm and the torque corresponding to the motion of the mechanical arm; the data set is divided into an input (motion state) and an output (torque). The resulting data set contains a 15-dimensional input space, i.e., motion position, motion velocity, and motion acceleration, and a 5-dimensional torque vector as the output or unknown; the collected data set is divided into a training set and a test set.
The mechanical arm of the robot used in the experiments has joints with 7 degrees of freedom, but only the first five joints are needed for the motion. The state space of the 5 joints has dimension R^(3d×1), where d = 5 represents the number of joints and each joint contributes three vectors (position, velocity, acceleration), so the motion state input at each moment is data of dimension 15 × 1. The experiment uses a supervised training approach, so the input data includes both sample data and labels. Each row of input data has 20 columns: the first 15 columns represent the position, velocity, and acceleration inputs for each of the 5 degrees of freedom, and the last 5 columns represent the torque of each joint, i.e. the label corresponding to the input.
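The 20-column row layout just described (15 state columns followed by 5 torque labels) can be separated into samples and labels as follows; the helper names `split_samples` and `train_test_split`, and the 80/20 ratio, are illustrative assumptions, since the patent does not give the split ratio.

```python
import numpy as np

def split_samples(rows):
    """Split 20-column rows into 15-dim motion-state inputs and 5-dim torque labels."""
    rows = np.asarray(rows)
    x = rows[:, :15]   # position/velocity/acceleration for 5 joints
    y = rows[:, 15:]   # measured torque for each of the 5 joints
    return x, y

def train_test_split(rows, test_fraction=0.2):
    """Hold out the last fraction of the collected data set as the test set."""
    rows = np.asarray(rows)
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]
```

Keeping the label columns beside the state columns in one array matches the supervised setup described above: each row is one time step's sample plus its target torque.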
An inverse dynamics model consisting of the motion state of the mechanical arm and the corresponding torque is constructed: F(q, q̇, q̈) = τ, where q is the motion position, q̇ is the motion velocity, q̈ is the motion acceleration, τ is the torque corresponding to the motion of the mechanical arm, and F represents the inverse dynamics model. A target learning function y of the input quantity x is set according to the inverse dynamics model, and the inverse dynamics model is expressed in the form y = f(x), where the output y corresponds to the torque τ of the mechanical arm, the input x corresponds to the motion state of the mechanical arm, and f is the target learning function. Step 102 determines that the inverse dynamics model of the mechanical arm is trained using the recurrent neural network.
The data input module 203 is used for inputting the input data into the recurrent neural network according to the motion time of the mechanical arm; data output from the input layer is input to an outer layer of the hidden layer.
The time in the data input module 203 is the time step of the movement of the mechanical arm, the data in the hidden layer is firstly screened through the outer layer, and meanwhile, the data input at the current moment is preprocessed.
The outer-layer memory unit data acquisition module 204 is configured to obtain the outer-layer memory unit data of the current time according to the data input to the outer layer at the current time and the outer-layer memory unit data of the previous time, and input the outer-layer memory unit data of the current time to the inner layer.
The outer-layer memory unit data obtaining module 204 further includes the calculation formula of the outer-layer memory unit data at the current time: ct = ft ⊙ ct-1 + (1 - ft) ⊙ Xt, where ct represents the outer-layer memory unit data at the current time, ct-1 represents the outer-layer memory unit data at the previous time, Xt represents the weighted data input to the outer layer at the current time, Xt = W xt, W represents the outer-layer weighting, and ft represents the proportion of the data input to the outer layer at the current time that is applied to the outer-layer forgetting gate.
ft = σ(Wf xt + bf), where xt represents the data input to the outer layer at the current time, Wf represents the outer-layer forgetting gate weight, bf represents the outer-layer forgetting gate bias, and σ represents the sigmoid activation function.
The inner-layer memory unit data obtaining module 205 is configured to obtain the inner-layer memory unit data of the current moment according to the outer-layer memory unit data input at the current moment and the inner-layer memory unit data of the previous moment by the inner-layer forgetting gate.
The inner-layer memory unit data obtaining module 205 further includes the calculation formula of the inner-layer memory unit data at the current time: c̃t = f̃t ⊙ c̃t-1 + (1 - f̃t) ⊙ X̃t, where c̃t represents the inner-layer memory unit data at the current time, c̃t-1 represents the inner-layer memory unit data at the previous time, X̃t represents the weighted data input to the inner layer at the current time, X̃t = W̃ ct, where ct represents the data input to the inner layer at the current time, W̃ represents the inner-layer weighting, and f̃t represents the proportion of the data input to the inner layer at the current time that is applied to the inner-layer forgetting gate.
f̃t = σ(W̃f ct + b̃f), where W̃f represents the inner-layer forgetting gate weight and b̃f represents the inner-layer forgetting gate bias.
An outer-layer memory cell data updating module 206, configured to update, by the inner-layer reset gate, the outer-layer memory cell data at the current time according to the inner-layer memory cell data at the current time.
The outer-layer memory unit data updating module 206 further includes the update formula of the outer-layer memory unit data at the current time: ct ← r̃t ⊙ g(c̃t) + (1 - r̃t) ⊙ ct, where c̃t represents the inner-layer memory unit data at the current time, r̃t represents the proportion of data output from the inner-layer reset gate at the current moment, and g is the tanh activation function; r̃t = σ(W̃r ct + b̃r), where W̃r represents the inner-layer reset gate weight and b̃r represents the inner-layer reset gate bias.
The output module 207 is configured to obtain output data of the hidden layer at the outer layer reset gate through a jump connection function by using the updated outer layer memory unit data at the current time; and the output data of the hidden layer is output after the output layer is subjected to linear transformation.
The output module 207 further includes the output formula of the hidden layer: ht = rt ⊙ g(ct) + (1 - rt) ⊙ xt, where ht represents the output data of the hidden layer at the current time, rt represents the proportion of data output from the outer-layer reset gate at the current moment, and g is the tanh activation function; rt = σ(Wr xt + br), where Wr represents the outer-layer reset gate weight and br represents the outer-layer reset gate bias.
The formula by which the output data of the hidden layer is linearly transformed in the output layer to output the predicted joint torque is: yt = Wh ht + bh, where Wh represents the output weight and bh represents the output bias value.
The inverse dynamics model obtaining module 208 compares the output data of the hidden layer with the collected real output data by using a mean square error training formula to obtain a comparison error; and adjusting parameters in the linear transformation, and when the contrast error is smaller than a set threshold value, obtaining an inverse dynamics model of the mechanical arm.
In the inverse dynamics model obtaining module 208, the mean square error training formula is: MSE = (1/(d·n)) Σj=1..d Σt=1..n (ŷt[j] - yt[j])², where MSE represents the mean square error, d represents the number of joints, n represents the number of data samples (the total number of samples in the input data set), j indexes the joints starting from 1, t indexes the time steps starting from 1, ŷt[j] represents the collected real result, i.e. the real input torque, and yt[j] represents the torque predicted by the dynamics model; the final test result is obtained from this error.
The MSE is optimized using the Adam optimizer, which adjusts the weight and bias in the formula yt = Wh ht + bh so as to reduce the MSE; after 100 training iterations, the obtained optimal model is applied to the inverse dynamics to check the data of the test set.
A recurrent neural network is built and used to train the robot inverse dynamics model. The recurrent neural network comprises an input layer, a hidden layer, and an output layer; the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer-layer forgetting gate and an outer-layer reset gate, and the inner layer comprises an inner-layer forgetting gate and an inner-layer reset gate. The inverse dynamics model of the mechanical arm is trained using the recurrent neural network. The input data of the recurrent neural network is the motion state of the mechanical arm, and the output data is the torque controlling the mechanical arm to produce that motion state; the motion state comprises the motion position, motion velocity, and motion acceleration. The outer-layer forgetting gate in the recurrent neural network obtains the outer-layer memory unit data at the current moment from the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and inputs the outer-layer memory unit data at the current moment to the inner layer; the inner-layer forgetting gate obtains the inner-layer memory unit data at the current moment from the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment. The inner-layer reset gate updates the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment, so that the memory information of the time sequence is retained for longer, the precision of the data output by the inverse dynamics model is improved, and the predicted torque output by the inverse dynamics model is more accurate.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the foregoing, the description is not to be taken in a limiting sense.

Claims (4)

1. A method of modeling an inverse robotic dynamics model, the method comprising:
building a recurrent neural network, wherein the recurrent neural network comprises an input layer, a hidden layer and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer resetting gate, and the inner layer comprises an inner forgetting gate and an inner resetting gate;
Determining input data and output data of the recurrent neural network, wherein the input data of the recurrent neural network is the motion state of the mechanical arm, the output data of the recurrent neural network is torque for controlling the mechanical arm to generate the motion state, and the motion state comprises a motion position, a motion speed and a motion acceleration;
the input data are input into the recurrent neural network according to the movement time of the mechanical arm; inputting data output from the input layer to an outer layer of the hidden layer;
the outer-layer forgetting gate obtains the outer-layer memory unit data at the current moment according to the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and inputs the outer-layer memory unit data at the current moment to the inner layer;
the inner-layer forgetting gate obtains the inner-layer memory unit data at the current moment according to the outer-layer memory unit data input at the current moment and the inner-layer memory unit data at the previous moment;
the inner-layer reset gate updates the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment;
the updated outer-layer memory unit data at the current moment passes through a jump connection at the outer-layer reset gate to obtain the output data of the hidden layer; the output data of the hidden layer is output after linear transformation in the output layer;
Comparing the output data of the hidden layer with the collected real output data by using a mean square error training formula to obtain a comparison error; adjusting parameters in the linear transformation, and when the contrast error is smaller than a set threshold value, obtaining an inverse dynamics model of the mechanical arm;
the calculation formula of the outer-layer memory unit data at the current moment is: ct = ft ⊙ ct-1 + (1 - ft) ⊙ Xt, where ct represents the outer-layer memory unit data at the current moment, ct-1 represents the outer-layer memory unit data at the previous moment, Xt represents the weighted data input to the outer layer at the current moment, Xt = W xt, W represents the outer-layer weighting, ft represents the proportion of the data input to the outer layer at the current moment that is applied to the outer-layer forgetting gate, t represents the current moment, and t-1 represents the previous moment;
ft = σ(Wf xt + bf), where xt represents the data input to the outer layer at the current moment, Wf represents the outer-layer forgetting gate weight, bf represents the outer-layer forgetting gate bias, and σ represents the sigmoid activation function;
the calculation formula of the inner-layer memory unit data at the current moment is: c̃t = f̃t ⊙ c̃t-1 + (1 - f̃t) ⊙ X̃t, where c̃t represents the inner-layer memory unit data at the current moment, c̃t-1 represents the inner-layer memory unit data at the previous moment, X̃t represents the weighted data input to the inner layer at the current moment, X̃t = W̃ ct, where ct represents the data input to the inner layer at the current moment, W̃ represents the inner-layer weighting, and f̃t represents the proportion of the data input to the inner layer at the current moment that is applied to the inner-layer forgetting gate;
f̃t = σ(W̃f ct + b̃f), where W̃f represents the inner-layer forgetting gate weight and b̃f represents the inner-layer forgetting gate bias;
the update formula of the outer-layer memory unit data at the current moment is: ct ← r̃t ⊙ g(c̃t) + (1 - r̃t) ⊙ ct, where c̃t represents the inner-layer memory unit data at the current moment, r̃t represents the proportion of data output from the inner-layer reset gate at the current moment, and g is the tanh activation function; r̃t = σ(W̃r ct + b̃r), where W̃r represents the inner-layer reset gate weight and b̃r represents the inner-layer reset gate bias.
2. The modeling method of the robot inverse dynamics model according to claim 1, wherein the output formula of the hidden layer is: ht = rt ⊙ g(ct) + (1 - rt) ⊙ xt, where ht represents the output data of the hidden layer at the current moment, rt represents the proportion of data output from the outer-layer reset gate at the current moment, and g is the tanh activation function;
rt = σ(Wr xt + br), where Wr represents the outer-layer reset gate weight and br represents the outer-layer reset gate bias.
3. A modeling system for an inverse robotic dynamics model, the system comprising:
the recurrent neural network building module is used for building a recurrent neural network, the recurrent neural network comprises an input layer, a hidden layer and an output layer, the hidden layer comprises an outer layer and an inner layer, the outer layer comprises an outer forgetting gate and an outer resetting gate, and the inner layer comprises an inner forgetting gate and an inner resetting gate;
The input and output data determining module is used for determining input data and output data of the recurrent neural network, the input data of the recurrent neural network is the motion state of the mechanical arm, the output data of the recurrent neural network is torque for controlling the mechanical arm to generate the motion state, and the motion state comprises a motion position, a motion speed and a motion acceleration;
the data input module is used for inputting the input data into the recurrent neural network according to the motion time of the mechanical arm; inputting data output from the input layer to an outer layer of the hidden layer;
the outer-layer memory unit data acquisition module is used for the outer-layer forgetting gate to acquire the outer-layer memory unit data at the current moment according to the data input to the outer layer at the current moment and the outer-layer memory unit data at the previous moment, and to input the outer-layer memory unit data at the current moment to the inner layer;
the inner-layer memory unit data acquisition module is used for acquiring the inner-layer memory unit data of the current moment by the inner-layer forgetting gate according to the outer-layer memory unit data input at the current moment and the inner-layer memory unit data of the previous moment;
the outer-layer memory unit data updating module is used for the inner-layer reset gate to update the outer-layer memory unit data at the current moment according to the inner-layer memory unit data at the current moment;
The output module is used for acquiring the output data of the hidden layer at the outer layer reset gate through the effect of jump connection by the updated outer layer memory unit data at the current moment; the output data of the hidden layer is output after linear transformation in the output layer;
the inverse dynamics model acquisition module is used for comparing the output data of the hidden layer with the collected real output data by using a mean square error training formula to obtain a comparison error; adjusting parameters in the linear transformation, and when the contrast error is smaller than a set threshold value, obtaining an inverse dynamics model of the mechanical arm;
the calculation formula of the outer-layer memory unit data at the current moment is: ct = ft ⊙ ct-1 + (1 - ft) ⊙ Xt, where ct represents the outer-layer memory unit data at the current moment, ct-1 represents the outer-layer memory unit data at the previous moment, Xt represents the weighted data input to the outer layer at the current moment, Xt = W xt, W represents the outer-layer weighting, ft represents the proportion of the data input to the outer layer at the current moment that is applied to the outer-layer forgetting gate, t represents the current moment, and t-1 represents the previous moment;
ft = σ(Wf xt + bf), where xt represents the data input to the outer layer at the current moment, Wf represents the outer-layer forgetting gate weight, bf represents the outer-layer forgetting gate bias, and σ represents the sigmoid activation function;
the calculation formula of the inner-layer memory unit data at the current moment is: c̃t = f̃t ⊙ c̃t-1 + (1 - f̃t) ⊙ X̃t, where c̃t represents the inner-layer memory unit data at the current moment, c̃t-1 represents the inner-layer memory unit data at the previous moment, X̃t represents the weighted data input to the inner layer at the current moment, X̃t = W̃ ct, where ct represents the data input to the inner layer at the current moment, W̃ represents the inner-layer weighting, and f̃t represents the proportion of the data input to the inner layer at the current moment that is applied to the inner-layer forgetting gate;
f̃t = σ(W̃f ct + b̃f), where W̃f represents the inner-layer forgetting gate weight and b̃f represents the inner-layer forgetting gate bias;
the update formula of the outer-layer memory unit data at the current moment is: ct ← r̃t ⊙ g(c̃t) + (1 - r̃t) ⊙ ct, where c̃t represents the inner-layer memory unit data at the current moment, r̃t represents the proportion of data output from the inner-layer reset gate at the current moment, and g is the tanh activation function; r̃t = σ(W̃r ct + b̃r), where W̃r represents the inner-layer reset gate weight and b̃r represents the inner-layer reset gate bias.
4. The modeling system of the robot inverse dynamics model according to claim 3, wherein the output formula of the hidden layer is: ht = rt ⊙ g(ct) + (1 - rt) ⊙ xt, where ht represents the output data of the hidden layer at the current moment, rt represents the proportion of data output from the outer-layer reset gate at the current moment, and g is the tanh activation function;
rt = σ(Wr xt + br), where Wr represents the outer-layer reset gate weight and br represents the outer-layer reset gate bias.
CN201910948416.9A 2019-10-08 2019-10-08 Modeling method and system for inverse dynamics model of robot Active CN110705105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910948416.9A CN110705105B (en) 2019-10-08 2019-10-08 Modeling method and system for inverse dynamics model of robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910948416.9A CN110705105B (en) 2019-10-08 2019-10-08 Modeling method and system for inverse dynamics model of robot

Publications (2)

Publication Number Publication Date
CN110705105A CN110705105A (en) 2020-01-17
CN110705105B true CN110705105B (en) 2022-06-10

Family

ID=69196670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910948416.9A Active CN110705105B (en) 2019-10-08 2019-10-08 Modeling method and system for inverse dynamics model of robot

Country Status (1)

Country Link
CN (1) CN110705105B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428317B (en) * 2020-04-06 2023-06-09 宁波智诚祥科技发展有限公司 Joint friction torque compensation method based on 5G and cyclic neural network
CN112247992B (en) * 2020-11-02 2021-07-23 中国科学院深圳先进技术研究院 Robot feedforward torque compensation method
CN113156320B (en) * 2021-03-12 2023-05-30 山东大学 Lithium ion battery SOC estimation method and system based on deep learning
CN113561185B (en) * 2021-09-23 2022-01-11 中国科学院自动化研究所 Robot control method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190138887A1 (en) * 2017-11-01 2019-05-09 Board Of Trustees Of Michigan State University Systems, methods, and media for gated recurrent neural networks with reduced parameter gating signals and/or memory-cell units

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dynamics modeling and neural-network identification simulation of a lower-limb exoskeleton rehabilitation robot; Chen Guiliang et al.; Machinery Design & Manufacture; 2013-11-30 (No. 11); Sections 2-3 *

Also Published As

Publication number Publication date
CN110705105A (en) 2020-01-17

Similar Documents

Publication Publication Date Title
CN110705105B (en) Modeling method and system for inverse dynamics model of robot
CN108621159B (en) Robot dynamics modeling method based on deep learning
EP3926582B1 (en) Model generating apparatus, method, and program, and prediction apparatus
CN108596327B (en) Seismic velocity spectrum artificial intelligence picking method based on deep learning
CN109740742A (en) A kind of method for tracking target based on LSTM neural network
CN110728698B (en) Multi-target tracking system based on composite cyclic neural network system
CN106548475A (en) A kind of Forecasting Methodology of the target trajectory that spins suitable for space non-cooperative
CN111310965A (en) Aircraft track prediction method based on LSTM network
WO2020133721A1 (en) Method for status estimation of signalized intersection based on non-parametric bayesian framework
CN113408392B (en) Flight path completion method based on Kalman filtering and neural network
CN115688288B (en) Aircraft pneumatic parameter identification method and device, computer equipment and storage medium
CN113447021A (en) MEMS inertial navigation system positioning enhancement method based on LSTM neural network model
CN109509207B (en) Method for seamless tracking of point target and extended target
CN109857127B (en) Method and device for calculating training neural network model and aircraft attitude
CN111798494A (en) Maneuvering target robust tracking method under generalized correlation entropy criterion
Chen et al. Learning trajectories for visual-inertial system calibration via model-based heuristic deep reinforcement learning
CN112561203B (en) Method and system for realizing water level early warning based on clustering and GRU
CN115050095A (en) Human body posture prediction method based on Gaussian process regression and progressive filtering
CN114942480A (en) Ocean station wind speed forecasting method based on information perception attention dynamic cooperative network
CN113987961A (en) Robot jump landing detection method
CN113821974A (en) Engine residual life prediction method based on multiple failure modes
CN113485273B (en) Dynamic system time delay calculation method and system
CN115951325B (en) BiGRU-based multi-ship target tracking method, storage medium and product
CN113850366B (en) Method for predicting target motion based on LSTM
CN114800525B (en) Robot collision detection method, system, computer and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant