CN112388635B - Method, system and device for fusing sensing and space positioning of multiple sensors of robot - Google Patents

Method, system and device for fusing sensing and space positioning of multiple sensors of robot

Info

Publication number
CN112388635B
Authority
CN
China
Prior art keywords
sequence
fusion
robot
sensor
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011190385.4A
Other languages
Chinese (zh)
Other versions
CN112388635A
Inventor
李恩
罗明睿
杨国栋
梁自泽
谭民
郭锐
李勇
刘海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
State Grid Shandong Electric Power Co Ltd
Original Assignee
Institute of Automation of Chinese Academy of Science
State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science, State Grid Shandong Electric Power Co Ltd filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202011190385.4A priority Critical patent/CN112388635B/en
Publication of CN112388635A publication Critical patent/CN112388635A/en
Application granted granted Critical
Publication of CN112388635B publication Critical patent/CN112388635B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 Controls for manipulators
    • B25J 13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J 13/087 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/021 Optical sensing devices
    • B25J 19/023 Optical sensing devices including video camera means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of robot environment perception, and particularly relates to a method, a system and a device for robot multi-sensor fusion perception and spatial positioning, aiming to solve the problem of low accuracy and precision in robot perception and spatial positioning caused by combined adverse interference factors in the environment. The invention comprises the following steps: acquiring raw data from a depth camera, a MARG sensor and a joint encoder; correcting the acquired data using the calibration results of the sensors; enhancing and repairing the visual data acquired by the depth camera; performing pre-fusion processing on the visual data, the motion state data and the angular displacement data based on visual feature points, the quaternion attitude and the kinematics of the robot mechanism, respectively; and, with the help of an extended Kalman filter and fuzzy theory, completing the spatial fusion positioning of the robot in the environment using the pre-fused multi-sensor data. The invention realizes high-precision robot perception and spatial positioning under combined adverse interference factors.

Description

Method, system and device for fusing sensing and space positioning of multiple sensors of robot
Technical Field
The invention belongs to the technical field of robot environment perception, and particularly relates to a method, a system and a device for robot multi-sensor fusion perception and space positioning.
Background
A large number of unstructured, complex operation scenes exist in fields such as aeronautical manufacturing and electric power transmission. These scenes are mainly characterized by many obstacles, low illumination, weak contrast and strong reflection. In such scenes it is generally difficult to complete the work manually, so intelligent robots capable of obstacle avoidance, object detection and maintenance operations need to be developed.
At present, the most widely applied indoor environment sensing and spatial positioning schemes adopt machine vision. However, interference factors such as low illumination, weak contrast and strong reflection in complex environments make vision processing very difficult, and obstacle measurement and spatial environment detection in unstructured environments are hard to solve with a single sensing modality.
Disclosure of Invention
In order to solve the above problems in the prior art, namely the low accuracy and precision of robot sensing and spatial positioning caused by combined adverse interference factors in the environment, the invention provides a method for robot multi-sensor fusion sensing and spatial positioning, which comprises the following steps:
step S10, acquiring a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
step S20, calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder, and correcting the robot multi-sensor data sequence based on the calibration and calibration result;
step S30, enhancing and repairing the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
step S40, performing pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence; performing pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence; performing pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence;
and step S50, carrying out fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, and obtaining the pose sequence and the spatial coordinate sequence of the robot in the environment.
In some preferred embodiments, the sequence of visual data comprises a sequence of color maps and a sequence of depth maps acquired by a depth camera; the motion state data sequence comprises a triaxial acceleration sequence, an angular velocity sequence and a geomagnetic field intensity sequence which are acquired by a MARG sensor; the angular displacement data sequence comprises an angular displacement data sequence of the robot joint acquired by the joint encoder.
In some preferred embodiments, calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder comprises:
the calibration and calibration of the structured light depth camera comprise color camera calibration, infrared camera calibration, depth map drift correction and color map and depth map registration;
the calibration and calibration of the MARG sensor comprise acceleration zero drift calibration and magnetic field ellipsoid fitting calibration;
the calibration and calibration of the joint encoder comprises the registration of an electrical origin of the encoder and a mechanical origin of the joint rotating mechanism.
In some preferred embodiments, the pre-fusion process of the enhanced and repaired visual data sequence comprises:
step S411, extracting ORB two-dimensional characteristic points of a frame of color image in the enhanced and repaired visual data sequence, and extracting depth measurement values corresponding to the ORB two-dimensional characteristic points in a depth image registered with the current frame color image;
step S412, if the depth measurement value can be extracted, the ORB two-dimensional feature point is taken as an ORB three-dimensional feature point; otherwise, it is kept as an ORB two-dimensional feature point;
step S413, matching and tracking all feature points of the current frame with all feature points of a previous frame of the current frame to obtain feature point pairs;
step S414, if the feature points in a feature point pair are both ORB three-dimensional feature points, obtaining the pose transformation matrix between the two frames through the ICP (Iterative Closest Point) algorithm; otherwise, obtaining the pose transformation matrix between the two frames through the PnP (Perspective-n-Point) algorithm;
step S415, judging whether the tracking of the ORB three-dimensional feature points is lost or not, and if not, directly adding a camera pose transformation matrix obtained by tracking as a new key frame into the existing key frame sequence; if the key frame is lost, calling a pre-fusion measurement value of a joint encoder and a MARG sensor corresponding to the current frame as a camera pose transformation matrix of an initial key frame in the new key frame sequence;
step S416, performing pose graph optimization on the obtained key frame sequence to obtain a camera pose sequence to be optimized, and performing closed-loop correction on the camera pose sequence to be optimized to obtain a first pre-fusion pose sequence.
In some preferred embodiments, the pre-fusion process of the modified motion state data sequence includes:
step S421, acquiring a first state quantity and a first control quantity at the (k-1)-th moment; the first state quantity comprises the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system and the accumulated angular velocity drift values of the MARG sensor on the X, Y, Z axes; the first control quantity comprises the true angular velocity values about the X, Y, Z axes;
step S422, based on the first state quantity and the first control quantity at the (k-1)-th moment, obtaining the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment according to a first state transition function $f_1[x_1(k-1),u_1(k-1)]$;

step S423, based on the extended Kalman filter algorithm, obtaining the pre-fused first state quantity $x_1(k)$ at the k-th moment through the first observed quantity $z_1(k)$ and the first observation function $h_1[\hat{x}_1(k|k-1)]$ at the k-th moment, the first process covariance matrix $Q_1$, the first noise covariance matrix $R_1$ and the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment; the pre-fused first state quantities over the k moments form the pre-fusion attitude sequence; the first observed quantity $z_1(k)$ at the k-th moment is composed of the triaxial acceleration sequence $[a_{Ax}\ a_{Ay}\ a_{Az}]^T$ and the geomagnetic field intensity sequence $[h_{Mx}\ h_{My}\ h_{Mz}]^T$ acquired by the MARG sensor.
In some preferred embodiments, the first state transition function is:
$$f_1[x_1(k-1),u_1(k-1)]=\begin{bmatrix} q_0-\frac{T}{2}(q_1\omega_x+q_2\omega_y+q_3\omega_z)\\ q_1+\frac{T}{2}(q_0\omega_x+q_2\omega_z-q_3\omega_y)\\ q_2+\frac{T}{2}(q_0\omega_y-q_1\omega_z+q_3\omega_x)\\ q_3+\frac{T}{2}(q_0\omega_z+q_1\omega_y-q_2\omega_x)\\ b_{gx}\\ b_{gy}\\ b_{gz} \end{bmatrix}$$

where $q_0, q_1, q_2, q_3$ are the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system at the (k-1)-th moment, $b_{gx}, b_{gy}, b_{gz}$ are the accumulated angular velocity drift values of the MARG sensor on the X, Y, Z axes, $\omega_x, \omega_y, \omega_z$ are the true angular velocities about the X, Y, Z axes, and T is the sampling period of the MARG sensor.
In some preferred embodiments, step S423 includes:
step S4231, obtaining the first process covariance matrix $Q_1$ and the first noise covariance matrix $R_1$, and obtaining the first observation function $h_1[\hat{x}_1(k|k-1)]$ based on the first observed quantity $z_1(k)$:

$$Q_1=\mathrm{diag}\left(\sigma_q^2,\ \sigma_q^2,\ \sigma_q^2,\ \sigma_q^2,\ \sigma_b^2,\ \sigma_b^2,\ \sigma_b^2\right)$$

$$R_1=\mathrm{diag}\left(\sigma_a^2,\ \sigma_a^2,\ \sigma_a^2,\ \sigma_m^2,\ \sigma_m^2,\ \sigma_m^2\right)$$

$$h_1[\hat{x}_1(k|k-1)]=\begin{bmatrix} C_n^b(q)\,[0\ \ 0\ \ g]^T\\ C_n^b(q)\,[H_x\ \ H_y\ \ H_z]^T \end{bmatrix}$$

where $a_{Ax}, a_{Ay}, a_{Az}$ is the triaxial acceleration sequence obtained by the MARG sensor, $h_{Mx}, h_{My}, h_{Mz}$ is the geomagnetic field intensity sequence obtained by the MARG sensor, $g$ is the local gravitational acceleration value, $H_x, H_y, H_z$ are the components of the magnetic field intensity along the three coordinate axes of the geographic coordinate system, $C_n^b(q)$ is the rotation matrix from the geographic coordinate system to the MARG sensor carrier coordinate system determined by the quaternion $(q_0, q_1, q_2, q_3)$, $\sigma_q^2$ is the distribution variance of the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system, $\sigma_b^2$ is the distribution variance of the angular velocity drift values, and $\sigma_a^2$, $\sigma_m^2$ are the distribution variances of the triaxial acceleration sequence and the geomagnetic field intensity sequence acquired by the MARG sensor, respectively;
step S4232, based on the parameters acquired in step S4231 and the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment, obtaining the pre-fused first state quantity $x_1(k)$ at the k-th moment:

$$K_{1,k}=P_{1,k|k-1}H_{1,k}^T\left(H_{1,k}P_{1,k|k-1}H_{1,k}^T+R_1\right)^{-1}$$

$$x_1(k)=\hat{x}_1(k|k-1)+K_{1,k}\left(z_1(k)-h_1[\hat{x}_1(k|k-1)]\right)$$

$$P_{1,k}=\left(I_{7\times 7}-K_{1,k}H_{1,k}\right)P_{1,k|k-1}$$

where $P_{1,k|k-1}=F_{1,k}P_{1,k-1}F_{1,k}^T+Q_1$ is the predicted covariance, $F_{1,k}$ and $H_{1,k}$ are the Jacobian matrices of $f_1$ and $h_1$, $P_{1,k-1}$ is the first covariance matrix at the (k-1)-th moment, $P_{1,k}$ is the updated first covariance matrix at the k-th moment obtained from $P_{1,k-1}$ after the k-th pre-fusion operation, $I_{7\times 7}$ is a 7×7 identity matrix, and $x_1(k)$ is the pre-fused first state quantity at the k-th moment;

step S4233, the pre-fused first state quantities over the k moments form the pre-fusion attitude sequence.
In some preferred embodiments, the pre-fusion process of the modified angular displacement data sequence includes:
step S431, acquiring the length of the connecting rod between adjacent joints of the robot, the Z-axis offset distance and the angle between the Z axes of adjacent joints (the twist angle) according to a pre-designed mechanism model of the robot; the Z axis is the joint rotation axis or the joint moving direction; the angular displacement data sequence is the sequence of rotation angles about the Z axis of the coordinate system;

step S432, based on the data obtained in step S431 and the pose transformation matrix $^{n-1}T_n$ of the nth joint of the robot relative to the (n-1)th joint, obtaining a second pre-fusion pose sequence.
In some preferred embodiments, step S432 includes:
step S4321, calculating the pose transformation matrix $^{n-1}T_n$ of the nth joint of the robot relative to the (n-1)th joint based on the data obtained in step S431:

$$^{n-1}T_n=\begin{bmatrix} \cos\theta_n & -\sin\theta_n\cos\alpha_n & \sin\theta_n\sin\alpha_n & a_n\cos\theta_n\\ \sin\theta_n & \cos\theta_n\cos\alpha_n & -\cos\theta_n\sin\alpha_n & a_n\sin\theta_n\\ 0 & \sin\alpha_n & \cos\alpha_n & d_n\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where $^{n-1}T_n$ is the pose transformation matrix of the nth joint of the robot relative to the (n-1)th joint, $a_n$ is the length of the connecting rod between the nth joint and the (n-1)th joint of the robot, and $d_n$, $\alpha_n$ and $\theta_n$ are the Z-axis offset distance, the twist angle and the rotation angle about the Z axis of the coordinate system between the nth joint and the (n-1)th joint of the robot, respectively;

step S4322, combining the data obtained in step S431 and the pose transformation matrices obtained in step S4321 to obtain the second pre-fusion pose sequence:

$$T_{end\_effector}={}^{0}T_1(\theta_1,d_1,a_1,\alpha_1)\,{}^{1}T_2(\theta_2,d_2,a_2,\alpha_2)\cdots{}^{n-1}T_n(\theta_n,d_n,a_n,\alpha_n)$$

where $T_{end\_effector}$ represents the second pre-fusion pose sequence.
In some preferred embodiments, step S50 includes:
step S51, acquiring a second state quantity and a second control quantity at the moment k-1; the second state quantity comprises the position of the robot end effector and the attitude Euler angle of the robot end effector; the second control quantity includes a velocity, an acceleration, and an angular velocity of the robot end effector;
step S52, based on the second state quantity and the second control quantity at the (k-1)-th moment, obtaining the predicted value $\hat{x}_2(k|k-1)$ of the second state quantity at the k-th moment according to a second state transition function $f_2[x_2(k-1),u_2(k-1)]$;

step S53, acquiring the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence as the second observed quantity $z_2(k)$, carrying out data synchronization on the second observed quantity, and judging whether synchronous data can be formed; if synchronous data can be formed, the state update matrix is the absolute state update matrix, and if not, the state update matrix is the relative state update matrix;

step S54, based on the extended Kalman filter algorithm, obtaining the fused second state quantity $x_2(k)$ at the k-th moment through the second observed quantity, the state update matrix, the second noise covariance matrix and the predicted value $\hat{x}_2(k|k-1)$ of the second state quantity at the k-th moment;

step S55, taking the variation between the fused second state quantity $x_2(k)$ at the k-th moment and the predicted second state quantity $\hat{x}_2(k|k-1)$ at the k-th moment as the input fuzzy variables of a two-dimensional Mamdani fuzzy method, taking the second noise covariance matrix as the output fuzzy variable, and adaptively adjusting the second noise covariance matrix at the (k+1)-th moment through fuzzy reasoning;

step S56, sequentially outputting $p_x$, $p_y$, $p_z$, $\theta$, $\gamma$ and $\psi$ of the fused second state quantity at the k-th moment as the six-degree-of-freedom pose sequence of the robot in the environment, where $p_x$, $p_y$, $p_z$ is the three-degree-of-freedom spatial coordinate sequence of the robot in the environment.
In some preferred embodiments, the second state transition function is:
$$f_2[x_2(k-1),u_2(k-1)]=\begin{bmatrix} p_x'+\dot{p}_x'T+\frac{1}{2}\ddot{p}_x'T^2\\ p_y'+\dot{p}_y'T+\frac{1}{2}\ddot{p}_y'T^2\\ p_z'+\dot{p}_z'T+\frac{1}{2}\ddot{p}_z'T^2\\ \theta'+\omega_x'T\\ \gamma'+\omega_y'T\\ \psi'+\omega_z'T \end{bmatrix}$$

where $p_x, p_y, p_z, \theta, \gamma, \psi$ are the second state quantities at the k-th moment, $\dot{p}_x, \dot{p}_y, \dot{p}_z, \ddot{p}_x, \ddot{p}_y, \ddot{p}_z, \omega_x, \omega_y, \omega_z$ are the second control quantities at the k-th moment, $p_x', p_y', p_z', \theta', \gamma', \psi'$ are the second state quantities at the (k-1)-th moment, $\dot{p}_x', \dot{p}_y', \dot{p}_z', \ddot{p}_x', \ddot{p}_y', \ddot{p}_z', \omega_x', \omega_y', \omega_z'$ are the second control quantities at the (k-1)-th moment, and T is the sampling period of the sensor.
In some preferred embodiments, the absolute state update matrix $H_1$ and the relative state update matrix $H_2$ are constructed from the number of synchronized sensors and from the state and control quantities that each synchronized sensor can observe, where N is the total number of synchronized sensors, with N greater than 1 and less than or equal to $N_{max}$, $N_{max}$ is the maximum number of sensors, $p_i$ is the total number of system state quantities that the ith sensor can observe, and $q_i$ is the total number of system control quantities that the ith sensor can observe.
In some preferred embodiments, in step S54 the fused second state quantity $x_2(k)$ at the k-th moment is obtained through the extended Kalman filter update, where $P_{2,k-1}$ is the second covariance matrix at the (k-1)-th moment, $P_{2,k}$ is the updated second covariance matrix at the k-th moment obtained from $P_{2,k-1}$ after the k-th fusion operation, $I_{12\times 12}$ and $I_{21\times 21}$ are 12×12 and 21×21 identity matrices respectively, q is a set process covariance coefficient, and $x_2(k)$ is the fused second state quantity at the k-th moment.
On the other hand, the invention provides a system for fusing sensing and space positioning of a plurality of sensors of a robot, which comprises a data acquisition module, a calibration and calibration module, a correction module, an enhancement and repair module, a pre-fusion module and a fusion positioning module;
the data acquisition module is configured to acquire a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
the calibration and calibration module is configured to calibrate and calibrate the structured light depth camera, the MARG sensor and the joint encoder;
the correction module is configured to correct the robot multi-sensor data sequence based on the calibration and calibration results;
the enhancement and repair module is configured to enhance and repair the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
the pre-fusion module is configured to perform pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence; the pre-fusion processing of the corrected motion state data sequence is carried out to obtain a pre-fusion attitude sequence; the pre-fusion processing of the corrected angular displacement data sequence is carried out to obtain a second pre-fusion pose sequence;
the fusion positioning module is configured to perform fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, so as to obtain a pose sequence and a spatial coordinate sequence of the robot in the environment.
The invention has the beneficial effects that:
(1) According to the method for robot multi-sensor fusion sensing and spatial positioning of the invention, a plurality of different sensors are arranged on the robot to acquire multi-sensor data, so that the characteristics of each sensor are fully exploited; a corresponding multi-sensor data fusion sensing and spatial positioning method is provided, which further improves the accuracy of the pose state of the robot acquired in an unfamiliar, complex environment.
(2) According to the method for robot multi-sensor fusion sensing and spatial positioning of the invention, the color image and the depth image acquired by the visual sensor among the multiple sensors are effectively enhanced and repaired at the same time, so that clearer and more recognizable depth camera images are obtained in an unfamiliar, complex environment, which reduces the difficulty of acquiring the subsequent pose state of the robot and improves accuracy and efficiency.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a method for integrating sensing and spatial localization by multiple sensors of a robot according to the present invention;
FIG. 2 is a flow chart of a pre-fusion of visual data sequences for one embodiment of the method for robotic multi-sensor fusion sensing and spatial localization of the present invention;
FIG. 3 is a flow chart of a pre-fusion of motion state data sequences for one embodiment of a method for robotic multi-sensor fusion sensing and spatial localization of the present invention;
FIG. 4 is a pre-fusion flow chart of angular displacement data sequences of one embodiment of the inventive method for fusing sensing and spatial localization by multiple sensors of a robot;
FIG. 5 is a flowchart of a fusion positioning method for a multi-sensor fusion sensing and spatial positioning of a robot according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of fusion state update maintaining average sampling rate according to an embodiment of the method for fusing sensing and spatial localization by multiple sensors of a robot;
fig. 7 is a schematic diagram of a system for integrating sensing and spatial localization of multiple sensors of a robot according to an embodiment of the method for integrating sensing and spatial localization of multiple sensors of a robot of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a method for robot multi-sensor fusion sensing and spatial positioning, which combines the measurement principles of the sensors with the motion process of the robot to complete multi-sensor fusion and spatial positioning, thereby improving the stability and reliability of the environment sensing and spatial positioning processes. The multiple sensors are integrated in the sensing platform at the end of the robot, adverse environmental interference factors are overcome, and the pose and the spatial coordinates of the robot in the environment are acquired quickly and accurately.
The invention discloses a method for integrating perception and space positioning of multiple sensors of a robot, which comprises the following steps:
step S10, acquiring a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
step S20, calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder, and correcting the robot multi-sensor data sequence based on the calibration and calibration result;
step S30, enhancing and repairing the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
step S40, performing pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence; performing pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence; performing pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence;
and step S50, carrying out fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, and obtaining the pose sequence and the spatial coordinate sequence of the robot in the environment.
In order to more clearly describe the method for the multi-sensor fusion sensing and spatial localization of the robot of the present invention, the following describes the steps in the embodiment of the present invention in detail with reference to fig. 1.
The method for the robot multi-sensor fusion sensing and space positioning in the first embodiment of the invention comprises the steps S10-S50, and the steps are described in detail as follows:
step S10, acquiring a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder.
The visual data sequence collected by the structured light depth camera comprises a color image sequence and a depth image sequence, the motion state data sequence collected by the MARG sensor comprises a triaxial acceleration sequence, an angular velocity sequence and a geomagnetic field intensity sequence, and the angular displacement data sequence collected by the joint encoder comprises an angular displacement data sequence of a robot joint.
And step S20, calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder, and correcting the robot multi-sensor data sequence based on the calibration and calibration results.
Step S21, the calibration and calibration of the structured light depth camera includes four links of color camera calibration, infrared camera calibration, depth map drift correction and color map and depth map registration:
the structured light depth camera is the most important sensor in the invention, and the acquired data can be better corrected into real data which accords with the current equipment and environmental conditions through good calibration. A typical structured light depth camera consists of a color camera, an infrared camera and an infrared emitter. The calibration of the structured light depth camera is to adjust the transformation relation between the coordinate system and the coordinate system by a certain mathematical method, so that the data of the pixel plane can truly reflect the situation of the physical world.
The color camera and the infrared camera are calibrated by the same method, which specifically comprises the following steps:
step 1, shooting a 6 multiplied by 9 standard calibration chessboard diagram with the side length of a cell being 25mm for multiple times according to different angles by using a color camera and an infrared camera respectively. When the infrared camera is used, the infrared emitter is turned off to avoid speckle of the infrared emitter from influencing calibration.
Step 2, according to formula (1), match the world coordinates $P_{world}$ and the pixel coordinates $P_{pixel}$ of the checkerboard corners photographed at the different angles, and calculate the camera intrinsic matrix K, the rotation matrix R, the translation matrix t and the distortion matrix D to complete the calibration process.

$$P_{pixel}=h(K,R,t,D,P_{world}) \quad (1)$$
Wherein h (-) represents the camera calibration algorithm. In one embodiment of the present invention, a zhang scaling method is adopted, and in other embodiments, other camera scaling algorithms may be selected, which is not limited in the present invention.
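As an illustration of this calibration link, the following is a minimal sketch using OpenCV's implementation of Zhang's method; the board geometry, the corner-refinement settings and the image folder are assumptions of the sketch, not values fixed by the embodiment.

```python
# Hedged sketch of the intrinsic calibration step (Zhang's method via OpenCV).
# Board geometry, folder name and refinement settings are illustrative assumptions.
import glob
import cv2
import numpy as np

PATTERN = (6, 9)          # inner-corner grid of the assumed 6x9 checkerboard
CELL_MM = 25.0            # assumed cell side length in millimetres

# World coordinates P_world of the checkerboard corners (Z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * CELL_MM

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):          # assumed image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)                           # P_world
    img_points.append(corners)                        # P_pixel

if image_size is not None and obj_points:
    # K: intrinsic matrix, D: distortion coefficients, R/t: per-view extrinsics.
    rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS:", rms)
```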
The depth map drift correction process specifically includes:
step 1, selecting a flat plate parallel to a camera plane, and acquiring a depth map of a corresponding position according to a preset scale.
Step 2, perform polynomial fitting on the flat plate in each acquired depth map to obtain each group of measured depths, select the group with the minimum RMSE to define the final calibration parameters, and obtain the spatial term $p_{10}u+p_{01}v+p_{11}uv+p_{20}u^2+p_{02}v^2$.

Step 3, continue fitting the polynomial in the depth measurement to obtain the calibration parameters $a_0, a_1, a_2, a_3$; the final depth map calibration result is shown in formula (2):

$$d_\delta(u,v,d_k)=a_0+a_1 d_k+a_2 d_k^2+a_3 d_k^3+p_{10}u+p_{01}v+p_{11}uv+p_{20}u^2+p_{02}v^2 \quad (2)$$

where $d_\delta(u,v,d_k)$ is the drift correction, $d_k$ is the uncorrected raw depth measurement, and u and v are the pixel coordinates in the depth map.
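A minimal sketch of evaluating formula (2) over a raw depth frame is given below, assuming the polynomial coefficients a0-a3 and p10-p02 have already been obtained from the flat-plate fitting; the coefficient values shown are placeholders.

```python
# Hedged sketch: evaluate the depth drift correction of formula (2) pixel-wise.
# The coefficient values below are placeholders for the fitted calibration parameters,
# and whether the result replaces the raw depth or is subtracted from it follows the
# calibration convention used when the coefficients were fitted.
import numpy as np

a = (0.0, 1.0, 0.0, 0.0)                 # a0..a3  (placeholder fit results)
p10, p01, p11, p20, p02 = (0.0,) * 5     # spatial drift terms (placeholders)

def drift_term(depth_raw: np.ndarray) -> np.ndarray:
    """Return d_delta(u, v, d_k) of formula (2) for every pixel."""
    h, w = depth_raw.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    d = depth_raw.astype(np.float64)
    return (a[0] + a[1] * d + a[2] * d**2 + a[3] * d**3
            + p10 * u + p01 * v + p11 * u * v + p20 * u**2 + p02 * v**2)

corrected = drift_term(np.full((480, 640), 1500.0))   # e.g. a 1.5 m plate in millimetres
```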
The registration process of the color image and the depth image specifically comprises the following steps:
step 1, keeping the position and the pose of the camera unchanged, and shooting a 6 multiplied by 9 standard calibration checkerboard graph with the same cell side length of 25mm by using a color camera and an infrared camera respectively.
Step 2, according to formula (3), calculate the registration matrix $T_{CD}$ that transforms a point $P_{ir\_camera}$ in the infrared camera coordinate system to the corresponding point $P_{rgb\_camera}$ in the color camera coordinate system, completing the calibration process.

$$P_{rgb\_camera}=T_{CD}P_{ir\_camera} \quad (3)$$
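The registration result can then be applied to depth-frame points as in the following sketch; representing $T_{CD}$ as a 4×4 homogeneous transform is an assumption of the sketch.

```python
# Hedged sketch: apply the registration matrix T_CD of formula (3) to map points from the
# infrared (depth) camera frame into the color camera frame.  A 4x4 homogeneous form and
# the identity placeholder value are assumptions of this sketch.
import numpy as np

T_CD = np.eye(4)                          # placeholder registration result

def ir_to_rgb(points_ir: np.ndarray) -> np.ndarray:
    """points_ir: (N, 3) points in the IR camera frame -> (N, 3) in the color frame."""
    homo = np.hstack([points_ir, np.ones((points_ir.shape[0], 1))])
    return (T_CD @ homo.T).T[:, :3]

print(ir_to_rgb(np.array([[0.1, 0.0, 1.2]])))
```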
Step S22, the calibration and calibration of the MARG sensor comprises two links of acceleration zero drift calibration and magnetic field ellipsoid fitting calibration:
the MARG sensor is composed of an accelerometer, a gyroscope and a magneto-resistance meter, wherein the accelerometer may not only have vertical downward gravity acceleration when being static due to manufacturing deviation, and the magneto-resistance meter does not only have geomagnetic field strength pointing to a geomagnetic pole when having no external magnetic field, so that zero drift errors of the accelerometer and the magneto-resistance meter need to be calibrated.
The accelerometer calibration process specifically comprises:
step 1, standing a sensor on a horizontal plane;
and 2, comparing the actual triaxial acceleration value with the local gravity acceleration value, calculating a compensation value, and finishing the calibration process.
The magnetometer calibration process specifically comprises the following steps:
step 1, a sensor is arranged in an environment far away from magnetic field interference;
step 2, firstly, horizontally placing the sensor, then rotating the sensor for more than 360 degrees along a plumb line, then vertically placing the sensor, firstly rotating the sensor for more than 90 degrees around an X axis, and then rotating the sensor for more than 360 degrees around the plumb line, and obtaining the actual magnetic field intensity in each direction;
and 3, completing the calibration process by a spherical fitting method.
Step S23, the calibration and calibration of the joint encoder comprises two links of registration of an electrical origin of the encoder and a mechanical origin of the joint rotating mechanism:
the joint encoder is positioned at the tail end of an actuating motor of each rotating pair and each moving pair in the mechanical structure of the robot, and due to actual installation deviation, the counting zero point of the encoder is not completely the same as the movement origin in the mechanical structure, so that the calibration is needed.
The joint encoder calibration process specifically includes:
step 1, moving a joint to a motion origin;
step 2, taking the current joint encoder count value as a compensation value;
and 3, repeating the step 1 and the step 2 of the joint encoder calibration process for all joints to finish calibration.
And step S30, enhancing and repairing the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method.
The closed metal cavity environment makes the brightness and contrast of the images collected by the camera low and causes serious loss of image features, which greatly affects the measurement precision of a visual odometer based on image features and further affects the positioning of the robot in the cavity environment. The invention mitigates this problem by enhancing the visible light image; in the specific implementation this is divided into two parts, active brightness equalization and adaptive image feature enhancement. The former extracts the illumination intensity of the current environment as the feedback input of a distributed light supplement device and obtains the control output of the light source through feedback adjustment to complete active brightness equalization; the latter combines an adaptive parameter adjustment algorithm with a histogram equalization algorithm to realize adaptive image enhancement of the color map. The specific process comprises the following steps:
step S311, obtaining an illuminance component distribution diagram of the light receiving surface under the combined action of each point light source of the color image in the registered color image and depth image pair through a multi-scale gaussian filter.
In one embodiment of the invention, the multi-scale Gaussian filter comprises three scales S, M and L; Gaussian kernel standard deviation parameters are set for the three scales to extract the illuminance component of the scene, and the final Gaussian filter function combines the Gaussian filter functions $G_S(x,y)$, $G_M(x,y)$ and $G_L(x,y)$ of the different scales, where the S scale is 10, the M scale is 50 and the L scale is 200. In other embodiments, Gaussian filter combinations with other scales can be selected as needed; the invention does not detail them here.
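A minimal sketch of extracting the illuminance component with the three Gaussian scales of this embodiment is given below; averaging the three filtered layers with equal weights is an assumption of the sketch.

```python
# Hedged sketch: estimate the illuminance component with multi-scale Gaussian filtering.
# Equal weighting of the S/M/L layers is an assumption of this sketch.
import cv2
import numpy as np

SIGMAS = (10, 50, 200)            # S, M, L scales from this embodiment

def illumination_component(gray: np.ndarray) -> np.ndarray:
    gray = gray.astype(np.float32)
    layers = [cv2.GaussianBlur(gray, (0, 0), sigmaX=s) for s in SIGMAS]
    return np.mean(layers, axis=0)      # combined G_S, G_M, G_L estimate

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # assumed input frame
if img is not None:
    illum = illumination_component(img)
```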
Step S312, performing area sampling on the illuminance component distribution map to obtain an illuminance component under the independent action of each single point light source.
Step S313, based on the illuminance component under the independent action of each single point light source, perform active brightness equalization of the color map through feedback adjustment, as shown in formulas (4) and (5):

$$I'_{out}(i,k)=I_{in}(i,k)+I_{out}(i,k) \quad (4)$$

$$I_{out}(i,k)=(1-\alpha)I_{out}(i,k-1)+\alpha\left[255-I_{in}(i,k)\right] \quad (5)$$

where $I'_{out}(i,k)$ is the equivalent illuminance of the ith point light source at time k after active brightness equalization, $I_{in}(i,k)$ is the equivalent illuminance of the ith point light source at time k before active brightness equalization, $I_{out}(i,k)$ and $I_{out}(i,k-1)$ are the compensating illuminance of the ith point light source at times k and k-1 respectively, and α is a preset control coefficient.

The larger the control coefficient, the higher the light supplement sensitivity; in one embodiment of the invention the control coefficient α is set between 0.8 and 0.95.
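The feedback of formulas (4) and (5) can be sketched as follows; how the compensating illuminance is converted into an actual light-source command is hardware specific and is not modeled here.

```python
# Hedged sketch of the active brightness equalization feedback of formulas (4) and (5).
# I_in is the per-source equivalent illuminance extracted from the image; turning the
# compensating illuminance I_out into a lamp command is hardware specific.
ALPHA = 0.9          # preset control coefficient, inside the 0.8-0.95 range given above

class BrightnessFeedback:
    def __init__(self, n_sources: int):
        self.i_out = [0.0] * n_sources       # compensating illuminance per source

    def step(self, i_in: list[float]) -> list[float]:
        """One feedback iteration; returns I'_out(i, k) for every point light source."""
        for i, val in enumerate(i_in):
            # formula (5): first-order feedback toward the 255 brightness target
            self.i_out[i] = (1 - ALPHA) * self.i_out[i] + ALPHA * (255 - val)
        # formula (4): equivalent illuminance after active equalization
        return [i_in[i] + self.i_out[i] for i in range(len(i_in))]

fb = BrightnessFeedback(n_sources=4)
print(fb.step([90.0, 120.0, 60.0, 200.0]))
```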
Step S314, calculating the mean value and standard deviation of each pixel value of the color image after the active brightness equalization.
And step S315, constructing a fuzzy inference system, taking the mean value and the standard deviation as system input variables, and obtaining an optimal clipping threshold value in the contrast-limiting self-adaptive histogram equalization algorithm and an optimal gamma correction coefficient in the gamma correction algorithm through fuzzy inference by combining a preset membership function and a fuzzy rule.
In one embodiment of the invention, the input variables of the fuzzy inference system are the mean and the standard deviation σ of the image; the output variables are the clipping threshold $c_L$ and the gamma correction coefficient β, with ranges $c_L \in [2, 20]$ and $\beta \in [0.3, 0.9]$; the preset membership functions are triangular membership functions, and the preset fuzzy rules use a two-input, two-output 3×4 fuzzy rule table for reasoning.
Step S316, based on the optimal gamma correction coefficient, perform adaptive brightness equalization on the brightness-equalized color map using the gamma correction algorithm, as shown in formulas (6) and (7), where $F_o(x,y)$ is the illuminance component of the pixel at (x, y) after adaptive brightness equalization, $F_i(x,y)$ is the illuminance component of the pixel at (x, y) before adaptive brightness equalization, F(x, y) is the brightness value of the pixel at (x, y), M is the mean of the illuminance components of the current image, and β is the optimal gamma correction coefficient.
The larger the value of the gamma correction coefficient is, the larger the correction intensity is, and it is generally appropriate to set the correction intensity to be between 0.4 and 0.5, in an embodiment of the present invention, an optimal parameter is automatically determined by a fuzzy inference system, in other embodiments, an appropriate parameter may also be set according to needs, and the present invention is not described in detail herein.
And based on the optimal cutting threshold, performing contrast-limiting adaptive histogram equalization on the image subjected to the adaptive brightness equalization, and performing bilateral filtering to obtain an enhanced color image.
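The enhancement chain of steps S315-S316 and the subsequent equalization and filtering can be sketched as follows; a single global gamma is a simplification of the per-pixel formulas (6) and (7), and the tile size and bilateral filter parameters are assumptions of the sketch.

```python
# Hedged sketch of the adaptive enhancement chain: gamma correction, contrast-limited
# adaptive histogram equalization (CLAHE) with the fuzzy-selected clip threshold, then
# bilateral filtering.  The global gamma and the filter parameters are assumptions.
import cv2
import numpy as np

def enhance(bgr: np.ndarray, clip_limit: float, gamma: float) -> np.ndarray:
    # gamma correction applied to the brightness (V) channel through a lookup table
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 2] = cv2.LUT(hsv[..., 2], lut)
    # contrast-limited adaptive histogram equalization with the fuzzy-selected threshold
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    hsv[..., 2] = clahe.apply(hsv[..., 2])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    # bilateral filtering to suppress the noise amplified by equalization
    return cv2.bilateralFilter(out, d=5, sigmaColor=50, sigmaSpace=50)

frame = cv2.imread("frame.png")                          # assumed input color frame
if frame is not None:
    enhanced = enhance(frame, clip_limit=8.0, gamma=0.45)  # values inside the stated ranges
```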
Because an active infrared light source is used for measurement, the depth map of the structured light depth camera is only slightly affected by illumination, but it is strongly affected by dark, smooth and transparent objects and by the parallax of a narrow environment. The repair therefore proceeds in two stages: first, similar regions in the depth map are identified with the help of the texture features of the color map, and the valid measurement values that need no repair and the invalid measurement values waiting for repair are obtained within them; then a local point cloud model is fitted to the valid measurement values, and the depth values of the invalid measurement points are recalculated with the help of the camera projection model to complete the repair. The repair process specifically comprises the following steps:
in step S321, the enhanced color map is down-sampled to a set resolution, which is generally 256 × 192 or 320 × 240.
Step S322, smoothing the similar texture area in the image after down sampling into the same color through the MeanShift algorithm.
And step S323, extracting a corresponding color connected domain in the smoothed image through a FloodFill algorithm to form a texture area mask.
Step S324, extracting ROI areas in the depth maps in the registered color map and depth map pair by enhancing the texture features of the color map, and obtaining a similar texture area set of the depth map.
Step S325, for each similar texture region in the similar texture region set of the depth map, a range of depth measurement values in the region is obtained, a measurement value larger than the maximum range of the depth camera is divided into invalid measurement points, and a measurement value belonging to a normal range is divided into valid measurement points.
Step S326, calculating the ratio of the number of the effective measuring points to the number of the ineffective measuring points, and if the ratio is smaller than a set threshold, terminating the repair; otherwise, the effective measurement points in the similar texture region of the depth map are fitted through the RANSAC algorithm to obtain an effective measurement point local point cloud fitting model.
Carrying out first repair effectiveness evaluation through the ratio of the number of the effective measuring points to the number of the ineffective measuring points, wherein in one embodiment of the invention, when the ratio is more than 1, the repair possibility is considered to be high; and when the ratio is less than 0.2, the repair is not possible, and the repair process is quitted.
Step S327, take the points where the error between the actual value of the valid measurement point and the model estimate is not greater than a set threshold as inliers, and the points where the error is greater than the set threshold as outliers; if the ratio of inliers to outliers is less than a set threshold, terminate the repair; otherwise, recalculate the depth values of the invalid measurement points in the similar texture region according to the camera projection model and the local point cloud fitting model, as shown in formulas (8), (9) and (10):
$$x=\frac{(u-c_x)\,\hat{d}}{f_x} \quad (8)$$

$$y=\frac{(v-c_y)\,\hat{d}}{f_y} \quad (9)$$

$$\hat{d}=F(x,y) \quad (10)$$

where $(x, y, \hat{d})$ is the spatial point coordinate in the environment, $\hat{d}$ is the recalculated depth measurement, (u, v) are the pixel plane coordinates in the depth map, $c_x$, $c_y$ are the offsets of the camera optical center in the two perpendicular directions, $f_x$, $f_y$ are the focal lengths of the camera in the two perpendicular directions, and F(x, y) is the local point cloud fitting model.
Performing second repair effectiveness evaluation on the ratio of the number of the inner points to the number of the outer points extracted by the RANSAC algorithm, wherein in one embodiment of the invention, when the ratio is greater than 2, the repair effect is considered to be good; and when the ratio is less than 0.5, the repair is not possible, and the repair process is quitted.
And step S328, repeating the step S325 to the step S328 until each area of the similar texture area set of the depth map completes the repair of the invalid measurement point, and obtaining the enhanced depth map.
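A condensed sketch of the repair of steps S325-S327 for one similar-texture region is given below; a plane model over the pixel coordinates is an assumed stand-in for the local point cloud fitting model, and the thresholds follow the ratios mentioned in the embodiment only loosely.

```python
# Hedged sketch of the repair step: fit a local model to the valid depth measurements of
# one similar-texture region with RANSAC and re-estimate the invalid pixels from it.
# A plane model F(u, v) = a*u + b*v + c is an assumption of this sketch.
import numpy as np

def ransac_plane(uvd: np.ndarray, iters: int = 200, tol: float = 10.0):
    """uvd: (N, 3) rows of (u, v, depth) for the valid measurement points."""
    best_inliers, best_coef = None, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = uvd[rng.choice(len(uvd), 3, replace=False)]
        A = np.column_stack([sample[:, 0], sample[:, 1], np.ones(3)])
        try:
            coef = np.linalg.solve(A, sample[:, 2])          # [a, b, c]
        except np.linalg.LinAlgError:
            continue
        pred = uvd[:, 0] * coef[0] + uvd[:, 1] * coef[1] + coef[2]
        inliers = np.abs(pred - uvd[:, 2]) <= tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_coef = inliers, coef
    return best_coef, best_inliers

def repair_region(depth: np.ndarray, region_mask: np.ndarray, max_range: float):
    valid = region_mask & (depth > 0) & (depth <= max_range)
    invalid = region_mask & ~valid
    if valid.sum() < 3 or valid.sum() < 0.2 * max(invalid.sum(), 1):
        return depth                                         # repair judged not worthwhile
    vu, uu = np.nonzero(valid)
    coef, _ = ransac_plane(np.column_stack([uu, vu, depth[valid]]))
    vi, ui = np.nonzero(invalid)
    depth = depth.copy()
    depth[vi, ui] = ui * coef[0] + vi * coef[1] + coef[2]    # recomputed depth values
    return depth
```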
In one embodiment of the present invention, the above process is used to enhance and repair the visual data, and in other embodiments, the enhancement and repair of the visual data can be realized by other methods, for example, documents: "Liujunyi. color image guided depth image enhancement [ D ]. Zhejiang university, 2014" and "clockton. time of flight method three-dimensional camera depth image enhancement technology research [ D ]. 2017", etc., the invention is not detailed herein.
Step S40, performing pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence; performing pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence; and performing pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence.
The visual data comprises a color image and a depth image, and the state estimation of the camera pose can be realized by means of the characteristic point information of the color image and the depth value information of the depth image, so that a corresponding visual odometer can be formed. Therefore, the present invention partially improves the conventional visual odometer according to the characteristics of the complex environment, as shown in fig. 2, which is a pre-fusion flow chart of the visual data sequence of an embodiment of the method for fusing sensing and spatial localization by multiple sensors of the robot of the present invention, and the pre-fusion process of the enhanced and repaired visual data sequence includes:
step S411, extracting ORB two-dimensional characteristic points of a frame of color image in the enhanced and repaired visual data sequence, and extracting depth measurement values corresponding to the ORB two-dimensional characteristic points in a depth image registered with the current frame color image;
step S412, if the depth measurement value can be extracted, the ORB two-dimensional feature point is taken as an ORB three-dimensional feature point; otherwise, it is kept as an ORB two-dimensional feature point;
step S413, matching and tracking all feature points of the current frame with all feature points of a previous frame of the current frame to obtain feature point pairs;
step S414, if the feature points in a feature point pair are both ORB three-dimensional feature points, obtaining the pose transformation matrix between the two frames through the ICP (Iterative Closest Point) algorithm; otherwise, obtaining the pose transformation matrix between the two frames through the PnP (Perspective-n-Point) algorithm;
step S415, judging whether the tracking of the ORB three-dimensional feature points is lost or not, and if not, directly adding a camera pose transformation matrix obtained by tracking as a new key frame into the existing key frame sequence; if the key frame is lost, calling a pre-fusion measurement value of a joint encoder and a MARG sensor corresponding to the current frame as a camera pose transformation matrix of an initial key frame in the new key frame sequence;
step S416, performing pose graph optimization on the obtained key frame sequence to obtain a camera pose sequence to be optimized, and performing closed-loop correction on the camera pose sequence to be optimized to obtain a first pre-fusion pose sequence.
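A condensed sketch of one tracking step (steps S411-S414) is given below; frame handling, keyframe management and loop closure are omitted, and the closed-form rigid alignment stands in for the full ICP iteration.

```python
# Hedged sketch of one frame-to-frame tracking step: ORB features are matched between
# consecutive frames; pairs with depth on both sides are aligned rigidly in 3D (the
# closed-form step of ICP), otherwise PnP is used.  Intrinsics handling is simplified.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def backproject(pt, depth, K):
    u, v = int(pt[0]), int(pt[1])
    d = float(depth[v, u])
    if d <= 0:
        return None                                   # no valid depth -> 2D feature only
    return np.array([(u - K[0, 2]) * d / K[0, 0], (v - K[1, 2]) * d / K[1, 1], d])

def frame_to_frame(rgb0, depth0, rgb1, depth1, K):
    kp0, des0 = orb.detectAndCompute(rgb0, None)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    matches = matcher.match(des0, des1)
    p3d0, p3d1, p3d_obj, p2d_img = [], [], [], []
    for m in matches:
        a = backproject(kp0[m.queryIdx].pt, depth0, K)
        b = backproject(kp1[m.trainIdx].pt, depth1, K)
        if a is not None and b is not None:
            p3d0.append(a); p3d1.append(b)            # 3D-3D feature pair
        elif a is not None:
            p3d_obj.append(a); p2d_img.append(kp1[m.trainIdx].pt)   # 3D-2D pair
    if len(p3d0) >= 3 and len(p3d0) >= len(p3d_obj):
        A, B = np.array(p3d0), np.array(p3d1)         # rigid alignment (ICP closed form)
        ca, cb = A.mean(0), B.mean(0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        return R, t
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(p3d_obj, np.float32), np.array(p2d_img, np.float32), K, None)
    return cv2.Rodrigues(rvec)[0], tvec.ravel()
```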
Due to the influence of the manufacturing accuracy of the sensor and the interference of the operating environment, the noise of the original measurement data of the MARG sensor is generally large, and if a fusion positioning system is directly introduced, the overall positioning accuracy of the system can be reduced. Therefore, the invention performs pre-fusion processing on the raw data of the MARG sensor. As shown in fig. 3, which is a flowchart of a pre-fusion process of a motion state data sequence according to an embodiment of the method for fusing sensing and spatial localization by multiple sensors of a robot of the present invention, the pre-fusion process of the modified motion state data sequence includes:
step S421, acquiring a first state quantity and a first control quantity at the moment of k-1; the first state quantity comprises a quaternion value of a rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system and an angular speed accumulated drift value of the MARG sensor on an X, Y, Z axis; the first control quantity includes X, Y, Z axle angular velocity true values.
The first state quantity at the k-th moment is shown in formula (11):

$$x_1(k)=\left[q_0(k)\ \ q_1(k)\ \ q_2(k)\ \ q_3(k)\ \ b_{gx}(k)\ \ b_{gy}(k)\ \ b_{gz}(k)\right]^T \quad (11)$$

where $q_0, q_1, q_2, q_3$ are the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system, and $b_{gx}, b_{gy}, b_{gz}$ are the accumulated angular velocity drift values of the MARG sensor on the X, Y, Z axes.

The first control quantity at the k-th moment is shown in formula (12):

$$u_1(k)=\left[\omega_x(k)\ \ \omega_y(k)\ \ \omega_z(k)\right]^T \quad (12)$$

where $\omega_x, \omega_y, \omega_z$ are the true angular velocities about the X, Y, Z axes, satisfying $\omega_x(k)=\omega_{Gx}(k)-b_{gx}(k-1)$, $\omega_y(k)=\omega_{Gy}(k)-b_{gy}(k-1)$, $\omega_z(k)=\omega_{Gz}(k)-b_{gz}(k-1)$, where $\omega_{Gx}, \omega_{Gy}, \omega_{Gz}$ is the angular velocity sequence collected by the MARG sensor.
Step S422, based on the first state quantity and the first control quantity at the (k-1)-th moment, obtain the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment according to the first state transition function $f_1[x_1(k-1),u_1(k-1)]$, that is, $\hat{x}_1(k|k-1)=f_1[x_1(k-1),u_1(k-1)]$.

The first state transition function $f_1[x_1(k-1),u_1(k-1)]$ is shown in formula (13):

$$f_1[x_1(k-1),u_1(k-1)]=\begin{bmatrix} q_0-\frac{T}{2}(q_1\omega_x+q_2\omega_y+q_3\omega_z)\\ q_1+\frac{T}{2}(q_0\omega_x+q_2\omega_z-q_3\omega_y)\\ q_2+\frac{T}{2}(q_0\omega_y-q_1\omega_z+q_3\omega_x)\\ q_3+\frac{T}{2}(q_0\omega_z+q_1\omega_y-q_2\omega_x)\\ b_{gx}\\ b_{gy}\\ b_{gz} \end{bmatrix} \quad (13)$$

where the quaternion values, drift values and angular velocities are taken at the (k-1)-th moment as defined in formulas (11) and (12), and T is the sampling period of the MARG sensor.
Step S423, based on the extended Kalman filter algorithm, the pre-fused first state quantity $x_1(k)$ at the k-th moment is obtained through the first observed quantity $z_1(k)$ and the first observation function $h_1[\hat{x}_1(k|k-1)]$ at the k-th moment, the first process covariance matrix $Q_1$, the first noise covariance matrix $R_1$ and the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment; the pre-fused first state quantities over the k moments form the pre-fusion attitude sequence; the first observed quantity $z_1(k)$ at the k-th moment is composed of the triaxial acceleration sequence $[a_{Ax}\ a_{Ay}\ a_{Az}]^T$ and the geomagnetic field intensity sequence $[h_{Mx}\ h_{My}\ h_{Mz}]^T$ acquired by the MARG sensor.
Step S4231, obtain the first process covariance matrix $Q_1$ and the first noise covariance matrix $R_1$, and obtain the first observation function $h_1[\hat{x}_1(k|k-1)]$ based on the first observed quantity $z_1(k)$, as shown in formulas (14), (15) and (16), respectively:

$$Q_1=\mathrm{diag}\left(\sigma_q^2,\ \sigma_q^2,\ \sigma_q^2,\ \sigma_q^2,\ \sigma_b^2,\ \sigma_b^2,\ \sigma_b^2\right) \quad (14)$$

$$R_1=\mathrm{diag}\left(\sigma_a^2,\ \sigma_a^2,\ \sigma_a^2,\ \sigma_m^2,\ \sigma_m^2,\ \sigma_m^2\right) \quad (15)$$

$$h_1[\hat{x}_1(k|k-1)]=\begin{bmatrix} C_n^b(q)\,[0\ \ 0\ \ g]^T\\ C_n^b(q)\,[H_x\ \ H_y\ \ H_z]^T \end{bmatrix} \quad (16)$$

where $g$ is the local gravitational acceleration value, $H_x, H_y, H_z$ are the components of the magnetic field intensity along the three coordinate axes of the geographic coordinate system, $C_n^b(q)$ is the rotation matrix from the geographic coordinate system to the MARG sensor carrier coordinate system determined by the quaternion $(q_0, q_1, q_2, q_3)$, $\sigma_q^2$ is the distribution variance of the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system, $\sigma_b^2$ is the distribution variance of the angular velocity drift values, and $\sigma_a^2$, $\sigma_m^2$ are the distribution variances of the triaxial acceleration sequence and the geomagnetic field intensity sequence acquired by the MARG sensor, respectively.
Step S4232, based on the parameters acquired in step S4231 and the predicted value $\hat{x}_1(k|k-1)$ of the first state quantity at the k-th moment, obtain the pre-fused first state quantity $x_1(k)$ at the k-th moment, as shown in formula (17):

$$K_{1,k}=P_{1,k|k-1}H_{1,k}^T\left(H_{1,k}P_{1,k|k-1}H_{1,k}^T+R_1\right)^{-1}$$

$$x_1(k)=\hat{x}_1(k|k-1)+K_{1,k}\left(z_1(k)-h_1[\hat{x}_1(k|k-1)]\right)$$

$$P_{1,k}=\left(I_{7\times 7}-K_{1,k}H_{1,k}\right)P_{1,k|k-1} \quad (17)$$

where $P_{1,k|k-1}=F_{1,k}P_{1,k-1}F_{1,k}^T+Q_1$ is the predicted covariance, $F_{1,k}$ and $H_{1,k}$ are the Jacobian matrices of $f_1$ and $h_1$, $P_{1,k-1}$ is the first covariance matrix at the (k-1)-th moment, $P_{1,k}$ is the updated first covariance matrix at the k-th moment obtained from $P_{1,k-1}$ after the k-th pre-fusion operation, $I_{7\times 7}$ is a 7×7 identity matrix, and $x_1(k)$ is the pre-fused first state quantity at the k-th moment.
Step S4233, the pre-fused first state quantities over the k moments form the pre-fusion attitude sequence.
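The pre-fusion of steps S421-S423 can be sketched as follows; the quaternion convention, the reference vectors and the use of numerical Jacobians are assumptions made to keep the sketch short, not the exact formulation of the embodiment.

```python
# Hedged sketch of the MARG pre-fusion EKF: the state is the attitude quaternion plus the
# three gyro drift terms, prediction uses first-order quaternion integration, and the
# accelerometer/magnetometer readings act as the observation.  Axis and sign conventions
# (gravity along +Z of the geographic frame) are assumptions of this sketch.
import numpy as np

def f1(x, w, T):
    q, b = x[:4], x[4:]
    wx, wy, wz = w - b                       # bias-corrected angular rate
    omega = 0.5 * np.array([[0, -wx, -wy, -wz],
                            [wx, 0,  wz, -wy],
                            [wy, -wz, 0,  wx],
                            [wz, wy, -wx, 0]])
    q_new = q + T * omega @ q
    return np.concatenate([q_new / np.linalg.norm(q_new), b])

def h1(x, g, H_ref):
    q0, q1, q2, q3 = x[:4]
    # rotation from the geographic frame to the carrier frame
    C = np.array([[q0**2+q1**2-q2**2-q3**2, 2*(q1*q2+q0*q3),         2*(q1*q3-q0*q2)],
                  [2*(q1*q2-q0*q3),         q0**2-q1**2+q2**2-q3**2, 2*(q2*q3+q0*q1)],
                  [2*(q1*q3+q0*q2),         2*(q2*q3-q0*q1),         q0**2-q1**2-q2**2+q3**2]])
    return np.concatenate([C @ np.array([0.0, 0.0, g]), C @ H_ref])

def jac(fun, x, eps=1e-6):
    y0 = fun(x)
    J = np.zeros((len(y0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (fun(x + dx) - y0) / eps
    return J

def marg_prefusion_step(x, P, gyro, acc, mag, T, Q1, R1, g, H_ref):
    x_pred = f1(x, gyro, T)                               # prediction
    F = jac(lambda s: f1(s, gyro, T), x)
    P_pred = F @ P @ F.T + Q1
    z = np.concatenate([acc, mag])                        # first observed quantity z1(k)
    H = jac(lambda s: h1(s, g, H_ref), x_pred)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R1)
    x_new = x_pred + K @ (z - h1(x_pred, g, H_ref))
    x_new[:4] /= np.linalg.norm(x_new[:4])                # keep the quaternion normalized
    P_new = (np.eye(7) - K @ H) @ P_pred
    return x_new, P_new
```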
The multi-degree-of-freedom redundant structure can improve the motion flexibility of the robot in a complex environment, and the multi-degree-of-freedom redundant structure is adopted in the specific implementation of the invention, and the angular displacement of each joint is converted into the pose state of the tail end of the robot through the positive kinematics calculation of the robot. As shown in fig. 4, which is a flow chart of pre-fusion of angular displacement data sequences according to an embodiment of the method for fusing sensing and spatial localization by multiple sensors of a robot of the present invention, the pre-fusion process of the modified angular displacement data sequences includes:
Step S431, taking the joint rotation axis or the joint moving direction as the Z axis, obtain the length of the connecting rod between adjacent joints of the robot, the Z-axis offset distance and the angle between the Z axes of adjacent joints (the twist angle) according to the pre-designed mechanism model of the robot; the corrected angular displacement data sequence is the sequence of rotation angles about the Z axis of the coordinate system.

Step S432, based on the data obtained in step S431 and the pose transformation matrix $^{n-1}T_n$ of the nth joint of the robot relative to the (n-1)th joint, obtain the second pre-fusion pose sequence.
Step S4321, calculate the pose transformation matrix $^{n-1}T_n$ of the nth joint of the robot relative to the (n-1)th joint based on the data obtained in step S431, as shown in formula (18):

$$^{n-1}T_n=\begin{bmatrix} \cos\theta_n & -\sin\theta_n\cos\alpha_n & \sin\theta_n\sin\alpha_n & a_n\cos\theta_n\\ \sin\theta_n & \cos\theta_n\cos\alpha_n & -\cos\theta_n\sin\alpha_n & a_n\sin\theta_n\\ 0 & \sin\alpha_n & \cos\alpha_n & d_n\\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (18)$$

where $^{n-1}T_n$ is the pose transformation matrix of the nth joint of the robot relative to the (n-1)th joint, $a_n$ is the length of the connecting rod between the nth joint and the (n-1)th joint of the robot, and $d_n$, $\alpha_n$ and $\theta_n$ are the Z-axis offset distance, the twist angle and the rotation angle about the Z axis of the coordinate system between the nth joint and the (n-1)th joint of the robot, respectively.

Step S4322, combine the data obtained in step S431 and the pose transformation matrices obtained in step S4321 to obtain the second pre-fusion pose sequence, as shown in formula (19):

$$T_{end\_effector}={}^{0}T_1(\theta_1,d_1,a_1,\alpha_1)\,{}^{1}T_2(\theta_2,d_2,a_2,\alpha_2)\cdots{}^{n-1}T_n(\theta_n,d_n,a_n,\alpha_n) \quad (19)$$

where $T_{end\_effector}$ represents the second pre-fusion pose sequence.
Step S50, fusion positioning of the spatial state of the robot in the environment is carried out based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, and the pose sequence and the spatial coordinate sequence of the robot in the environment are obtained. The fuzzy adaptive EKF algorithm provided by the invention keeps the basic input-prediction-update-output structure and is improved according to the characteristics of the complex environment to raise the precision and stability of the system. Fig. 5 shows the fusion positioning flow chart of one embodiment of the method for robot multi-sensor fusion sensing and spatial positioning of the invention, comprising the following steps:
step S51, acquiring a second state quantity and a second control quantity at the moment k-1; the second state quantity comprises the position of the robot end effector and the attitude Euler angle of the robot end effector; the second control amount includes a velocity, an acceleration, and an angular velocity of the robot end effector.
The second state quantity at the k-th time is as shown in equation (20):
x2(k)=[px(k) py(k) pz(k) θ(k) γ(k) ψ(k)]T (20)
wherein px, py, pz are the position coordinates of the robot end effector and θ, γ, ψ are the attitude Euler angles of the robot end effector.
The second control quantity at the k-th time is represented by formula (21):

u2(k)=[ṗx(k) ṗy(k) ṗz(k) p̈x(k) p̈y(k) p̈z(k) ωx(k) ωy(k) ωz(k)]T (21)

wherein ṗx, ṗy, ṗz are the velocity of the robot end effector, p̈x, p̈y, p̈z are the acceleration of the robot end effector, and ωx, ωy, ωz are the angular velocity of the robot end effector.
Step S52, based on the second state quantity and the second control quantity at the (k-1)-th time, obtaining the predicted value of the second state quantity at the k-th time, denoted x̂2(k), according to the second state transition function f2[x2(k-1),u2(k-1)], i.e. x̂2(k)=f2[x2(k-1),u2(k-1)].
The second state transition function f2[x2(k-1),u2(k-1)] is given by formula (22), reproduced as an image in the original, wherein px, py, pz, θ, γ, ψ are the second state quantities at the k-th time, u2(k) is the second control quantity at the k-th time, px', py', pz', θ', γ', ψ' are the second state quantities at the (k-1)-th time, u2(k-1) is the second control quantity at the (k-1)-th time, and T is the sampling period of the sensor.
Step S53, acquiring the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence as the second observed quantity z2(k), performing data synchronization on the second observed quantity, and determining whether synchronized data can be formed; if so, the state update matrix is the absolute state update matrix, and if not, the state update matrix is the relative state update matrix. Fig. 6 is a schematic diagram of fusion state updating while maintaining the average sampling rate, according to an embodiment of the robot multi-sensor fusion sensing and space positioning method of the present invention.
The absolute state update matrix H1 and the relative state update matrix H2 are given by formulas (23) and (24), respectively, both reproduced as images in the original, wherein N is the total number of synchronized sensors, with N greater than 1 and less than or equal to Nmax, Nmax is the maximum number of sensors, pi is the total number of system state quantities observable by the i-th sensor, and qi is the total number of system control quantities observable by the i-th sensor.
Step S54, based on the extended Kalman filter algorithm, obtaining the fused second state quantity x2(k) at the k-th time from the predicted value of the second observed quantity, the state update matrix, the second noise covariance matrix and the predicted value x̂2(k) of the second state quantity at the k-th time, as shown in formula (25), reproduced as an image in the original, wherein P2,k-1 is the second covariance matrix at the (k-1)-th time, P2,k is the updated second covariance matrix at the k-th time obtained from P2,k-1 after the fusion operation at the k-th time, I12×12 and I21×21 are identity matrices of size 12×12 and 21×21, respectively, q is the set process covariance coefficient, and x2(k) is the second state quantity fused at the k-th time.
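To illustrate step S54, the following Python sketch applies a standard extended Kalman filter measurement update with a state update matrix that is chosen beforehand (absolute form when the observations are synchronized, relative form otherwise). The function name, the toy dimensions, and the stand-in matrix H_abs are assumptions; the exact formula (25), including its I12×12 and I21×21 terms, is given in the original only as an image and may differ from this generic form.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, H, R, q):
    """Generic EKF measurement update: innovation, Kalman gain, fused state,
    and covariance update inflated by the set process covariance coefficient q."""
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_fused = x_pred + K @ (z - H @ x_pred)       # fused second state quantity
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred + q * np.eye(len(x_pred))
    return x_fused, P_new

# Toy usage with a 6-D state and a 6-D synchronized observation.
x_pred = np.zeros(6)
P_pred = np.eye(6) * 0.1
H_abs = np.eye(6)                                 # stand-in absolute state update matrix
z = np.full(6, 0.05)
x_fused, P_new = ekf_update(x_pred, P_pred, z, H_abs, R=np.eye(6) * 0.01, q=1e-4)
```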
Step S55, taking the deviation between the second state quantity x2(k) fused at the k-th time and the predicted value x̂2(k) of the second state quantity at the k-th time, together with the variation between the first noise covariance matrix and the second noise covariance matrix, as the input fuzzy variables of a two-dimensional Mamdani fuzzy method, taking the second noise covariance matrix as the output fuzzy variable, and adaptively adjusting the second noise covariance matrix at the (k+1)-th time by fuzzy inference;
Step S56, sequentially outputting the components px, py, pz, θ, γ, ψ of the second state quantity fused at the k-th time as the six-degree-of-freedom pose sequence of the robot in the environment, wherein px, py, pz form the three-degree-of-freedom spatial coordinate sequence of the robot in the environment.
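The fuzzy adaptation of step S55 can be pictured with the following minimal two-input Mamdani sketch in Python. The membership functions, the rule table, and the use of a multiplicative scaling factor for the next-step noise covariance are illustrative assumptions and do not reproduce the patent's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_scale(residual, cov_change):
    """Two-input Mamdani-style inference returning a multiplicative factor for
    the next-step noise covariance. Inputs are a normalized state residual and a
    normalized covariance change; sets and rules are illustrative assumptions."""
    # Fuzzify both inputs into {small, medium, large}.
    sets = {"S": (0.0, 0.0, 0.5), "M": (0.0, 0.5, 1.0), "L": (0.5, 1.0, 1.0)}
    mu_r = {k: tri(residual, *v) for k, v in sets.items()}
    mu_c = {k: tri(cov_change, *v) for k, v in sets.items()}
    # Rule table: antecedents (residual, change) -> singleton output factor.
    rules = {("S", "S"): 0.8, ("S", "M"): 0.9, ("S", "L"): 1.0,
             ("M", "S"): 0.9, ("M", "M"): 1.0, ("M", "L"): 1.1,
             ("L", "S"): 1.0, ("L", "M"): 1.1, ("L", "L"): 1.2}
    # Min activation with weighted-average defuzzification over singleton
    # consequents (a common simplification of full Mamdani centroid defuzzification).
    num = den = 0.0
    for (r_lab, c_lab), out in rules.items():
        w = min(mu_r[r_lab], mu_c[c_lab])
        num += w * out
        den += w
    return num / den if den > 0 else 1.0

R2 = np.eye(6) * 0.01
R2_next = mamdani_scale(residual=0.7, cov_change=0.3) * R2  # adapted covariance for time k+1
```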
The system for robot multi-sensor fusion sensing and space positioning of the present invention comprises a data acquisition module, a calibration and calibration module, a correction module, an enhancement and repair module, a pre-fusion module and a fusion positioning module;
the data acquisition module is configured to acquire a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
the calibration and calibration module is configured to calibrate and calibrate the structured light depth camera, the MARG sensor and the joint encoder;
the correction module is configured to correct the robot multi-sensor data sequence based on the calibration and calibration results;
the enhancement and repair module is configured to enhance and repair the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
the pre-fusion module is configured to perform pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence, to perform pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence, and to perform pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence;
the fusion positioning module is configured to perform fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, so as to obtain a pose sequence and a spatial coordinate sequence of the robot in the environment.
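Purely as a structural illustration of how the six modules above could be wired together, the following Python sketch uses hypothetical class and method names; it is not part of the patented system.

```python
class FusionLocalizationPipeline:
    """Illustrative wiring of the modules described above; all names are hypothetical."""

    def __init__(self, acquisition, calibration, correction, enhancement,
                 pre_fusion, fusion_localization):
        self.acquisition = acquisition                  # data acquisition module
        self.calibration = calibration                  # calibration and calibration module
        self.correction = correction                    # correction module
        self.enhancement = enhancement                  # enhancement and repair module
        self.pre_fusion = pre_fusion                    # pre-fusion module
        self.fusion_localization = fusion_localization  # fusion positioning module

    def step(self):
        raw = self.acquisition.read()                   # vision, MARG, encoder sequences
        params = self.calibration.parameters()
        corrected = self.correction.apply(raw, params)
        corrected["vision"] = self.enhancement.apply(corrected["vision"])
        pose_visual, attitude, pose_joint = self.pre_fusion.run(corrected)
        return self.fusion_localization.run(pose_visual, pose_joint, attitude)
```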
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that the system for robot multi-sensor fusion sensing and space positioning provided in the foregoing embodiment is illustrated only by the division into the above functional modules; in practical applications, the functions may be allocated to different functional modules as needed, that is, the modules or steps in the embodiment of the present invention may be further decomposed or combined. For example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps and are not to be construed as unduly limiting the present invention.
A storage device according to a third embodiment of the present invention stores a plurality of programs, which are suitable for being loaded and executed by a processor to implement the above-mentioned method for integrating sensing and spatial localization of multiple sensors in a robot.
A processing apparatus according to a fourth embodiment of the present invention includes a processor and a storage device; the processor is adapted to execute various programs; the storage device is adapted to store a plurality of programs; and the programs are adapted to be loaded and executed by the processor to implement the above-mentioned method for robot multi-sensor fusion sensing and space positioning.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those skilled in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the programs corresponding to the software modules and method steps may be stored in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
A system for robot multi-sensor fusion sensing and space positioning according to a fifth embodiment of the present invention, as shown in Fig. 7, includes a structured light depth camera 1, a MARG sensor 2, joint encoders 3, a distributed adjustable light supplement device 4, and a positioning module.
The structured light depth camera, the MARG sensor, the joint encoder, the distributed adjustable light supplement device and the positioning module are arranged on the robot.
The distributed adjustable light supplement device 4 is arranged around the color camera 5 in the structured light depth camera 1 to provide a necessary supplementary light source for the color camera 5. To ensure matching of the motion state data, the visual data and the joint angular displacement data, the center of the structured light depth camera 1, the center of the MARG sensor 2 and the center of the end joint 6 lie on the same axis on the sensor mount. Depending on the required degrees of freedom of motion of the robot, the number of joint encoders 3 is not limited to that shown in Fig. 7; in general, the number of joint encoders 3 equals the number of required degrees of freedom of motion.
The structured light depth camera is used for collecting a color image sequence and a depth image sequence;
the MARG sensor is used for acquiring a triaxial acceleration sequence, an angular velocity sequence and a geomagnetic field intensity sequence;
the joint encoder is used for acquiring an angular displacement data sequence of the robot joint;
the distributed adjustable light supplement device is used for supplementing light to the environment where the robot is located;
the positioning module is used for performing data enhancement and restoration, data pre-fusion and data fusion positioning through the robot multi-sensor fusion sensing and space positioning method according to the data acquired by the structured light depth camera, the MARG sensor and the joint encoder, and acquiring a pose sequence and a space coordinate sequence of the robot in the environment.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (11)

1. A method for robot multi-sensor fusion sensing and space positioning, characterized by comprising the following steps:
step S10, acquiring a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
step S20, calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder, and correcting the robot multi-sensor data sequence based on the calibration and calibration result;
step S30, enhancing and repairing the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
step S40, performing pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence; performing pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence; performing pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence;
and step S50, carrying out fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, and obtaining the pose sequence and the spatial coordinate sequence of the robot in the environment.
2. The method of claim 1, wherein the sequence of visual data comprises a sequence of color maps and a sequence of depth maps acquired by a depth camera; the motion state data sequence comprises a triaxial acceleration sequence, an angular velocity sequence and a geomagnetic field intensity sequence which are acquired by a MARG sensor; the angular displacement data sequence comprises an angular displacement data sequence of the robot joint acquired by the joint encoder.
3. The method of claim 2, wherein calibrating and calibrating the structured light depth camera, the MARG sensor and the joint encoder comprises:
the calibration and calibration of the structured light depth camera comprise color camera calibration, infrared camera calibration, depth map drift correction and color map and depth map registration;
the calibration and calibration of the MARG sensor comprise acceleration zero drift calibration and magnetic field ellipsoid fitting calibration;
the calibration and calibration of the joint encoder comprises the registration of an electrical origin of the encoder and a mechanical origin of the joint rotating mechanism.
4. The method of robotic multi-sensor fusion perception and spatial localization of claim 2, wherein the pre-fusion processing of the enhanced and restored visual data sequence comprises:
step S411, extracting ORB two-dimensional characteristic points of a frame of color image in the enhanced and repaired visual data sequence, and extracting depth measurement values corresponding to the ORB two-dimensional characteristic points in a depth image registered with the current frame color image;
step S412, if the depth measurement value can be extracted, the ORB two-dimensional feature point is used as an ORB three-dimensional feature point; otherwise, it remains an ORB two-dimensional feature point;
step S413, matching and tracking all feature points of the current frame with all feature points of a previous frame of the current frame to obtain feature point pairs;
step S414, if the feature points in the feature point pairs are all ORB three-dimensional feature points, obtaining a pose transformation matrix between the two frames through an ICP (iterative closest point) algorithm; otherwise, obtaining the pose transformation matrix between the two frames through a PnP algorithm;
step S415, judging whether tracking of the ORB three-dimensional feature points is lost; if not lost, directly adding the camera pose transformation matrix obtained by tracking as a new key frame into the existing key frame sequence; if lost, calling the pre-fusion measurement values of the joint encoder and the MARG sensor corresponding to the current frame as the camera pose transformation matrix of the initial key frame in a new key frame sequence;
step S416, performing pose graph optimization on the obtained key frame sequence to obtain a camera pose sequence to be optimized, and performing closed-loop correction on the camera pose sequence to be optimized to obtain a first pre-fusion pose sequence.
5. The method of claim 2, wherein the pre-fusion processing of the modified motion state data sequence comprises:
step S421, acquiring a first state quantity and a first control quantity at the (k-1)-th time; the first state quantity comprises the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system and the accumulated angular velocity drift values of the MARG sensor on the X, Y, Z axes; the first control quantity comprises the X, Y, Z axis angular velocity true values;
step S422, based on the first state quantity and the first control quantity at the (k-1)-th time, obtaining the predicted value x̂1(k) of the first state quantity at the k-th time according to a first state transition function f1[x1(k-1),u1(k-1)];
the first state transition function f1[x1(k-1),u1(k-1)] is given by the expression reproduced as an image in the original, wherein q0, q1, q2, q3 are the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system, bgx, bgy, bgz are the accumulated angular velocity drift values of the MARG sensor on the X, Y, Z axes, ωx, ωy, ωz are the X, Y, Z axis angular velocity true values, and T is the sampling period of the MARG sensor;
step S423, based on the extended Kalman filter algorithm, obtaining the pre-fused first state quantity x1(k) at the k-th time through the first observed quantity at the k-th time, the first observation function h1, the first process covariance matrix Q1, the first noise covariance matrix R1 and the predicted value x̂1(k) of the first state quantity at the k-th time; the first state quantities pre-fused at the k-th time form the pre-fusion attitude sequence; the first observed quantity z1(k) at the k-th time is the three-axis acceleration sequence [aAx aAy aAz]T and the geomagnetic field intensity sequence [hMx hMy hMz]T acquired by the MARG sensor;
step S4231, obtaining the first process covariance matrix Q1 and the first noise covariance matrix R1, and obtaining the first observation function h1 based on the first observed quantity z1(k); the expressions of h1, Q1 and R1 are reproduced as images in the original, wherein aAx, aAy, aAz are the three-axis acceleration sequence acquired by the MARG sensor, hMx, hMy, hMz are the geomagnetic field intensity sequence acquired by the MARG sensor, g is the local gravitational acceleration value, Hx, Hy, Hz are the components of the magnetic field intensity along the three coordinate axes of the geographic coordinate system, Q1 is determined by the distribution variance of the quaternion values of the rotation matrix from the MARG sensor carrier coordinate system to the geographic coordinate system and the distribution variance of the accumulated angular velocity drift values, and R1 is determined by the distribution variances of the three-axis acceleration sequence and the geomagnetic field intensity sequence acquired by the MARG sensor;
step S4232, based on the parameters acquired in step S4231 and the predicted value x̂1(k) of the first state quantity at the k-th time, obtaining the pre-fused first state quantity x1(k) at the k-th time according to the update formula reproduced as an image in the original, wherein P1,k-1 is the first covariance matrix at the (k-1)-th time, P1,k is the updated first covariance matrix at the k-th time obtained from P1,k-1 after the pre-fusion operation at the k-th time, I7×7 is the 7×7 identity matrix, and x1(k) is the first state quantity pre-fused at the k-th time;
step S4233, the first state quantities pre-fused at the k-th time form the pre-fusion attitude sequence.
6. The method of claim 2, wherein the pre-fusion processing of the modified angular displacement data sequence comprises:
step S431, acquiring the length of a connecting rod between adjacent joints of the robot, the Z-axis offset distance and the Z-axis included angle of the adjacent joints according to a pre-designed mechanism model of the robot; the Z axis is a joint rotation axis or a joint moving direction; the angular displacement data sequence is a sequence of rotational angles about a Z-axis;
step S432, based on the data obtained in step S431, combining the pose transformation matrices ^(n-1)Tn of the n-th joint of the robot relative to the (n-1)-th joint to obtain the second pre-fusion pose sequence.
7. The method of claim 6, wherein step S432 comprises:
step S4321, based on the data obtained in step S431, calculating the pose transformation matrix ^(n-1)Tn of the n-th joint of the robot relative to the (n-1)-th joint:

$$^{n-1}T_n=\begin{bmatrix}\cos\theta_n & -\sin\theta_n\cos\alpha_n & \sin\theta_n\sin\alpha_n & a_n\cos\theta_n\\ \sin\theta_n & \cos\theta_n\cos\alpha_n & -\cos\theta_n\sin\alpha_n & a_n\sin\theta_n\\ 0 & \sin\alpha_n & \cos\alpha_n & d_n\\ 0 & 0 & 0 & 1\end{bmatrix}$$

wherein ^(n-1)Tn is the pose transformation matrix of the n-th joint of the robot relative to the (n-1)-th joint, an is the link length between the n-th and (n-1)-th joints of the robot, and dn, αn and θn are the Z-axis offset distance, the twist angle and the rotation angle about the Z axis between the n-th and (n-1)-th joints of the robot, respectively;
step S4322, based on the data obtained in step S431 and the pose transformation matrices ^(n-1)Tn obtained in step S4321, obtaining the second pre-fusion pose sequence:

Tend_effector = ^0T1(θ1,d1,a1,α1) · ^1T2(θ2,d2,a2,α2) · … · ^(n-1)Tn(θn,dn,an,αn)

wherein Tend_effector denotes the second pre-fusion pose sequence.
8. The method for fusion perception and spatial localization of multiple sensors in robot according to claim 2, wherein step S50 includes:
step S51, acquiring a second state quantity and a second control quantity at the moment k-1; the second state quantity comprises the position of the robot end effector and the attitude Euler angle of the robot end effector; the second control quantity includes a velocity, an acceleration, and an angular velocity of the robot end effector;
step S52, based on the second state quantity and the second control quantity at the (k-1)-th time, obtaining the predicted value x̂2(k) of the second state quantity at the k-th time according to a second state transition function f2[x2(k-1),u2(k-1)]; the second state transition function is given by the expression reproduced as an image in the original, wherein px, py, pz, θ, γ, ψ are the second state quantities at the k-th time, u2(k) is the second control quantity at the k-th time, p'x, p'y, p'z, θ', γ', ψ' are the second state quantities at the (k-1)-th time, u2(k-1) is the second control quantity at the (k-1)-th time, and T is the sampling period of the sensor;
step S53, acquiring the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence as the second observed quantity z2(k), performing data synchronization on the second observed quantity, and judging whether synchronized data can be formed; if so, the state update matrix is the absolute state update matrix, and if not, the state update matrix is the relative state update matrix;
step S54, based on the extended Kalman filter algorithm, obtaining the fused second state quantity x2(k) at the k-th time from the predicted value of the second observed quantity, the state update matrix, the second noise covariance matrix and the predicted value x̂2(k) of the second state quantity at the k-th time;
step S55, taking the deviation between the second state quantity x2(k) fused at the k-th time and the predicted value x̂2(k) of the second state quantity at the k-th time, together with the variation between the first noise covariance matrix and the second noise covariance matrix, as the input fuzzy variables of a two-dimensional Mamdani fuzzy method, taking the second noise covariance matrix as the output fuzzy variable, and adaptively adjusting the second noise covariance matrix at the (k+1)-th time by fuzzy inference;
step S56, sequentially outputting the components px, py, pz, θ, γ, ψ of the second state quantity fused at the k-th time as the six-degree-of-freedom pose sequence of the robot in the environment, wherein px, py, pz form the three-degree-of-freedom spatial coordinate sequence of the robot in the environment.
9. The method of claim 8, wherein the absolute state update matrix H1 and the relative state update matrix H2 are given by the expressions reproduced as images in the original, wherein N is the total number of synchronized sensors, including the structured light depth camera, the MARG sensor and the joint encoder, with N greater than 1 and less than or equal to Nmax, Nmax is the maximum number of sensors, pi is the total number of system state quantities observable by the i-th sensor, and qi is the total number of system control quantities observable by the i-th sensor.
10. The method of claim 9, wherein step S54 is performed according to the update formula reproduced as an image in the original, in which the state update matrix equals the absolute state update matrix H1 when the second observed quantity can form synchronized data and equals the relative state update matrix H2 when it cannot; P2,k-1 is the second covariance matrix at the (k-1)-th time, P2,k is the updated second covariance matrix at the k-th time obtained from P2,k-1 after the fusion operation at the k-th time, I12×12 and I21×21 are identity matrices of size 12×12 and 21×21, respectively, q is the set process covariance coefficient, and x2(k) is the second state quantity fused at the k-th time.
11. A system for fusing sensing and space positioning of multiple sensors of a robot is characterized by comprising a data acquisition module, a calibration and calibration module, a correction module, an enhancement and repair module, a pre-fusion module and a fusion positioning module;
the data acquisition module is configured to acquire a robot multi-sensor data sequence; the robot multi-sensor data sequence comprises a visual data sequence acquired by a structured light depth camera, a motion state data sequence acquired by a MARG sensor and an angular displacement data sequence acquired by a joint encoder;
the calibration and calibration module is configured to calibrate and calibrate the structured light depth camera, the MARG sensor and the joint encoder;
the correction module is configured to correct the robot multi-sensor data sequence based on the calibration and calibration results;
the enhancement and repair module is configured to enhance and repair the visual data sequence in the corrected robot multi-sensor data sequence by a preset depth camera visual enhancement method;
the pre-fusion module is configured to perform pre-fusion processing on the enhanced and repaired visual data sequence to obtain a first pre-fusion pose sequence, is further configured to perform pre-fusion processing on the corrected motion state data sequence to obtain a pre-fusion attitude sequence, and is further configured to perform pre-fusion processing on the corrected angular displacement data sequence to obtain a second pre-fusion pose sequence;
the fusion positioning module is configured to perform fusion positioning on the spatial state of the robot in the environment based on the first pre-fusion pose sequence, the second pre-fusion pose sequence and the pre-fusion attitude sequence, so as to obtain a pose sequence and a spatial coordinate sequence of the robot in the environment.
CN202011190385.4A 2020-10-30 2020-10-30 Method, system and device for fusing sensing and space positioning of multiple sensors of robot Active CN112388635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011190385.4A CN112388635B (en) 2020-10-30 2020-10-30 Method, system and device for fusing sensing and space positioning of multiple sensors of robot

Publications (2)

Publication Number Publication Date
CN112388635A CN112388635A (en) 2021-02-23
CN112388635B true CN112388635B (en) 2022-03-25

Family

ID=74598525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011190385.4A Active CN112388635B (en) 2020-10-30 2020-10-30 Method, system and device for fusing sensing and space positioning of multiple sensors of robot

Country Status (1)

Country Link
CN (1) CN112388635B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115700507B (en) * 2021-07-30 2024-02-13 北京小米移动软件有限公司 Map updating method and device
CN114618802B (en) * 2022-03-17 2023-05-05 国网辽宁省电力有限公司电力科学研究院 GIS cavity operation device and GIS cavity operation method
CN116793199B (en) * 2023-08-24 2023-11-24 四川普鑫物流自动化设备工程有限公司 Centralized multi-layer goods shelf four-way vehicle positioning system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222772A (en) * 2015-09-17 2016-01-06 泉州装备制造研究所 A kind of high-precision motion track detection system based on Multi-source Information Fusion
KR101772220B1 (en) * 2016-05-27 2017-08-28 한국과학기술원 Calibration method to estimate relative position between a multi-beam sonar and a camera
CN108981692A (en) * 2018-06-14 2018-12-11 兰州晨阳启创信息科技有限公司 It is a kind of based on inertial navigation/visual odometry train locating method and system
CN109887057A (en) * 2019-01-30 2019-06-14 杭州飞步科技有限公司 The method and apparatus for generating high-precision map
CN110826503A (en) * 2019-11-08 2020-02-21 山东科技大学 Closed pipeline human body detection method and system based on multi-sensor information fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10921801B2 (en) * 2017-08-02 2021-02-16 Strong Force loT Portfolio 2016, LLC Data collection systems and methods for updating sensed parameter groups based on pattern recognition

Also Published As

Publication number Publication date
CN112388635A (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112388635B (en) Method, system and device for fusing sensing and space positioning of multiple sensors of robot
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN109344882B (en) Convolutional neural network-based robot control target pose identification method
CN108510530B (en) Three-dimensional point cloud matching method and system
CN107941217B (en) Robot positioning method, electronic equipment, storage medium and device
CN111735479A (en) Multi-sensor combined calibration device and method
CN110456330B (en) Method and system for automatically calibrating external parameter without target between camera and laser radar
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN109631911B (en) Satellite attitude rotation information determination method based on deep learning target recognition algorithm
CN111486864B (en) Multi-source sensor combined calibration method based on three-dimensional regular octagon structure
CN113627473A (en) Water surface unmanned ship environment information fusion sensing method based on multi-mode sensor
CN111890373A (en) Sensing and positioning method of vehicle-mounted mechanical arm
CN110865650A (en) Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
CN111161337A (en) Accompanying robot synchronous positioning and composition method in dynamic environment
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111998862A (en) Dense binocular SLAM method based on BNN
CN107527366A (en) A kind of camera tracking towards depth camera
CN114693787A (en) Parking garage map building and positioning method and system and vehicle
CN116222543A (en) Multi-sensor fusion map construction method and system for robot environment perception
CN114758011B (en) Zoom camera online calibration method fusing offline calibration results
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN110515088B (en) Odometer estimation method and system for intelligent robot
CN110598370A (en) Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion
CN116182855B (en) Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment
CN115797490B (en) Graph construction method and system based on laser vision fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant