CN113516692A - Multi-sensor fusion SLAM method and device

Multi-sensor fusion SLAM method and device

Info

Publication number
CN113516692A
Authority
CN
China
Prior art keywords
camera
coordinate system
slam
map
frames
Prior art date
Legal status
Granted
Application number
CN202110541906.4A
Other languages
Chinese (zh)
Other versions
CN113516692B (en
Inventor
刘燕霖
陈敏鹤
金忠孝
Current Assignee
SAIC Motor Corp Ltd
Original Assignee
SAIC Motor Corp Ltd
Priority date
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd filed Critical SAIC Motor Corp Ltd
Priority to CN202110541906.4A priority Critical patent/CN113516692B/en
Priority claimed from CN202110541906.4A external-priority patent/CN113516692B/en
Publication of CN113516692A publication Critical patent/CN113516692A/en
Application granted granted Critical
Publication of CN113516692B publication Critical patent/CN113516692B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 Map- or contour-matching
    • G01C 21/32 Structuring or formatting of map data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5018 Thread allocation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Probability & Statistics with Applications (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a multi-sensor fusion SLAM method and device. To enlarge the observation field of view of the visual sensors and to avoid failures of feature point extraction caused by occlusion, sudden strong illumination, or regions without obvious texture, the invention introduces multiple cameras. At the same time, to eliminate the scale uncertainty of a monocular camera and the degradation caused by rapid rotation, the inertial observations of the IMU and the wheel speed meter are tightly coupled with the visual observations to obtain a more robust SLAM system, so that the multi-sensor fusion SLAM device has a wider field of view.

Description

Multi-sensor fusion SLAM method and device
Technical Field
The invention relates to the technical field of simultaneous localization and mapping, and in particular to a multi-sensor fusion SLAM method and device based on multiple cameras, an IMU and a wheel speed meter.
Background
Simultaneous Localization and Mapping (SLAM) is a technique by which a moving body, in an unknown environment and at an unknown position, takes the observations of its own sensors as input, outputs its own position and attitude in real time, and simultaneously builds a map incrementally from these poses. SLAM technology currently has important applications in fields such as autonomous driving, high-precision maps, robotics, unmanned aerial vehicles and AR/VR.
According to the main sensor used, SLAM can be divided into laser SLAM based on lidar and visual SLAM based on cameras such as monocular, binocular and RGB-D cameras. Laser SLAM is already relatively mature, but in view of the cost of large-scale mass-production applications, visual SLAM is still the mainstream of current research.
For visual SLAM, the measurement range of a binocular (stereo) camera is limited by its baseline and resolution, and a large amount of computation is needed to recover pixel depth, so its real-time performance is poor. A depth camera can obtain pixel depth directly, but it is expensive, has a small field of view and a narrow measurement range, is sensitive to light, and is currently used mainly indoors. The monocular camera has attracted wide attention for its small size, low cost and good real-time performance, but its drawback is that depth can only be determined through translation and the scale is uncertain, so the reconstructed trajectory differs from the true trajectory by a scale factor. Monocular visual SLAM currently has a number of excellent open-source frameworks, such as the ORB-SLAM series, LSD-SLAM, DSO and SVO.
The IMU and the wheel speed meter do not depend on the external environment; the velocity and angular velocity of the moving body can be obtained through pre-integration, and they can recover the absolute scale. Their high output frequency can provide prediction results between image frames, which narrows the search range for matching between two frames and avoids tracking loss caused by monocular degradation during rapid rotation. At the same time, however, the measurement errors of the IMU and the wheel speed meter are continuously amplified through accumulation; coupling them with the visual observations in the optimization can reduce the divergence and accumulated error caused by bias, making the result more accurate. At present, SLAM open-source frameworks fusing monocular vision with an IMU mainly include VINS, OKVIS, VINS-Fusion and the like; these schemes require accurate hardware synchronization and strict initialization, in which scale, bias and gravity direction must be estimated. In the invention, the wheel speed meter is introduced to obtain the velocity directly, thereby simplifying the initialization process.
In practical applications, observation from a single camera is often limited in field of view; when the field of view is occluded, strongly illuminated or contains no obvious texture, feature points are lost and the system fails. A multi-camera system provides observations from multiple viewing angles and widens the field of view; when one camera fails, tracking and mapping can continue through the other cameras, which greatly improves the robustness of the system. MultiCol-SLAM provides a multi-camera purely visual SLAM method based on a fisheye camera model; it extracts feature points and descriptors directly from several fisheye images, which is prone to mismatching and has poor real-time performance. Meanwhile, the strongly distorted edges of the fisheye images are discarded, which makes extraction, matching and optimization of point and line features difficult; it also requires overlapping fields of view between cameras to determine the scale, which limits its application. The cubemap-based approach corrects a single fisheye image with a perspective cube model and makes full use of image information, but it does not overcome the limitation of single-camera vision, and a monocular camera still cannot recover the absolute scale.
Disclosure of Invention
In view of this, embodiments of the present invention provide a SLAM method based on the multi-sensor fusion of multiple cameras, an IMU and a wheel speed meter, so as to enlarge the observation field of view while providing an absolute scale.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a multi-sensor fused SLAM method, comprising:
calibrating internal parameters of a camera in the SLAM system;
calibrating external parameters of a camera, a wheel speed meter and an IMU in the SLAM system;
giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU;
controlling the loosely coupled initialization of the multiple cameras and the wheel speed meter in the SLAM system;
acquiring the initialized map points and poses with a tracking thread, calculating and optimizing the pose of the current key frame based on the image data, IMU data and wheel speed meter data acquired from the reference frame to the current key frame, and sending the key frame to a local mapping thread;
acquiring the key frame sent by the tracking thread with a local mapping thread, triangulating new map points based on the key frame and fusing them, and screening out and eliminating redundant key frames and redundant map points;
and performing loop detection on the key frames screened by the local mapping thread with a loop detection thread, and correcting the poses of the screened key frames and the positions of the map points according to the adjustment of the loop frame.
Optionally, in the multi-sensor fusion SLAM method, when the camera in the SLAM system is a perspective camera, the calibrated internal parameters of the camera in the SLAM system include, but are not limited to, one or a combination of the focal length, the perspective image center coordinates and the distortion coefficients; when the camera in the SLAM system is a fisheye camera, the calibrated internal parameters of the camera in the SLAM system include, but are not limited to, one or a combination of the forward mapping coefficients, the inverse mapping coefficients, the fisheye image center coordinates and the affine transformation coefficients.
Optionally, in the aforementioned multi-sensor fusion SLAM method, the step of giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU includes:
calculating, based on the internal and external parameters, the pose transformation matrix from the world coordinate system to the body coordinate system, the pose transformation matrix from the body coordinate system to the camera coordinate system, and the transformation matrix from the camera coordinate system of the fisheye camera to the camera coordinate system of the jth virtual perspective camera;
taking the formula
p = Kj · Rcjci · Tcib · Tbw · P
as the camera projection model, where p(x, y) is the pixel point obtained by projecting a map point P(X, Y, Z) in the world coordinate system into the pixel coordinate system of camera ci, Kj is the perspective transformation matrix of the jth virtual perspective camera, Rcjci is the transformation matrix from the camera coordinate system of fisheye camera ci to the camera coordinate system of the jth virtual perspective camera, Tcib is the pose transformation matrix from the body coordinate system to the camera coordinate system of the ith camera ci, Tbw is the pose transformation matrix from the world coordinate system to the body coordinate system, and P is the map point P(X, Y, Z) in the world coordinate system.
Optionally, in the aforementioned method for multi-sensor fusion SLAM, the controlling initialization of loose coupling between the multiple cameras and the wheel speed meters in the SLAM system includes:
respectively calculating the essential matrix of each camera through epipolar constraint, and selecting the camera corresponding to the matrix with the most interior points as an initialization camera;
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and the map points based on the scale factor, and completing the initialization of the IMU and the wheel speed meter.
Optionally, in the multi-sensor fusion SLAM method, performing loop detection on the key frames screened by the local mapping thread with a loop detection thread, and correcting the poses of the screened key frames and the positions of the map points according to the adjustment of the loop frame, includes:
taking the key frames whose number of words shared with the current key frame exceeds a threshold and whose similarity score is greater than the lowest score of the adjacent key frames as candidate frames, and selecting the key frames with continuity from the candidate frames as loop candidate frames, where the lowest score of the adjacent key frames refers to: the lowest similarity between the key frames adjacent to the current key frame and the current key frame;
through matching of bag-of-words vector acceleration descriptors, Sim3 transformation of a current key frame and a closed-loop candidate frame is calculated by using RANSAC, and more matched map points are obtained by projecting map points according to the preliminarily obtained Sim3 transformation;
taking the Sim3 transformation and map points corresponding to the two key frames as vertexes, traversing the observation of all map points, adding forward and reverse projected binary edges, fixing the pose of the map points, and optimizing the Sim3 transformation of the two frames by minimizing the reprojection error;
according to the optimized Sim3, projecting map points of the closed-loop key frame and the key frames connected with the closed-loop key frame to the current key frame for matching to obtain more matching points;
and adjusting the key frame poses and the positions of the map points according to the solved and optimized Sim3 transformation.
A multi-sensor fused SLAM device, comprising:
the calibration unit is used for calibrating internal parameters of the camera in the SLAM system; calibrating external parameters of a camera, a wheel speed meter and an IMU in the SLAM system;
the projection calculation unit is used for giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU;
the initialization unit is used for controlling the initialization of loose coupling of a plurality of cameras and wheel speed meters in the SLAM system;
the tracking thread unit is used for acquiring the initialized map points and poses with a tracking thread, calculating and optimizing the pose of the current key frame based on the image data, IMU data and wheel speed meter data acquired from the reference frame to the current key frame, and sending the key frame to a local mapping thread;
the local map building thread unit is used for inserting a key frame sent by a tracking thread by adopting a local map building thread, generating and fusing new map points based on triangulation of the key frame, and screening and removing redundant key frames and redundant map points;
and the loop detection unit is used for performing loop detection on the key frame screened by the local mapping thread by adopting a loop detection thread and correcting the pose and the position of the map point of the screened key frame according to the adjustment of the loop frame.
Optionally, in the multi-sensor-fused SLAM device, when the camera in the SLAM system is a perspective camera, the calibrated reference of the camera in the SLAM system includes, but is not limited to, one or more combinations of a focal length, a perspective image center coordinate, and a distortion coefficient; when the camera in the SLAM system is a fisheye camera, the internal parameters of the camera in the calibrated SLAM system include, but are not limited to, one or more of a forward mapping coefficient, a reverse mapping coefficient, fisheye image center coordinates, and affine transformation coefficients.
Optionally, in the aforementioned multi-sensor-fused SLAM device, when a camera projection model is given based on the calibration results of the internal parameters of the camera and the external parameters of the camera, the wheel speed meter, and the IMU, the projection calculation unit is specifically configured to:
based on the internal reference and the external reference, calculating to obtain a posture transformation matrix from a world coordinate system to a body coordinate system, a posture transformation matrix from the body coordinate system to a camera coordinate system, and a transformation matrix from a camera coordinate system of the fisheye camera to a camera coordinate system of the jth virtual perspective camera;
calculating the camera projection model according to the formula
p = Kj · Rcjci · Tcib · Tbw · P
where p(x, y) is the pixel point obtained by projecting a map point P(X, Y, Z) in the world coordinate system into the pixel coordinate system of camera ci, Kj is the perspective transformation matrix of the jth virtual perspective camera, Rcjci is the transformation matrix from the camera coordinate system of fisheye camera ci to the camera coordinate system of the jth virtual perspective camera, Tcib is the pose transformation matrix from the body coordinate system to the camera coordinate system of the ith camera ci, and Tbw is the pose transformation matrix from the world coordinate system to the body coordinate system.
Optionally, in the aforementioned multi-sensor-fused SLAM device, when the initialization unit controls the initialization of the loose coupling between the multiple cameras and the wheel speed meters in the SLAM system, the initialization unit is specifically configured to:
respectively calculating the essential matrix of each camera by epipolar constraint, and selecting the camera corresponding to the matrix with the most interior points as an initialization camera;
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and the map points based on the scale factor, and completing the initialization of the IMU and the wheel speed meter.
Optionally, in the multi-sensor fusion SLAM device, the loop detection unit is configured to perform loop detection on the key frames screened by the local mapping thread with a loop detection thread and to correct the poses of the screened key frames and the positions of the map points according to the adjustment of the loop frame, where the loop detection thread is specifically configured for:
taking the key frames whose number of words shared with the current key frame exceeds a threshold and whose similarity score is greater than the lowest score of the adjacent key frames as candidate frames, and selecting the key frames with continuity from the candidate frames as loop candidate frames, where the lowest score of the adjacent key frames refers to: the lowest similarity between the key frames adjacent to the current key frame and the current key frame;
through matching of bag-of-words vector acceleration descriptors, Sim3 transformation of a current key frame and a closed-loop candidate frame is calculated by using RANSAC, and more matched map points are obtained by projecting map points according to the preliminarily obtained Sim3 transformation;
taking the Sim3 transformation and map points corresponding to the two key frames as vertexes, traversing the observation of all map points, adding forward and reverse projected binary edges, fixing the pose of the map points, and optimizing the Sim3 transformation of the two frames by minimizing the reprojection error;
according to the optimized Sim3, projecting map points of the closed-loop key frame and the key frames connected with the closed-loop key frame to the current key frame for matching to obtain more matching points;
and adjusting the key frame poses and the positions of the map points according to the solved and optimized Sim3 transformation.
Based on the above technical solutions, the scheme provided in the embodiments of the present invention includes: calibrating the internal parameters of the cameras in the SLAM system; calibrating the external parameters of the cameras, the wheel speed meter and the IMU in the SLAM system; setting the camera projection model from the calibrated internal and external parameters; initializing the loose coupling of the multiple cameras and the wheel speed meter in the SLAM system; using a tracking thread to continuously calculate new poses from new image, IMU and wheel speed meter data, insert key frames and optimize their poses; using a local mapping thread to process the key frames sent from the tracking thread, triangulate and fuse new map points, and inspect and eliminate redundant map points and key frames; and finally using a loop detection thread to perform loop detection on the key frames sent from the local mapping thread. This overcomes the limitations of the traditional single-camera purely visual SLAM system, in which the absolute scale cannot be given and the single field of view is limited.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a multi-sensor fusion SLAM method disclosed in an embodiment of the present application;
fig. 2 is a schematic view of an approximate circular motion of a moving body at a time t to t + Δ t according to an embodiment of the present application;
fig. 3 is a schematic diagram of the graph optimization structure of a multi-sensor fusion SLAM system provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a multi-sensor fused SLAM device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application discloses a SLAM method for multi-sensor fusion, and referring to FIG. 1, the method can include:
step S100: calibrating parameters;
the calibration parameters in the step mainly refer to calibration internal parameters and calibration external parameters, wherein the calibration internal parameters refer to calibration of internal parameters of a camera in the SLAM system. In this example, the SLAM system supports a multi-camera system. That is, the SLAM system supports n (n.gtoreq.1) cameras (the image pickupThe machine can be a perspective camera or a fisheye camera), the cameras are rigidly arranged at any angle on a moving body, whether the fields of view of the cameras are overlapped or not is not required, taking a two-way fisheye camera as an example, the fisheye camera internal parameter calibration takes an omnidirectional visual model based on Taylor series proposed by Scaramuzza as an example, and the internal parameter comprises a forward mapping coefficient a0,a1,...,aNInverse mapping coefficient p0,p1,...,pNCenter coordinate u of fisheye image0And v0Affine transformation coefficients c, d, e, of course, a person skilled in the art can select the camera parameters to be calibrated based on actual requirements.
Calibrating the external parameters refers to calibrating the external parameters of the cameras, the wheel speed meter and the IMU in the SLAM system, the purpose being to provide the relative poses of the rigid-body transformations among the cameras, the wheel speed meter and the IMU through the calibrated external parameters. In this example, the center point between the two wheel speed meters in the SLAM system is set as the coordinate origin of the Body coordinate system (Body for short), and the pose of the Body coordinate system can be regarded as the absolute pose of the moving body. The position and orientation of the Body coordinate system can be chosen by those skilled in the art based on actual requirements. Since the subsequent optimization is tightly coupled, the IMU and wheel speed meter coordinate systems are made to coincide with the Body coordinate system, i.e. the advancing direction of the moving body is the positive x-axis direction of the IMU and the gravity direction is the negative z-axis direction of the IMU. The SLAM system thus involves a World coordinate system (World), a Body coordinate system (Body) and several Camera coordinate systems (Camera). External parameter calibration requires aligning the IMU/wheel-speed-meter reference frame with the Body coordinate system and giving the pose transformation Tcib from the Body coordinate system to the camera coordinate system of the ith camera ci.
Step S200: a camera projection model is given;
in this step, a camera projection model is given based on the calibration results of the internal and external parameters.
Specifically, the process of calculating the projection model according to the calibration results of the internal parameters and the external parameters includes:
Given the pose transformation Tbw from the world coordinate system to the Body coordinate system and the pose transformation Tcib from the Body coordinate system to the camera coordinate system of the ith camera ci, the transformation from a three-dimensional point in the world coordinate system to a two-dimensional point in the pixel coordinate system can be obtained from Tbw and Tcib. Let P(X, Y, Z) be a map point in the world coordinate system and let p(x, y) be the pixel point obtained by projecting it into the pixel coordinate system of camera ci; then the following relationship holds:
p = πci(Tcib · Tbw · P)
where πci denotes the projection model of camera ci. For a perspective camera, πci can be modeled with a projection matrix Kn and the distortion coefficients k1, k2, p1, p2. For a fisheye camera, πci can be modeled with the calibrated general polynomial model.
In particular, if a fisheye camera is used as the visual input, the field of view can be greatly expanded, but the fisheye image is strongly distorted, which causes problems for subsequent feature extraction and matching. Therefore, a multi-pinhole perspective projection model can be used as the camera projection model, and the image of the fisheye camera is projected onto the imaging planes of pinhole cameras to obtain a corrected fisheye image. Specifically, the multi-pinhole perspective projection model consists of N (N ≥ 2) virtual perspective cameras whose optical centers coincide with the optical center of the fisheye camera and whose imaging planes form certain included angles with one another. Let Rcjci denote the transformation from the camera coordinate system of fisheye camera ci to the camera coordinate system of the jth virtual perspective camera; then the transformation from a three-dimensional point P(X, Y, Z) in the world coordinate system to a two-dimensional point p(x, y) in the pixel coordinate system of the jth virtual perspective camera is:
p = Kj · Rcjci · Tcib · Tbw · P
which is the camera projection model, where Kj is the perspective transformation matrix of the jth virtual perspective camera.
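As a concrete illustration of the projection chain above, a minimal numpy sketch is given below; it is not the patented implementation, the function name is illustrative, and homogeneous 4x4 transforms are assumed for Tbw and Tcib.

```python
import numpy as np

def project_to_virtual_cam(P_w, T_bw, T_cib, R_cjci, K_j):
    """Project a world map point P(X, Y, Z) into the pixel frame of the j-th
    virtual perspective camera attached to fisheye camera c_i.
    T_bw, T_cib: 4x4 homogeneous pose transforms; R_cjci: 3x3; K_j: 3x3."""
    P_w_h = np.append(P_w, 1.0)                 # homogeneous world point
    P_ci = (T_cib @ T_bw @ P_w_h)[:3]           # point in fisheye camera c_i frame
    P_cj = R_cjci @ P_ci                        # rotate into the j-th virtual camera frame
    uvw = K_j @ P_cj                            # perspective projection
    return uvw[:2] / uvw[2]                     # pixel p(x, y)
```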
Step S300: initializing the loose coupling of a plurality of cameras and a wheel speed meter;
This step mainly performs the loosely coupled initialization of the multiple cameras and the wheel speed meter. When initialization starts, the SLAM system passes in the initial-frame and current-frame images of the multiple cameras; each frame contains several images from the multiple cameras, and the timestamps of these images are synchronized, i.e. they share the same timestamp. During the loosely coupled initialization, the gyroscope and wheel speed meter data of the SLAM system are aligned; assuming that the wheel speed meter is unbiased, the pre-integration between two adjacent frames of images can be obtained from the discrete motion equations of the inertial observations, so that the position and attitude of the moving body can be given based on the pre-integration. The loosely coupled initialization of the multiple cameras is the visual initialization; the initialization of the IMU is completed at the moment of visual initialization, the initial bias of the gyroscope is set to 0, and the bias is optimized afterwards. Visual initialization extracts and matches features between the initial frame and the current frame of each camera, and computes, through the epipolar constraint, the rotation R and translation t of the camera whose image yields the most inliers; that camera is taken as the Leading Camera (initialization camera), its motion is taken as the reference, the pose Twb of the moving body is obtained according to the extrinsic relationship, and triangulation is used to build the initialization map.
Since the R and t calculated during initialization have monocular scale uncertainty, the absolute scale of the initialized map and poses needs to be recovered from the observations of the wheel speed meter. The distance moved by the camera optical center between two frames obtained by the visual odometry in the SLAM system is Svo; this distance differs from the true distance in the world coordinate system by a scale factor. Since the wheel speed meter gives the velocity of the target object in the world coordinate system, it can provide the absolute scale. The displacement of the moving body along the x axis can be obtained by dead reckoning from the wheel speed meter pre-integration between the two frames; taking the displacement caused by rotation into account as well, the distance Svio moved by the camera optical center between the two frames is finally obtained. The true scale is therefore scale = Svio / Svo, from which the absolute scale of the poses and map points can be recovered and initialization is completed.
In particular, in the present solution, the observation equations of the inertial sensors in the SLAM system at time t are:
ω̃b(t) = ωb(t) + bg(t) + ηg(t)
ṽb(t) = vb(t) + ηv(t)
where ω̃b(t) is the angular velocity vector measured by the gyroscope, expressed in the Body coordinate system, bg(t) is the bias of the gyroscope, ηg(t) is the angular velocity noise, and ωb(t) is the true value of the angular velocity in the Body coordinate system; ṽb(t) is the velocity vector measured by the wheel speed meter, expressed in the Body coordinate system, ηv(t) is the velocity noise, and vb(t) is the true value of the velocity in the Body coordinate system. The discrete motion equations from time t to time t + Δt are:
Rwb(t + Δt) = Rwb(t) · Exp(ωb(t) Δt)
pwb(t + Δt) = pwb(t) + Rwb(t + Δt) · pb(t, t + Δt)
where Rwb(t + Δt) denotes the rotation matrix from the Body coordinate system at time t + Δt to the world coordinate system and Rwb(t) denotes the rotation matrix from the Body coordinate system at time t to the world coordinate system. The term ωb(t) Δt is the rotation vector of the Body coordinate system from time t to time t + Δt; it can be regarded as a Lie algebra element, and the exponential map Exp(ωb(t) Δt) gives the corresponding Lie group element, i.e. the rotation matrix of the Body coordinate system from time t + Δt to time t. It is assumed that the motion of the moving body between two frames satisfies a constant turn rate and velocity model (CTRV), i.e. it moves at a fixed turn rate and a constant speed, so that its trajectory is approximately a circular arc, as shown in fig. 2. Assuming the velocity is constant during this interval, pb(t, t + Δt) = vb(t) Δt denotes the displacement vector in the Body coordinate system from time t to time t + Δt, and Rwb(t + Δt) rotates this displacement vector from the Body coordinate system at time t + Δt to the world coordinate system; multiplying the rotation matrix and the displacement vector gives the displacement increment in the world coordinate system within Δt.
Substituting the observation equations into the discrete motion equations gives:
Rwb(t + Δt) = Rwb(t) · Exp((ω̃b(t) − bg(t) − ηg(t)) Δt)
pwb(t + Δt) = pwb(t) + Rwb(t + Δt) · (ṽb(t) − ηv(t)) Δt
Since the frequency of the inertial observations is much higher than the frequency of the visual observations, there are multiple inertial observations between the ith and jth image frames, and chaining the above recursion over them yields the pre-integration:
Rwb(j) = Rwb(i) · ∏_{k=i}^{j−1} Exp((ω̃b(k) − bg(k)) Δt)
pwb(j) = pwb(i) + ∑_{k=i}^{j−1} Rwb(k + 1) · ṽb(k) Δt
The position and attitude of the moving body can thus be given by the pre-integration. The initialization of the IMU is completed at the moment of visual initialization; the initial bias of the gyroscope is set to 0, and the bias is optimized afterwards.
The pose of the Body coordinate system is first given by visual initialization. Features of the initial-frame and current-frame images of the same camera are extracted and matched, and random sample consensus (RANSAC) is iterated X times, where X may be 200; in each iteration, eight (or another number of) pairs of feature points are selected to compute an essential matrix E through the epipolar constraint, and the epipolar-line errors of all feature point pairs under this E are computed. If the error of a pair is smaller than a given threshold, the pair is judged to be an inlier and is recorded. At the same time, the accumulated score of all feature point pairs under the essential matrix E is recorded, where a larger error gives a lower score, and the essential matrix E with the highest score is recorded as the initialization essential matrix Ei of camera ci. Singular value decomposition (SVD) is performed on Ei to recover the camera motion R and t, giving four possible combinations. The matched inliers of the camera are recovered by triangulation; points outside the field of view, points with negative depth and points with excessive reprojection error are removed, and finally the R and t that triangulate the most points are selected as the motion of the camera. The camera with the largest number of reconstructed points is taken as the Leading Camera (initialization camera); taking its motion as the reference, the pose of the moving body, i.e. the pose Twb of the Body coordinate system, is obtained according to the extrinsic relationship, and the initialization map is established.
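A minimal sketch of the leading-camera selection is shown below, using OpenCV's RANSAC essential-matrix estimation as a stand-in for the per-pair epipolar scoring described above; the inlier count returned by RANSAC replaces the accumulated score, so this is an approximation of the procedure, not the procedure itself.

```python
import cv2
import numpy as np

def pick_leading_camera(matched_points, K_list):
    """matched_points[i] = (pts_init, pts_cur): Nx2 arrays of matched (undistorted)
    feature points for camera i; K_list[i] is the 3x3 intrinsic matrix."""
    best = None
    for i, (pts0, pts1) in enumerate(matched_points):
        E, inlier_mask = cv2.findEssentialMat(pts0, pts1, K_list[i],
                                              method=cv2.RANSAC, prob=0.999, threshold=1.0)
        if E is None:
            continue
        n_inliers = int(inlier_mask.sum())
        # recover R, t and keep the camera whose epipolar geometry explains the most points
        _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K_list[i], mask=inlier_mask)
        if best is None or n_inliers > best[1]:
            best = (i, n_inliers, R, t)
    return best   # (leading camera index, inlier count, R, t)
```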
Since the R and t calculated by purely visual initialization have monocular scale uncertainty, the absolute scale of the initialized map and poses needs to be recovered from the observations of the wheel speed meter. First, global optimization is performed on the positions of the map points and on the Body-coordinate-system poses of the initial frame and the current frame, so that the reprojection error of the map points projected onto the pixel plane through these poses is minimized. Taking the camera coordinate system of the Leading Camera in the initial frame as the world coordinate system, the absolute pose of the moving body in the initial frame (i.e. the pose of the Body coordinate system) is
T0wb = [R0wb | t0wb]
where R0wb denotes the rotation and t0wb denotes the translation (i.e. the coordinates of the initial-frame camera optical center in the world coordinate system). The absolute pose of the moving body in the current frame is
T1wb = [R1wb | t1wb]
where R1wb denotes the rotation and t1wb denotes the translation (i.e. the coordinates of the current-frame camera optical center in the world coordinate system). The distance moved by the camera optical center between the two frames obtained by the visual odometry is
Svo = ‖t1wb − t0wb‖
but the scale of this distance is inaccurate.
Since the wheel speed meter gives the velocity in the world coordinate system, it can provide the absolute scale. Optionally, the inertial observations of the initial frame and the current frame are used here: the distance moved by the camera optical center can be decomposed into a dead-reckoning component along the x axis and a rotation-induced component along the y axis. First, the distance Sodom moved by the coordinate origin of the wheel speed meter coordinate system (i.e. the Body coordinate system) is obtained from the pre-integration of the wheel speed meter between the two frames. Because the interval between the two frames is short, Sodom is approximately regarded as the translation of the moving body along the x axis. Meanwhile, owing to the rotation between the two frames, the camera optical center approximately translates along the y axis, and the translation of the camera optical center caused by this rotation is calculated as follows. The extrinsic transformation Tcb between the Leading Camera and the Body coordinate system is known, and the pre-calibrated extrinsics are considered unchanged during initialization. Let ob0 denote the coordinates of the initial-frame camera optical center in the initial-frame Body coordinate system, and let ob1 denote the coordinates of the current-frame camera optical center in the initial-frame Body coordinate system after the rotation; the distance moved by the camera optical center owing to the rotation is then Srot = ‖ob1 − ob0‖. Combining the movement given by the wheel speed meter with the movement caused by the rotation yields the distance Svio moved by the camera optical center between the two frames. The true scale is therefore scale = Svio / Svo, from which the absolute scale of the poses and map points can be recovered and the initialization of the SLAM system is completed.
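A sketch of the scale recovery might look as follows; combining the wheel-odometry component and the rotation-induced component in quadrature is an assumption of this sketch (the two components are treated as orthogonal under the CTRV approximation), and the variable names simply mirror the symbols above.

```python
import numpy as np

def recover_scale(t_c0_w, t_c1_w, S_odom, o_b0, o_b1):
    """t_c0_w, t_c1_w : camera optical centers (world frame) from visual initialization.
    S_odom          : wheel-speed-meter pre-integrated travel of the Body origin.
    o_b0, o_b1      : optical center in the initial-frame Body frame before/after rotation."""
    S_vo = np.linalg.norm(t_c1_w - t_c0_w)        # up-to-scale visual displacement
    S_rot = np.linalg.norm(o_b1 - o_b0)           # displacement caused by rotation
    S_vio = np.hypot(S_odom, S_rot)               # combine the two components (assumption)
    return S_vio / S_vo                           # scale = Svio / Svo
```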
Specifically, in a technical solution disclosed in another embodiment of the present application, the controlling initialization of loose coupling between a multi-camera and a wheel speed meter in the SLAM system includes:
respectively calculating the essential matrix of each camera through epipolar constraint, and selecting the camera corresponding to the matrix with the most interior points as an initialization camera;
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and the map points based on the scale factor, and completing the initialization of the IMU and the wheel speed meter.
Step S400: tracing threads (Tracking).
After obtaining the initialized map points and poses, the SLAM system needs to continuously compute new poses from these data, insert keyframes, and optimize the keyframe poses as new image, IMU, and wheel speed meter data are imported. In this process, the tracking thread first uses the visual and inertial observations of the previous frame or previous key frame to calculate the pose between the two frames (previous frame or previous key frame and current frame).
If the time elapsed since the last relocation does not exceed a preset duration, the pose is solved using the multi-camera visual observations (Track Reference Key Frame). The feature points of the two frames of images are matched; specifically, the descriptors of the feature points are converted into bag-of-words vectors for feature matching between the current frame and the reference frame, and mismatches are eliminated by a distance threshold, a ratio threshold and angle voting. After the feature points of the two images are matched, the pose of the reference frame is used as the initial value of the pose of the current frame; the map points, the pose of the moving body and the extrinsics of the multiple cameras are used as vertices, each observation of a map point is used as an edge, and the pose of the moving body of the current frame is optimized by minimizing the reprojection error from the three-dimensional points in the world coordinate system to the two-dimensional points in the pixel coordinate system. In this scheme, four rounds of optimization are performed in total; after each round the observations are divided into outliers and inliers, the outliers do not participate in the next round, and finally the mismatched map points are removed according to the updated pose. Alternatively, after the pose of the moving body is updated, with the pose of the moving body and the map points known, the extrinsics of each camera can be used as vertices, and unary-edge reprojection errors can be constructed to optimize the multi-camera extrinsics.
If relocation has not been triggered, or the time elapsed since the last relocation exceeds the preset duration, the SLAM system uses the inertial observations to solve the pose (Track With Odom). The inertial-observation pose and bias of the reference frame are used as the initial values of the current frame, and the pre-integration of the inertial observations from the reference frame to the current frame is calculated. The map points of the previous frame are projected into the current frame through the projection model, matches are obtained within a small search range according to descriptor distance, and mismatches are eliminated by angle voting. The poses of the moving body and the inertial-observation biases of the two frames, the map points and the multi-camera extrinsics are used as vertices; the pre-integration of the inertial observations between the two frames adds constraints between the poses and biases of the two frames, and at the same time the observations of the map points are used as constraints by adding unary edges with the current pose as the vertex. Four rounds of optimization are performed in total; after each round the observations are divided into outliers and inliers, the outliers do not participate in the next round, and finally the mismatched map points are removed according to the updated pose. Alternatively, the multi-camera extrinsics may be continuously optimized by minimizing the reprojection error.
After the pose of the current frame is obtained through the inter-frame matching and pose optimization, the local map is tracked to obtain more matches and the pose of the current frame is further optimized. Specifically, the local key frames and local map points are updated, the points within the field of view of the current frame are found in the local map, and these points are projected into the current frame for matching to obtain more matched map points. Likewise, if the time elapsed since the last relocation does not exceed the preset duration, the pose is solved using the visual observations of the multiple cameras (Track Local Map), with the same pose optimization method as in Track Reference Key Frame; if relocation has not been triggered, or the time elapsed since the last relocation exceeds the preset duration, the pose is solved using the visual and inertial observations (Track Local Map With Odom), with the same pose optimization method as in Track With Odom. Finally, the mismatched map points are removed according to the updated pose of the moving body of the current frame.
If the number of matches obtained after the two tracking stages is still insufficient, the relocation mode is entered. In the relocation process, the key frames whose number of words shared with the current key frame exceeds a threshold and whose similarity score is greater than the lowest score of the adjacent key frames are first taken as relocation candidate frames, and then the pose and map points of the current frame are computed from the relocation candidate frames using the EPnP method. If relocation fails, the entire SLAM system restarts.
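For the EPnP step, one possible realization is OpenCV's solvePnPRansac with the EPnP flag; the patent itself only names the EPnP method, so the following is an illustrative sketch rather than the claimed implementation.

```python
import cv2
import numpy as np

def relocalize_epnp(map_points_3d, keypoints_2d, K, dist=None):
    """Estimate the current-frame pose from 3D map points of a relocation
    candidate frame and their 2D matches in the current image."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(map_points_3d, dtype=np.float64),
        np.asarray(keypoints_2d, dtype=np.float64),
        K, dist, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None                      # relocation failed; the system may restart
    R, _ = cv2.Rodrigues(rvec)           # rotation: world -> camera
    return R, tvec, inliers
```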
The cost function of the reprojection error in the optimization process is:
E = Σi Σj ‖ p′ij − pij ‖²
where p′ij denotes the real (observed) pixel coordinates corresponding to the three-dimensional point Pj under the Body-coordinate-system pose Ti, and pij denotes the pixel coordinates of the three-dimensional point Pj projected onto the image plane according to the pose of the Body coordinate system. For the nth virtual pinhole camera of the mth fisheye camera, the projection relationship is:
pij = Kn Rn Tm Ti Pj
where Tm denotes the pose transformation from the Body coordinate system to the mth camera coordinate system, Rn denotes the pose transformation from the mth camera coordinate system to the nth virtual pinhole camera, and Kn denotes the projection matrix of the nth virtual pinhole camera.
The structure of the graph optimization is shown in fig. 3. The pose at time t = i can also be expressed by the pose PRi of the inertial observation. At the same time, the inertial observation at each moment also has a bias Bi; the inertial-observation pose PRi at that moment, the pose PRi−1 at the previous moment and the bias Bi−1 are linked by a constraint edge given by the pre-integration, and there is also a constraint edge between the biases at the two moments. At time t = 1, camera c1 sees map point P0, and camera c2 sees map points P0 and P1; at time t = 2, camera c1 sees map points P1 and P2, and camera c2 sees map point P2. Denoting the extrinsic parameter of the mth camera at time t = i by Tcm, the graph optimization can be represented by the structure shown in fig. 3, and the pose of the moving body, the multi-camera extrinsics and the positions of the map points can be optimized by minimizing the observation errors of the edges.
Step S500: local mapping thread (Local Mapping).
This thread processes the key frames sent by the tracking thread, inspects and eliminates map points, triangulates and recovers new map points between the multi-camera images of two frames and fuses them, and performs local optimization of the poses and map points by combining the inserted key frame, its connected key frames and all of their map points (Local Bundle Adjustment). Finally, by continuously adding new key frames and generating new map points, and by eliminating redundant key frames and redundant map points, a stable set of key frames is provided for loop detection.
In this step, when the number of key frames in the acquired map is less than a preset threshold, visual observations within a sliding window are used for local optimization. Specifically, the poses of the moving body for all connected key frames in the covisibility graph are used as pose vertices to be optimized, all map points seen by these key frames are used as map-point vertices to be optimized, the poses of key frames that see these map points but are not in the covisibility graph are used as fixed pose vertices that do not participate in the optimization and only provide constraints, and the extrinsics of the multiple cameras are used as fixed pose vertices. The observations of each map point are traversed and multi-edges are added, connecting the map point, the key-frame pose vertices and the extrinsic vertices, and the positions of the map points and the poses of the key-frame moving bodies in the covisibility graph are optimized by minimizing the reprojection error. Alternatively, if the extrinsics of each camera need to be optimized, the observations of all map points are traversed, unary edges with the multi-camera extrinsics as vertices are added, and the camera extrinsics are optimized by minimizing the reprojection error.
When the number of key frames in the acquired map is greater than the threshold, inertial observations within a sliding window are used for local optimization. Specifically, the inertial-observation poses and biases of all connected key frames in the covisibility graph are used as pose vertices and bias vertices to be optimized; all map points seen by these key frames are used as map-point vertices to be optimized; the inertial-observation poses and biases of all key frames that see the local map points but do not belong to the covisibility graph are used as fixed pose and bias vertices that do not participate in the optimization and only provide constraints; and the extrinsic parameters of the cameras are used as fixed pose vertices. The constraint between the poses and biases of two frames is given as an observation edge by the pre-integration between the current key frame and the previous key frame; the observations of each map point are traversed and multi-edges are added, connecting the map point, the key-frame pose vertices and the extrinsic vertices, and the positions of the map points and the poses of the key-frame moving bodies in the covisibility graph are optimized by minimizing the reprojection error. Alternatively, if the extrinsics of each camera need to be optimized, the observations of all map points are traversed, unary edges with the multi-camera extrinsics as vertices are added, and the camera extrinsics are optimized by minimizing the reprojection error.
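The selection of optimized versus fixed vertices described above can be sketched as plain data-structure assembly; the attribute names (id, map_points, camera_index) and the helper are illustrative assumptions, and the actual bundle-adjustment solver is not shown.

```python
def build_local_ba_graph(covisible_kfs, all_kfs, cam_extrinsics):
    """Collect vertices and reprojection edges for the sliding-window local optimization."""
    graph = {"opt_poses": set(), "fixed_poses": set(), "points": set(), "edges": []}
    graph["opt_poses"] = {kf.id for kf in covisible_kfs}            # poses to optimize
    graph["points"] = {p.id for kf in covisible_kfs for p in kf.map_points}
    for kf in all_kfs:                                              # observers outside the covisibility graph
        if kf.id not in graph["opt_poses"] and any(p.id in graph["points"] for p in kf.map_points):
            graph["fixed_poses"].add(kf.id)                         # constraints only, not optimized
    for kf in covisible_kfs:
        for p in kf.map_points:
            # multi-edge: map point <-> key-frame pose <-> (fixed) camera extrinsic
            graph["edges"].append((p.id, kf.id, cam_extrinsics[p.camera_index]))
    return graph
```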
Step S600: loop detection thread.
This thread performs loop detection on the key frames sent by the local mapping thread, adjusts the poses of all key frames and the positions of the map points in the map according to the loop frame, eliminates the accumulated drift error, and corrects the earlier trajectory and map.
The specific process comprises the following steps:
First, key frames that share a number of common words with the current key frame exceeding a threshold, and whose similarity score is greater than the lowest score of the current key frame's adjacent key frames, are taken as candidate frames; key frames with continuity are then selected from the candidates as loop candidate frames. Descriptor matching is accelerated with the bag-of-words vectors, the Sim3 transformation between the current key frame and the closed-loop candidate frame is computed with RANSAC, and map points are projected according to this preliminary Sim3 transformation to obtain more matched map points. The Sim3 transformations and map points of the two key frames are then taken as vertices, the observations of all map points are traversed, binary edges for forward and backward projection are added, the map-point poses are fixed, and the Sim3 transformation between the two frames is refined by minimizing the reprojection error. According to the updated Sim3 transformation, the map points of the closed-loop key frame and its connected key frames are projected onto the current key frame for matching to obtain more matching points. Finally, the key-frame poses and map-point positions are adjusted according to the solved Sim3 transformation to eliminate the drift error, and all key frames and map points are updated through a global optimization, correcting the earlier trajectory and map.
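As a small illustration of the last step, the sketch below applies a solved Sim3 transformation (scale s, rotation R, translation t, all toy values invented for the example) to a set of map points; key-frame poses are corrected analogously by composing them with the same transformation:

    import numpy as np

    def apply_sim3(s, R, t, points):
        # similarity transform p' = s * R @ p + t applied row-wise to Nx3 points
        return s * (points @ R.T) + t

    # toy values standing in for the drift estimated at loop closure
    s = 1.02
    R = np.eye(3)
    t = np.array([0.30, 0.00, -0.10])
    map_points = np.array([[1.0, 2.0, 3.0],
                           [4.0, 5.0, 6.0]])
    print(apply_sim3(s, R, t, map_points))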
Compared with the prior art, the method has the following advantages:
By introducing multiple cameras into the SLAM system, the observation field of view of the moving body is effectively enlarged, and the failure of a single camera, caused for example by occlusion, a featureless view, or a drastic change of viewpoint during turning, no longer brings down the whole system. Experiments running the system on a vehicle show that when another car approaches from behind, its headlights can prevent feature points from being extracted from the rear camera images and cause that camera to fail; whereas a single-camera system would have to be reinitialized at this point, the multi-camera system continues to track and build the map using the other cameras.
The SLAM system supports both perspective cameras and fisheye cameras, and provides a multi-pinhole perspective projection model for the fisheye camera. A fisheye camera has a large field angle, but its image edges are heavily distorted: existing feature points and descriptors are not suited to strongly distorted fisheye images, and the improved descriptor proposed in MultiCol-SLAM has poor real-time performance and, to avoid mismatches, stays away from the fisheye image edges and only uses the central part of the image; extraction, matching and optimization of line features are also difficult to perform directly on fisheye images. The fisheye image is rectified into perspective projection images using the multi-pinhole perspective projection model; experiments show that the mature ORB features then run in real time with few mismatches, feature extraction and matching can be performed over the whole image so that the edge information is fully used, and the rectified images are also suitable for line features.
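The idea of rectifying a fisheye image onto virtual pinhole cameras can be sketched as below; for simplicity the sketch assumes an ideal equidistant fisheye model (r = f·θ) rather than the calibrated polynomial mapping, so it is only illustrative:

    import numpy as np

    def pinhole_to_fisheye_map(K_virt, R_virt, size, f_fish, c_fish):
        # For every pixel of a virtual pinhole camera (intrinsics K_virt, rotation
        # R_virt relative to the fisheye camera), find the source pixel in the
        # fisheye image under an equidistant model r = f_fish * theta.
        h, w = size
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
        rays = R_virt @ (np.linalg.inv(K_virt) @ pix)     # rays in the fisheye frame
        rays = rays / np.linalg.norm(rays, axis=0)
        theta = np.arccos(np.clip(rays[2], -1.0, 1.0))    # angle from the optical axis
        phi = np.arctan2(rays[1], rays[0])
        r = f_fish * theta
        map_x = (c_fish[0] + r * np.cos(phi)).reshape(h, w)
        map_y = (c_fish[1] + r * np.sin(phi)).reshape(h, w)
        return map_x, map_y   # e.g. fed to an image remapping routine

    # toy usage: a 640x480 virtual view looking along the fisheye optical axis
    K_virt = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
    mx, my = pinhole_to_fisheye_map(K_virt, np.eye(3), (480, 640), 330.0, (640.0, 480.0))
    print(mx.shape, my.shape)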
In addition, the invention does not require overlapping fields of view between the cameras, and the wheel speed meter provides the absolute scale, so the initialization process has better real-time performance and robustness. Because the wheel speed meter directly provides velocity observations, aligning the IMU with the gravity direction during initialization is avoided; the initialization of the IMU and the wheel speed meter is completed together with the visual initialization by assuming that the wheel speed meter is unbiased and setting the initial gyroscope bias to 0, and the biases are added to the subsequent optimization. This greatly reduces the initialization time and computation, and a more robust initialization result can be given in real time.
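A minimal numeric illustration of how the wheel speed meter fixes the visual scale (the numbers are invented for the example):

    import numpy as np

    def absolute_scale(t_visual, t_wheel):
        # ratio between the metric translation preintegrated from the wheel speed
        # meter and the up-to-scale translation recovered by visual initialization
        return np.linalg.norm(t_wheel) / np.linalg.norm(t_visual)

    # visual SfM reports a unit-less motion of about 0.37, the wheel odometry
    # reports 1.85 m over the same interval, so each visual unit is about 5 m
    s = absolute_scale(np.array([0.300, 0.000, 0.217]),
                       np.array([1.500, 0.000, 1.085]))
    print(s)   # key-frame translations and map points are then multiplied by s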
In a traditional monocular SLAM framework such as ORB-SLAM2, only the pose of the moving body and the map points are used as optimization vertices; after multiple cameras are introduced, the camera external parameters can optionally also be used as vertices to be optimized. Meanwhile, the wheel speed meter and IMU pre-integration adds new constraints on the inertial poses and biases between two frames, and the key-frame poses, map-point positions and camera external parameters can be optimized by minimizing the reprojection-error edges. The results show that adding multiple cameras and optimizing the inertial observations makes the operation of the SLAM system more robust.
The present embodiment discloses a multi-sensor fused SLAM device; for the specific working content of each unit in the device, please refer to the content of the above method embodiment. The multi-sensor fused SLAM device provided by the embodiment of the present invention is described below, and the device described below and the multi-sensor fused SLAM method described above may be referred to correspondingly.
Referring to fig. 4, the multi-sensor fused SLAM device disclosed in the embodiments of the present application may include:
a calibration unit 100, configured to calibrate internal parameters of a camera in the SLAM system; calibrating external parameters of a camera, a wheel speed meter and an IMU in the SLAM system;
a projection calculation unit 200 for giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU;
an initialization unit 300, configured to control initialization of loose coupling between multiple cameras and a wheel speed meter in the SLAM system;
a tracking thread unit 400, configured to acquire the initialized map points and poses using the tracking thread, calculate and optimize the pose of the current key frame based on the image data, IMU data and wheel speed meter data acquired from the reference frame to the current key frame, and send the key frame to the local mapping thread;
a local mapping thread unit 500, configured to insert the key frames sent by the tracking thread using the local mapping thread, triangulate new map points from the key frames and fuse them, and screen out and remove redundant key frames and redundant map points;
and a loop detection unit 600, configured to perform loop detection, using the loop detection thread, on the key frames screened by the local mapping thread, and correct the poses and map-point positions of the screened key frames according to the adjustment of the loop frame.
Corresponding to the above method, the calibration unit calibrates the internal parameters of the cameras in the SLAM system. For a perspective camera, the internal parameters include, but are not limited to, one or more combinations of the focal length, perspective image center coordinates and distortion coefficients; for a fisheye camera, the internal parameters include, but are not limited to, one or more combinations of the forward mapping coefficients, reverse mapping coefficients, fisheye image center coordinates and affine transformation coefficients.
Corresponding to the above method, the projection calculation unit is specifically configured to, when a camera projection model is given based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter, and the IMU:
based on the internal and external parameters, calculating the pose transformation matrix from the world coordinate system to the body coordinate system, the pose transformation matrix from the body coordinate system to the camera coordinate system, and the transformation matrix from the camera coordinate system of the fisheye camera to the camera coordinate system of the jth virtual perspective camera;
based on the formula

p = Kj · Tcjci · Tcib · Tbw · P

calculating the camera projection model, wherein p(x, y) is the pixel point obtained by projecting the map point P(X, Y, Z) in the world coordinate system into the pixel coordinate system of the camera ci, Kj is the perspective transformation matrix of the jth virtual perspective camera, Tcjci is the transformation matrix from the camera coordinate system of the fisheye camera ci to the camera coordinate system of the jth virtual perspective camera, Tcib is the pose transformation matrix from the body coordinate system to the camera coordinate system of the ith camera ci, and Tbw is the pose transformation matrix from the world coordinate system to the body coordinate system.
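A direct reading of this projection chain in code could look like the following sketch (the function name and toy values are hypothetical, and the homogeneous normalization is made explicit):

    import numpy as np

    def project_to_virtual_pinhole(P_w, T_bw, T_cib, T_cjci, K_j):
        # world -> body -> fisheye camera c_i -> j-th virtual perspective camera -> pixel
        P_h = np.append(P_w, 1.0)                 # homogeneous map point
        P_cj = T_cjci @ T_cib @ T_bw @ P_h        # point in the virtual camera frame
        uvw = K_j @ P_cj[:3]                      # perspective projection with K_j
        return uvw[:2] / uvw[2]                   # pixel p(x, y)

    # toy usage with identity transforms and a 400-pixel focal length
    K_j = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
    I4 = np.eye(4)
    print(project_to_virtual_pinhole(np.array([0.5, 0.2, 4.0]), I4, I4, I4, K_j))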
Corresponding to the above method, when controlling the initialization of the multiple cameras and the wheel speed meter loose coupling in the SLAM system, the initialization unit is specifically configured to:
respectively calculating the essential matrix of each camera through the epipolar constraint, and selecting the camera whose essential matrix has the most inliers as the initialization camera (a sketch of this selection follows the list below);
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and map points based on the scale factor, and initializing the IMU and the wheel speed meter.
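The camera-selection step referenced above could be sketched with OpenCV as follows; the helper name and the data layout (per-camera arrays of matched points) are assumptions made for this example, not the patent's implementation:

    import cv2
    import numpy as np

    def pick_initialization_camera(matches_per_camera, K_per_camera):
        # matches_per_camera: {camera_id: (pts1, pts2)} with Nx2 float arrays of
        # matched image points from two frames of the same camera
        best_cam, best_inliers = None, -1
        for cam, (pts1, pts2) in matches_per_camera.items():
            E, mask = cv2.findEssentialMat(pts1, pts2, K_per_camera[cam],
                                           method=cv2.RANSAC, prob=0.999,
                                           threshold=1.0)
            inliers = 0 if mask is None else int(np.count_nonzero(mask))
            if inliers > best_inliers:            # keep the camera with most inliers
                best_cam, best_inliers = cam, inliers
        return best_cam, best_inliers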
Corresponding to the above method, when performing loop detection on the key frames screened by the local mapping thread using the loop detection thread and correcting the poses and map-point positions of the screened key frames according to the adjustment of the loop frame, the loop detection unit is specifically configured to:
taking as candidate frames the key frames that share more common words with the current key frame than a threshold and whose similarity score is greater than the lowest score of the adjacent key frames, and selecting key frames with continuity from the candidate frames as loop candidate frames, wherein the lowest score of the adjacent key frames refers to: the lowest similarity between the adjacent key frames of the current key frame and the previous key frame;
accelerating descriptor matching with the bag-of-words vectors, computing the Sim3 transformation between the current key frame and the closed-loop candidate frame using RANSAC, and projecting map points according to the preliminarily obtained Sim3 transformation to obtain more matched map points;
taking the Sim3 transformation and map points corresponding to the two key frames as vertexes, traversing the observation of all map points, adding forward and reverse projected binary edges, fixing the pose of the map points, and optimizing the Sim3 transformation of the two frames by minimizing the reprojection error;
according to the optimized Sim3, projecting map points of the closed-loop key frame and the key frames connected with the closed-loop key frame to the current key frame for matching to obtain more matching points;
and adjusting the key frame poses and map point positions according to the solved and optimized Sim3 transformation.
For convenience of description, the above system is described with the functions divided into various modules, which are described separately. Of course, the functionality of the various modules may be implemented in the same one or more software and/or hardware implementations of the invention.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A multi-sensor fused SLAM method, comprising:
calibrating internal parameters of a camera in the SLAM system;
calibrating external parameters of a camera, a wheel speed meter and an IMU in the SLAM system;
giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU;
controlling initialization of multiple cameras and wheel speed meter loose coupling in the SLAM system;
acquiring the initialized map points and poses by using a tracking thread, inserting key frames and correcting the poses and map points based on the acquired image data, IMU data and wheel speed meter data, and sending the key frames to a local mapping thread;
acquiring a key frame sent by a tracking thread by adopting a local mapping thread, triangulating on the basis of the key frame to generate a new map point and fuse the map points, and screening and removing redundant key frames and redundant map points;
and performing loop detection on the key frame screened by the local mapping thread by adopting a loop detection thread, and correcting the pose and the position of the map point of the screened key frame according to the adjustment of the loop frame.
2. The multi-sensor fused SLAM method of claim 1, wherein when the cameras in the SLAM system are perspective cameras, the calibrated internal parameters of the cameras in the SLAM system include, but are not limited to, one or more combinations of the focal length, perspective image center coordinates and distortion coefficients; when the cameras in the SLAM system are fisheye cameras, the calibrated internal parameters of the cameras in the SLAM system include, but are not limited to, one or more combinations of the forward mapping coefficients, reverse mapping coefficients, fisheye image center coordinates and affine transformation coefficients.
3. The multi-sensor fused SLAM method of claim 1, wherein the given camera projection model based on calibration results of internal parameters of the camera and external parameters of camera, wheel speed meter and IMU comprises:
based on the internal and external parameters, calculating the pose transformation matrix from a world coordinate system to a body coordinate system, the pose transformation matrix from the body coordinate system to a camera coordinate system, and the transformation matrix from a camera coordinate system of the fisheye camera to a camera coordinate system of the jth virtual perspective camera;
taking the formula

p = Kj · Tcjci · Tcib · Tbw · P

as the camera projection model, wherein p(x, y) is the pixel point obtained by projecting the map point P(X, Y, Z) in the world coordinate system into the pixel coordinate system of the camera ci, Kj is the perspective transformation matrix of the jth virtual perspective camera, Tcjci is the transformation matrix from the camera coordinate system of the fisheye camera ci to the camera coordinate system of the jth virtual perspective camera, Tcib is the pose transformation matrix from the body coordinate system to the camera coordinate system of the ith camera ci, Tbw is the pose transformation matrix from the world coordinate system to the body coordinate system, and P is the map point P(X, Y, Z) in the world coordinate system.
4. The multi-sensor fused SLAM method of claim 1, wherein said controlling multiple camera and wheel speed meter loose coupling initialization in said SLAM system comprises:
respectively calculating the essential matrix of each camera through the epipolar constraint, and selecting the camera whose essential matrix has the most inliers as the initialization camera;
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and map points based on the scale factor, and initializing the IMU and the wheel speed meter.
5. The multi-sensor-fused SLAM method of claim 1, wherein performing loop detection on the key frames screened by the local mapping thread using a loop detection thread, and correcting the poses and map point positions of the screened key frames according to the adjustment of the loop frame, comprises:
taking as candidate frames the key frames that share more common words with the current key frame than a threshold and whose similarity score is greater than the lowest score of the adjacent key frames, and selecting key frames with continuity from the candidate frames as loop candidate frames, wherein the lowest score of the adjacent key frames refers to: the lowest similarity between the adjacent key frames of the current key frame and the previous key frame;
accelerating descriptor matching with the bag-of-words vectors, computing the Sim3 transformation between the current key frame and the closed-loop candidate frame using RANSAC, and projecting map points according to the preliminarily obtained Sim3 transformation to obtain more matched map points;
taking the Sim3 transformation and map points corresponding to the two key frames as vertexes, traversing the observation of all map points, adding forward and reverse projected binary edges, fixing the pose of the map points, and optimizing the Sim3 transformation of the two frames by minimizing the reprojection error;
according to the optimized Sim3, projecting map points of the closed-loop key frame and the key frames connected with the closed-loop key frame to the current key frame for matching to obtain more matching points;
and adjusting the key frame poses and map point positions according to the solved and optimized Sim3 transformation.
6. A multi-sensor fused SLAM device, comprising:
the calibration unit is used for calibrating internal parameters of the camera in the SLAM system; calibrating external parameters of a camera, a wheel speed meter and an IMU in the SLAM system;
the projection calculation unit is used for giving a camera projection model based on the internal parameters of the camera and the calibration results of the external parameters of the camera, the wheel speed meter and the IMU;
the initialization unit is used for controlling the initialization of loose coupling of a plurality of cameras and wheel speed meters in the SLAM system;
the tracking thread unit is used for acquiring initialized map points and poses by adopting a tracking thread, calculating and optimizing the poses of the current key frame based on image data, IMU data and wheel speed count data acquired from a reference frame to the current key frame, and sending the key frame to a local mapping thread;
the local map building thread unit is used for inserting a key frame sent by a tracking thread by adopting a local map building thread, generating and fusing new map points based on triangulation of the key frame, and screening and removing redundant key frames and redundant map points;
and the loop detection unit is used for performing loop detection on the key frame screened by the local mapping thread by adopting a loop detection thread and correcting the pose and the position of the map point of the screened key frame according to the adjustment of the loop frame.
7. The multi-sensor fused SLAM apparatus of claim 6, wherein when the cameras in the SLAM system are perspective cameras, the calibrated internal parameters of the cameras in the SLAM system include, but are not limited to, one or more combinations of the focal length, perspective image center coordinates and distortion coefficients; when the cameras in the SLAM system are fisheye cameras, the calibrated internal parameters of the cameras in the SLAM system include, but are not limited to, one or more combinations of the forward mapping coefficients, reverse mapping coefficients, fisheye image center coordinates and affine transformation coefficients.
8. The multi-sensor fused SLAM device of claim 6, wherein said projection computation unit, when given a camera projection model based on calibration results of internal parameters of said camera and external parameters of camera, wheel speed meter and IMU, is specifically configured to:
based on the internal and external parameters, calculating the pose transformation matrix from a world coordinate system to a body coordinate system, the pose transformation matrix from the body coordinate system to a camera coordinate system, and the transformation matrix from a camera coordinate system of the fisheye camera to a camera coordinate system of the jth virtual perspective camera;
based on the formula

p = Kj · Tcjci · Tcib · Tbw · P

calculating the camera projection model, wherein p(x, y) is the pixel point obtained by projecting the map point P(X, Y, Z) in the world coordinate system into the pixel coordinate system of the camera ci, Kj is the perspective transformation matrix of the jth virtual perspective camera, Tcjci is the transformation matrix from the camera coordinate system of the fisheye camera ci to the camera coordinate system of the jth virtual perspective camera, Tcib is the pose transformation matrix from the body coordinate system to the camera coordinate system of the ith camera ci, Tbw is the pose transformation matrix from the world coordinate system to the body coordinate system, and P is the map point P(X, Y, Z) in the world coordinate system.
9. The multi-sensor fused SLAM device of claim 6, wherein said initialization unit, when controlling initialization of multiple cameras and wheel speed meter loose coupling in said SLAM system, is specifically configured to:
respectively calculating the essential matrix of each camera through the epipolar constraint, and selecting the camera whose essential matrix has the most inliers as the initialization camera;
performing visual initialization to obtain a translation distance of a camera optical center of the initialization camera, and recording the translation distance as a first translation distance;
obtaining the translation distance of coordinate points of a body coordinate system between two adjacent frames through the pre-integration of a wheel speed meter;
calculating the translation distance of the camera optical center of the initialization camera between two adjacent frames based on the translation distance of the coordinate point of the body coordinate system, and recording as a second translation distance;
calculating a scale factor based on the first translation distance and the second translation distance;
and recovering the absolute scale of the poses and map points based on the scale factor, and initializing the IMU and the wheel speed meter.
10. The multi-sensor-fused SLAM device according to claim 6, wherein the loop detection unit, when performing loop detection on the key frames screened by the local mapping thread using the loop detection thread and correcting the poses and map point positions of the screened key frames according to the adjustment of the loop frame, is specifically configured to:
taking as candidate frames the key frames that share more common words with the current key frame than a threshold and whose similarity score is greater than the lowest score of the adjacent key frames, and selecting key frames with continuity from the candidate frames as loop candidate frames, wherein the lowest score of the adjacent key frames refers to: the lowest similarity between the adjacent key frames of the current key frame and the previous key frame;
accelerating descriptor matching with the bag-of-words vectors, computing the Sim3 transformation between the current key frame and the closed-loop candidate frame using RANSAC, and projecting map points according to the preliminarily obtained Sim3 transformation to obtain more matched map points;
taking the Sim3 transformation and map points corresponding to the two key frames as vertexes, traversing the observation of all map points, adding forward and reverse projected binary edges, fixing the pose of the map points, and optimizing the Sim3 transformation of the two frames by minimizing the reprojection error;
according to the optimized Sim3, projecting map points of the closed-loop key frame and the key frames connected with the closed-loop key frame to the current key frame for matching to obtain more matching points;
and adjusting the key frame poses and map point positions according to the solved and optimized Sim3 transformation.
