CN107301654A - Multi-sensor high-precision instant positioning and mapping method - Google Patents

Multi-sensor high-precision instant positioning and mapping method

Info

Publication number
CN107301654A
CN107301654A (application CN201710437709.1A)
Authority
CN
China
Prior art keywords
point
laser radar
estimation
camera
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710437709.1A
Other languages
Chinese (zh)
Other versions
CN107301654B (en)
Inventor
王琦
李学龙
王丞泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710437709.1A priority Critical patent/CN107301654B/en
Publication of CN107301654A publication Critical patent/CN107301654A/en
Application granted granted Critical
Publication of CN107301654B publication Critical patent/CN107301654B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-sensor high-precision instant positioning and mapping method, in which a color camera and a laser radar are used to perform high-precision real-time three-dimensional reconstruction of the surrounding environment while the position and attitude of the camera are computed in real time. On the basis of fast visual SLAM, laser radar information is introduced and used selectively to correct the result in time, and the three-dimensional map is then rebuilt from the corrected result, so that localization and mapping have correct scale and higher precision while the algorithm keeps low complexity.

Description

Multi-sensor high-precision instant positioning and mapping method
Technical field
The invention belongs to the technical field of machine vision, and in particular relates to a multi-sensor high-precision instant positioning and mapping method.
Background technology
At this stage the movement of robots is mostly planned along manually designed paths; the autonomous navigation ability of a robot depends on simultaneous localization and mapping technology (Simultaneous Localization and Mapping, hereinafter referred to as SLAM). Its core task is: when the robot enters an unknown working environment, to use sensor information to construct a model of the surrounding environment efficiently and accurately (Mapping) while obtaining the position and attitude of the device in space (Localization). Besides the field of robotics, the spatial tracking of virtual reality and augmented reality devices and automatic driving can likewise use SLAM technology.
Thirty years have passed since the SLAM problem was proposed, during which both the sensors and the computational methods used for SLAM have changed enormously. Mainstream SLAM technology mostly uses visual sensors, including monocular sensors, multi-camera sensors, and sensors providing color images with depth information (RGB-D). In recent years, with the development of laser radar technology, laser radar devices, thanks to their accurate estimation of environmental metric scale and their insensitivity to characteristics such as illumination variation, have given laser-radar SLAM schemes broad room for development in practical applications.
Mainstream vision-based SLAM schemes at this stage can be divided into two classes according to the optimization method: one class uses filter-based SLAM methods, the other uses graph-optimization SLAM methods.
SLAM schemes using filters are fairly simple to model, but their error accumulates gradually and cannot be repaired. Davison et al., in the document "A. Davison, I. Reid, and N. Molton. MonoSLAM: Real-Time Single Camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1052-1067, 2007.", proposed the MonoSLAM scheme, a monocular-sensor SLAM scheme based on the extended Kalman filter (EKF), which solves SLAM by building landmark points in the image and constructing the model with the EKF.
SLAM schemes using graph optimization tend to have a large computational load because a pose graph must be built. The ORB-SLAM proposed by Artal et al. in the document "R. Artal, J. Montiel, and J. Tardos. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015." is a current monocular visual SLAM scheme based on graph optimization; by extracting ORB feature descriptors (a kind of feature descriptor that is extremely fast to extract) from images, ORB-SLAM reaches very high speed, and thanks to a well-designed graph-optimization stage the resulting map also has very high accuracy.
The problems of SLAM algorithms with a monocular visual sensor in practical applications include: a monocular visual sensor can hardly estimate the metric size of the environment, i.e., the scale; the depth of the environment is obtained through mathematical computation such as triangulation and often carries error; visual sensors have difficulty handling scenes with drastic illumination change, fast motion, or lack of texture, which degrades the precision of attitude estimation; and eliminating the above errors requires introducing large-scale optimization, whose solution is difficult and time-consuming.
Using another sensor such as a laser radar alone, on the other hand, suffers from slow computation and insufficiently rich information, and likewise cannot solve the SLAM task well.
Contents of the invention
To overcome the deficiencies of the prior art, the present invention provides a multi-sensor high-precision instant positioning and mapping method, i.e., a method that fuses visual sensor and laser radar data to complete high-precision simultaneous localization and mapping. On the basis of fast visual SLAM, the information of a laser radar is introduced to obtain higher localization and mapping precision, and the laser radar data are used selectively so that the complexity of the algorithm stays low while accuracy is ensured.
A multi-sensor high-precision instant positioning and mapping method, characterized in that the steps are as follows:
Step 1: Input color images and pre-process them: input the color image sequence shot by a camera, and convert each color image frame into a grayscale image;
Step 2: Feature extraction: perform feature extraction on each grayscale image frame using the ORB feature extraction algorithm, obtaining the ORB feature points of each image frame;
Step 3: Feature matching and motion estimation: match the feature points of adjacent image frames using the fast approximate nearest neighbor library, and perform motion estimation on the matched points using PnP solving based on the random sample consensus method, obtaining the camera motion estimation sequence S_Cam = {(R_Cam^i, t_Cam^i) | i = 2, ..., N}, wherein (R_Cam^i, t_Cam^i) is the motion estimate of the camera between shooting the (i-1)-th and i-th image frames, N is the number of image frames, R_Cam denotes a camera rotation matrix, and t_Cam denotes a camera translation vector;
Step 4: Input the corresponding laser radar data and pre-process them: input the laser radar data, remove the points whose distance exceeds 0.8 times the laser radar range, and form the laser radar point cloud data from the remaining points; it is assumed here that the positional relationship between the laser radar and the camera shooting the color images in Step 1 is fixed, joint calibration has been completed, the two fields of view are identical, and the internal parameters are known;
Step 5: Feature matching and motion estimation of the laser radar point cloud data: perform feature matching and motion estimation on the laser radar point cloud data once every x frames; that is, divide each frame of laser radar point cloud data into four equally sized regions in the horizontal direction, extract 4 edge points and 4 planar points per region according to smoothness to constitute the feature points, match the feature points of the j-th and (j-x)-th frames of data according to smoothness, and perform motion estimation on the matched point pairs using the ICP algorithm, obtaining the laser radar motion estimation sequence
S_Lidar = {(R_Lidar^j, t_Lidar^j) | j = x+1, 2x+1, ..., M},
wherein (R_Lidar^j, t_Lidar^j) is the motion estimate of the laser radar between scanning the j-th and (j-x)-th frames of data, M is the total number of laser radar data frames, x takes values in [5, 20], R_Lidar denotes a laser radar rotation matrix, and t_Lidar denotes a laser radar translation vector;
The edge points are the points of minimum smoothness in a region, and the planar points are the points of maximum smoothness in a region; the smoothness is
c = ||Σ_{q∈S, q≠p} (L_{k,p} − L_{k,q})|| / (|S| · ||L_{k,p}||),
wherein S denotes the set of all points in the k-th scan, L_{k,p} and L_{k,q} denote the spatial coordinates of points p and q, and || · || denotes the Euclidean distance; matching according to smoothness means that two points match if the difference of their smoothness is less than 5% of the smoothness of the current point, and otherwise do not match;
Step 6: Error correction and trajectory recording: replace the sum of the camera motion estimates within the corresponding time interval with the laser radar motion estimate obtained in Step 5, obtaining the error-corrected camera motion estimation sequence S', i.e., the movement trajectory of the camera;
Step 7: Build the three-dimensional point cloud map: compute the three-dimensional point cloud map as
pcl_map = ∪_{n_j ∈ N} pcl'_{n_j},
wherein pcl'_{n_j} is the point cloud of the n_j-th image frame transformed into the coordinate system of the first image frame's viewpoint; n_j is the serial number of the image frame shot by the camera at the same moment as the m-th laser radar point cloud scan, N is the sequence formed by the n_j, and m = x+1, 2x+1, ..., M; pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, ..., L} is the point cloud of the n_j-th image frame, L is the number of feature points extracted from the n_j-th image frame in Step 2, and (x_l, y_l, z_l) denotes the coordinates of a point, calculated by the following formulas:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
wherein l = 1, ..., L, (u_l, v_l) are the image pixel coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value retrieved for the l-th feature point in the laser radar point cloud corresponding to the image shooting moment;
If the point cloud density is too high, the point cloud map data are downsampled to obtain the final three-dimensional point cloud map; the point cloud density being too high means that the number of points in one cubic meter of space exceeds 10000.
The beneficial effects of the invention are: on the basis of fast visual SLAM, the information of a laser radar is combined so that both localization and mapping in the SLAM task have higher precision and correct scale, while the shortcomings of laser-radar-only SLAM, such as slow speed and lack of color information, are overcome; on the premise of ensured accuracy, the algorithm uses the laser radar information selectively, guaranteeing real-time operation and low complexity; the algorithm can output motion estimation results in real time and correct them in time, and builds the three-dimensional map from the corrected results, which both ensures the practicality of the algorithm and guarantees high precision over long periods.
Brief description of the drawings
Fig. 1 is a flowchart of the multi-sensor high-precision instant positioning and mapping method of the present invention.
Embodiment
The present invention is further described below with reference to the drawings and embodiments; the present invention includes but is not limited to the following embodiment.
The invention provides a multi-sensor high-precision instant positioning and mapping method, in which a color camera and a laser radar are used to perform high-precision real-time three-dimensional reconstruction of the surrounding environment while the position and attitude of the camera shooting the images are computed in real time. The premises for realizing the method are: the positional relationship between the camera and the laser radar is fixed, i.e., the two undergo identical displacement and rotation during shooting; joint calibration of the laser radar with the camera has been completed, i.e., each pixel of the image can obtain its depth value from the laser radar three-dimensional point cloud; the two fields of view are identical; and the internal parameters are known.
As shown in Fig. 1, the implementation process of the multi-sensor high-precision instant positioning and mapping method of the present invention is as follows:
Step 1: Input color images and pre-process them
This embodiment uses a PointGray Flea2 color camera to shoot color images. The original image resolution is 1392 × 1040 and the frame rate is 17 fps (frames per second), i.e., an image sequence of 17 frames per second is input. First the images are pre-processed: each color image is converted into a grayscale image for subsequent computation.
Step 2: Feature extraction
For the grayscale images input in the previous step, ORB (Oriented FAST and Rotated BRIEF) feature points are first extracted frame by frame. The feature points here can also be replaced by other types of feature points, such as SIFT or SURF, according to demand, but because the extraction of those feature points is more time-consuming, the overall running speed of the algorithm may be affected.
ORB is an algorithm for fast feature point extraction and description, proposed by Rublee et al. in 2011: "Rublee Ethan, et al. ORB: An efficient alternative to SIFT or SURF. IEEE International Conference on Computer Vision, 2012: 2564-2571.". The ORB algorithm consists of two parts, feature point extraction and feature point description. Feature extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and feature description is improved from the BRIEF (Binary Robust Independent Elementary Features) feature description algorithm. ORB features combine the FAST feature point detection method with BRIEF feature descriptors, with improvements and optimizations on their original basis. Experiments show that the extraction speed of ORB features is about 100 times that of SIFT and 10 times that of SURF.
This embodiment directly uses the ORB feature descriptor functions of the computer vision library OPENCV to perform feature extraction on each image frame.
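For illustration, a minimal sketch of Steps 1-2 using OpenCV's Python bindings is given below; the function name and the feature-count parameter are assumptions, since the patent states only that OPENCV's ORB descriptor functions are called.

```python
import cv2

def extract_orb_features(color_frame, n_features=1000):
    """Step 1 + Step 2: grayscale conversion followed by ORB feature extraction."""
    gray = cv2.cvtColor(color_frame, cv2.COLOR_BGR2GRAY)  # pre-processing of Step 1
    orb = cv2.ORB_create(nfeatures=n_features)            # ORB extractor of Step 2
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```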
Step 3: Feature matching and motion estimation
For adjacent image frames whose feature points have been extracted, feature matching is carried out using the fast approximate nearest neighbor library (Fast Library for Approximate Nearest Neighbors, FLANN). The FLANN library is currently the most complete open-source approximate nearest neighbor matching library; it not only implements a series of search algorithms but also includes a mechanism for automatically selecting the fastest one. The basic algorithm of FLANN was proposed by Muja et al. in 2009: "M. Muja and D. Lowe, Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration, in International Conference on Computer Vision Theory and Applications (VISAPP'09), 2009".
After the feature points of each newly input image frame are extracted, feature matching against the previous image frame is performed using the FLANN-based matcher functions in the computer vision library OPENCV, yielding a number of pairs of corresponding feature points in the two image frames; then PnP solving based on the random sample consensus method (RANdom SAmple Consensus, RANSAC) is applied to the matching result to estimate the attitude change of the camera between shooting the two frames. RANSAC is an algorithm that computes the mathematical model parameters of a sample data set containing abnormal data and obtains the valid sample data; it was first proposed by Fischler and Bolles in 1981: "Fischler, M.A. and Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6): 381–395, 1981.". The purpose of PnP solving is to determine the displacement and rotation of the camera relative to the world coordinate system from known matched point pairs; it was proposed by Moreno-Noguer in 2007: "F. Moreno-Noguer, V. Lepetit and P. Fua, Accurate Non-Iterative O(n) Solution to the PnP Problem, IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, October 2007.". This embodiment directly uses the solvePnPRansac function in the computer vision library OPENCV for this computation.
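A sketch of this matching-plus-PnP step with OpenCV's Python bindings might look as follows. How the 3D coordinates of the previous frame's feature points are obtained (pts3d_prev, e.g., via the laser radar depth lookup described in Step 7) is an assumption; the patent itself names only FLANN-based matching and the solvePnPRansac function.

```python
import cv2
import numpy as np

def estimate_camera_motion(desc_prev, desc_curr, pts3d_prev, pts2d_curr, K):
    """Step 3 sketch: FLANN matching of ORB descriptors, then RANSAC-based PnP."""
    # LSH index, the usual FLANN configuration for binary descriptors such as ORB
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
    matcher = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = matcher.match(desc_prev, desc_curr)
    obj_pts = np.float32([pts3d_prev[m.queryIdx] for m in matches])  # 3D points, previous frame
    img_pts = np.float32([pts2d_curr[m.trainIdx] for m in matches])  # matched pixels, current frame
    _, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    R_cam, _ = cv2.Rodrigues(rvec)  # rotation matrix R_Cam; tvec is the translation t_Cam
    return R_cam, tvec
```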
Through this step, the motion estimate (R_Cam, t_Cam) of the camera with respect to the world coordinate system (the coordinate system whose principal direction is the viewpoint of the first image frame) is obtained for every pair of adjacent image frames, together with the camera motion estimation sequence S_Cam = {(R_Cam^i, t_Cam^i) | i = 2, ..., N}, wherein (R_Cam^i, t_Cam^i) is the motion estimate of the camera between shooting the (i-1)-th and i-th image frames, N is the number of image frames, R_Cam denotes a camera rotation matrix, and t_Cam denotes a camera translation vector.
Because there is much noise in the feature point extraction and matching process, the attitude change obtained in this step still carries a certain error, which will be corrected in Step 6.
Step 4: Input the corresponding laser radar data and pre-process them
This embodiment uses the HDL-64E laser radar produced by Velodyne (USA), a small multi-beam panoramic laser radar with a field of view of 26.9° vertically and 360° horizontally. It returns 220000 points evenly distributed over 64 scan lines every second, representing the distance and positional relationship of surrounding spatial points with respect to the laser radar center point. The effective range is 120 m.
In actual use, to reduce error, points beyond 0.8 times the effective range are removed, i.e., spatial points whose measured distance exceeds 80 m are removed, and the remaining points constitute the point cloud data introduced subsequently.
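A minimal sketch of this range filter, assuming each return is stored as a Cartesian (x, y, z) row of a NumPy array:

```python
import numpy as np

def filter_lidar_points(points, max_range=120.0, keep_ratio=0.8):
    """Step 4 pre-processing: keep only returns within keep_ratio * max_range."""
    dist = np.linalg.norm(points[:, :3], axis=1)  # distance of each return from the sensor origin
    return points[dist <= keep_ratio * max_range]
```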
Step 5: Feature matching and motion estimation of the laser radar point cloud data
In this step, features are extracted from the input laser radar point cloud data and motion estimation is performed once every x frames, in order to reduce algorithm complexity and improve computational efficiency. This yields the laser radar motion estimation sequence S_Lidar = {(R_Lidar^j, t_Lidar^j) | j = x+1, 2x+1, ..., M}, wherein (R_Lidar^j, t_Lidar^j) is the motion estimate of the laser radar between scanning the j-th and (j-x)-th frames of data, M is the number of laser radar data frames, x takes values in [5, 20] (x = 5 in this embodiment), R_Lidar denotes a laser radar rotation matrix, and t_Lidar denotes a laser radar translation vector. Specifically:
Because a point cloud consists of discrete three-dimensional points carrying no information other than coordinates, and the coordinates of the same spatial point keep changing during mobile data acquisition, trackable features must be defined on the point cloud. For this purpose a smoothness-based feature extraction mode is defined, namely the smoothness c is
c = ||Σ_{q∈S, q≠p} (L_{k,p} − L_{k,q})|| / (|S| · ||L_{k,p}||),
wherein S denotes the set of all points in the k-th scan, L_{k,p} and L_{k,q} denote the spatial coordinates of points p and q, and || · || denotes the Euclidean distance.
The smoothness is computed for all points in S; the point with maximum smoothness is defined as a planar point, and the point with minimum smoothness is defined as an edge point. To avoid generating too many feature points, each scanned point cloud is first divided into four equally sized regions, e.g., taking the four principal directions 0°, 90°, 180° and 270° with ±45° around each direction forming one region. Each region contributes at most 4 edge points and 4 planar points as feature points.
After the feature points are obtained, the feature points of the j-th and (j-x)-th frames of data are matched according to smoothness: feature point pairs whose smoothness difference is less than 5% of the smoothness of the current point are considered expressions of the same spatial point in the two scans, i.e., matched point pairs. Motion estimation is then performed on the matched feature points using the ICP algorithm (Iterative Closest Point), i.e., the laser radar motion estimate (R_Lidar, t_Lidar) is obtained by solving over the two groups of three-dimensional spatial points. The ICP algorithm was proposed by Zhang et al.: "Zhang, Zhengyou (1994). Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision. Springer. 13(12): 119–152."
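A sketch of the smoothness computation and edge/planar selection for one region is shown below; treating every other point of the region as the neighborhood S and the (n, 3) array layout are assumptions, while the formula itself and the 4 + 4 points per region come from the text.

```python
import numpy as np

def smoothness(points):
    """c = ||sum_{q != p} (L_p - L_q)|| / (|S| * ||L_p||) for every point p of a region."""
    n = len(points)
    total = points.sum(axis=0)
    diff = n * points - total  # row p holds the sum over q != p of (L_p - L_q)
    return np.linalg.norm(diff, axis=1) / (n * np.linalg.norm(points, axis=1))

def select_features(points, n_edge=4, n_planar=4):
    """Pick the lowest-smoothness (edge) and highest-smoothness (planar) points."""
    order = np.argsort(smoothness(points))
    return points[order[:n_edge]], points[order[-n_planar:]]
```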
Step 6: Error correction and trajectory recording
Laser radar point cloud data have high precision and are reliable, so the error of its localization result is minimal, but its computation speed is slow; the algorithm based on color images is fast but less precise. In view of these different pose-tracking qualities of the two sensors, the present invention performs error correction by replacing the sum of the camera motion estimates with the laser radar motion estimate, raising the accuracy of the overall algorithm while reducing its overall time complexity.
The laser radar motion estimate obtained in Step 5 (one motion estimate between scanning the j-th and (j-x)-th frames of data) replaces the sum of the camera motion estimates within the corresponding time interval (the time interval between scanning the j-th and (j-x)-th frames of data), yielding the error-corrected camera motion estimation sequence, i.e., the movement trajectory of the camera.
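A sketch of this replacement, assuming every motion estimate is stored as a 4 × 4 homogeneous transform and that camera and laser radar frames are time-aligned (both representational choices are assumptions; the patent states only that the laser radar estimate replaces the sum of the camera estimates over the same interval):

```python
import numpy as np

def correct_trajectory(cam_motions, lidar_motions, x):
    """Step 6: overwrite each run of x inter-frame camera motions so that their
    composition over the interval equals the single lidar motion estimate."""
    corrected = list(cam_motions)  # element k is the 4x4 motion from frame k to frame k+1
    for j, T_lidar in lidar_motions.items():  # j is the lidar frame closing the interval
        corrected[j - x:j] = [T_lidar] + [np.eye(4)] * (x - 1)
    return corrected
```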
Step 7: Build the three-dimensional point cloud map
When the final high-precision three-dimensional point cloud map is built, only the less erroneous laser radar motion estimates (R_Lidar, t_Lidar) are used. Specifically:
The three-dimensional point cloud map is computed as
pcl_map = ∪_{n_j ∈ N} pcl'_{n_j},
wherein pcl'_{n_j} is the point cloud of the n_j-th image frame (n_j being the serial number of the image frame shot by the camera at the same moment as the m-th laser radar point cloud scan, and N the sequence formed by the n_j) transformed into the coordinate system of the first image frame's viewpoint; pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, ..., L} is the point cloud of the n_j-th image frame, L is the number of feature points extracted from the n_j-th image frame in Step 2, and (x_l, y_l, z_l) denotes the coordinates of a point, calculated as follows:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
wherein l = 1, ..., L, (u_l, v_l) are the image coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value retrieved for the l-th feature point in the laser radar point cloud corresponding to the image shooting moment.
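The back-projection of the three formulas above can be written as a short vectorized routine; the (n, 2) pixel array layout is an assumption.

```python
import numpy as np

def back_project(uv, depth, fx, fy, cx, cy):
    """Step 7: lift pixels (u_l, v_l) with lidar depths d_l into camera-frame 3D points."""
    x = (uv[:, 0] - cx) * depth / fx
    y = (uv[:, 1] - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)  # columns are (x_l, y_l, z_l = d_l)
```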
A point cloud map built this way is superimposed from multiple scans and is overly dense as a whole. To improve display speed and reduce point cloud density, a spatial point density threshold is set when storing the point cloud: when the number of spatial points in one cubic meter of space exceeds 10000, the point cloud data are downsampled to keep the number of spatial points within a reasonable range, giving the final point cloud map. Of course, when a narrow space is scanned, the spatial point density threshold can be raised accordingly.
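A sketch of such density-capped downsampling follows; the voxel size and the random-thinning strategy are assumptions, and only the 10000 points per cubic meter threshold comes from the text.

```python
import numpy as np

def density_downsample(points, voxel=0.1, max_per_m3=10000, seed=0):
    """Thin the map wherever density exceeds max_per_m3 points per cubic meter."""
    keys = np.floor(points / voxel).astype(np.int64)  # voxel index of every point
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    cap = max(1, int(max_per_m3 * voxel ** 3))        # density cap scaled to voxel volume
    keep = np.ones(len(points), dtype=bool)
    rng = np.random.default_rng(seed)
    for vid in np.where(counts > cap)[0]:
        idx = np.where(inv == vid)[0]
        keep[rng.choice(idx, size=len(idx) - cap, replace=False)] = False
    return points[keep]
```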
This embodiment was run on a computer with an Intel i5-4590 3.4 GHz CPU, 16 GB of memory, and the Ubuntu 14.04 operating system; the experiment uses the camera motion trajectory over a third-party dataset in an urban environment to demonstrate the degree of precision.
Sequence 13 used in the experiment comes from the KITTI dataset, proposed by A. Geiger et al. in the document "A. Geiger and Philip Lenz, Vision meets Robotics: The KITTI Dataset, International Journal of Robotics Research, 2013"; its color images are shot by PointGray Flea2 color cameras, and its point cloud data are collected by a Velodyne HDL-64E laser radar.
The ratio (%) of the trajectory's drift distance to the map size relative to ground truth, together with the algorithm's running time, is used as the performance measure to compare the method of the invention with the mainstream monocular-vision ORB-SLAM algorithm; the results are listed in Table 1. It can be seen that after introducing the laser radar information, the method of the invention not only improves the overall computation speed but also surpasses the mainstream visual SLAM method in precision. The invention is also applicable to automatic driving and autonomous robot navigation.
Table 1. Algorithm performance comparison

Method                Time (s)    Drift distance (%)
The inventive method  120.3       1.8
ORB-SLAM              126.0       3.1

Claims (1)

1. A multi-sensor high-precision instant positioning and mapping method, characterized in that the steps are as follows:
Step 1: Input color images and pre-process them: input the color image sequence shot by a camera, and convert each color image frame into a grayscale image;
Step 2: Feature extraction: perform feature extraction on each grayscale image frame using the ORB feature extraction algorithm, obtaining the ORB feature points of each image frame;
Step 3: Feature matching and motion estimation: match the feature points of adjacent image frames using the fast approximate nearest neighbor library, and perform motion estimation on the matched points using PnP solving based on the random sample consensus method, obtaining the camera motion estimation sequence S_Cam = {(R_Cam^i, t_Cam^i) | i = 2, ..., N}, wherein (R_Cam^i, t_Cam^i) is the motion estimate of the camera between shooting the (i-1)-th and i-th image frames, N is the number of image frames, R_Cam denotes a camera rotation matrix, and t_Cam denotes a camera translation vector;
Step 4: Input the corresponding laser radar data and pre-process them: input the laser radar data, remove the points whose distance exceeds 0.8 times the laser radar range, and form the laser radar point cloud data from the remaining points; it is assumed here that the positional relationship between the laser radar and the camera shooting the color images in Step 1 is fixed, joint calibration has been completed, the two fields of view are identical, and the internal parameters are known;
Step 5: Feature matching and motion estimation of the laser radar point cloud data: perform feature matching and motion estimation on the laser radar point cloud data once every x frames; that is, divide each frame of laser radar point cloud data into four equally sized regions in the horizontal direction, extract 4 edge points and 4 planar points per region according to smoothness to constitute the feature points, match the feature points of the j-th and (j-x)-th frames of data according to smoothness, and perform motion estimation on the matched point pairs using the ICP algorithm, obtaining the laser radar motion estimation sequence
S_Lidar = {(R_Lidar^j, t_Lidar^j) | j = x+1, 2x+1, ..., M},
wherein (R_Lidar^j, t_Lidar^j) is the motion estimate of the laser radar between scanning the j-th and (j-x)-th frames of data, M is the total number of laser radar data frames, x takes values in [5, 20], R_Lidar denotes a laser radar rotation matrix, and t_Lidar denotes a laser radar translation vector;
the edge points are the points of minimum smoothness in a region, and the planar points are the points of maximum smoothness in a region; the smoothness is
c = ||Σ_{q∈S, q≠p} (L_{k,p} − L_{k,q})|| / (|S| · ||L_{k,p}||),
wherein S denotes the set of all points in the k-th scan, L_{k,p} and L_{k,q} denote the spatial coordinates of points p and q, and || · || denotes the Euclidean distance; matching according to smoothness means that two points match if the difference of their smoothness is less than 5% of the smoothness of the current point, and otherwise do not match;
Step 6: Error correction and trajectory recording: replace the sum of the camera motion estimates within the corresponding time interval with the laser radar motion estimate obtained in Step 5, obtaining the error-corrected camera motion estimation sequence S', i.e., the movement trajectory of the camera;
Step 7: Build the three-dimensional point cloud map: compute the three-dimensional point cloud map as
pcl_map = ∪_{n_j ∈ N} pcl'_{n_j},
wherein pcl'_{n_j} is the point cloud of the n_j-th image frame transformed into the coordinate system of the first image frame's viewpoint, n_j is the serial number of the image frame shot by the camera at the same moment as the m-th laser radar point cloud scan, N is the sequence formed by the n_j, and m = x+1, 2x+1, ..., M; pcl_{n_j} = {(x_l, y_l, z_l) | l = 1, ..., L} is the point cloud of the n_j-th image frame, L is the number of feature points extracted from the n_j-th image frame in Step 2, and (x_l, y_l, z_l) denotes the coordinates of a point, calculated as follows:
x_l = (u_l − c_x) · d_l / f_x
y_l = (v_l − c_y) · d_l / f_y
z_l = d_l
wherein l = 1, ..., L, (u_l, v_l) are the image coordinates of the l-th feature point, (f_x, f_y, c_x, c_y) are the given camera parameters, and d_l is the depth value retrieved for the l-th feature point in the laser radar point cloud corresponding to the image shooting moment;
if the point cloud density is too high, the point cloud map data are downsampled to obtain the final three-dimensional point cloud map; the point cloud density being too high means that the number of points in one cubic meter of space exceeds 10000.
CN201710437709.1A 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method Active CN107301654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710437709.1A CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710437709.1A CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Publications (2)

Publication Number Publication Date
CN107301654A true CN107301654A (en) 2017-10-27
CN107301654B CN107301654B (en) 2020-04-03

Family

ID=60135365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710437709.1A Active CN107301654B (en) 2017-06-12 2017-06-12 Multi-sensor high-precision instant positioning and mapping method

Country Status (1)

Country Link
CN (1) CN107301654B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040167716A1 (en) * 2002-12-17 2004-08-26 Goncalves Luis Filipe Domingues Systems and methods for controlling a density of visual landmarks in a visual simultaneous localization and mapping system
US20150304634A1 (en) * 2011-08-04 2015-10-22 John George Karvounis Mapping and tracking system
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
CN104764457A (en) * 2015-04-21 2015-07-08 北京理工大学 Urban environment composition method for unmanned vehicles
CN104933708A (en) * 2015-06-07 2015-09-23 浙江大学 Barrier detection method in vegetation environment based on multispectral and 3D feature fusion
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106127739A * 2016-06-16 2016-11-16 华东交通大学 RGB-D SLAM method combined with monocular vision
CN106153048A (en) * 2016-08-11 2016-11-23 广东技术师范学院 A kind of robot chamber inner position based on multisensor and Mapping System
CN106296812A * 2016-08-18 2017-01-04 宁波傲视智绘光电科技有限公司 Simultaneous localization and mapping method
CN106443687A (en) * 2016-08-31 2017-02-22 欧思徕(北京)智能科技有限公司 Piggyback mobile surveying and mapping system based on laser radar and panorama camera
CN106446815A (en) * 2016-09-14 2017-02-22 浙江大学 Simultaneous positioning and map building method
CN106595659A (en) * 2016-11-03 2017-04-26 南京航空航天大学 Map merging method of unmanned aerial vehicle visual SLAM under city complex environment
CN106681330A (en) * 2017-01-25 2017-05-17 北京航空航天大学 Robot navigation method and device based on multi-sensor data fusion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LV QIANG et al.: "Absolute Scale Estimation of ORB-SLAM Algorithm based on Laser Ranging", Proceedings of the 35th Chinese Control Conference *
RAÚL MUR-ARTAL et al.: "ORB-SLAM: a Versatile and Accurate Monocular SLAM System", IEEE Transactions on Robotics *
SATOSHI FUJIMOTO et al.: "ORB SLAM Map Initialization Improvement Using Depth", ICIP 2016 *
HOU Rongbo et al.: "Indoor robot localization and 3D dense map construction based on ORB-SLAM", Journal of Computer Applications (计算机应用) *
LIN Lianxiu et al.: "Embedded implementation and optimization of mobile robot based on ORB-SLAM", Microcomputer & Its Applications (微型机与应用) *

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109974742A (en) * 2017-12-28 2019-07-05 沈阳新松机器人自动化股份有限公司 A kind of laser Method for Calculate Mileage and map constructing method
CN108319976A * 2018-01-25 2018-07-24 北京三快在线科技有限公司 Mapping method and device
CN108399643A * 2018-03-15 2018-08-14 南京大学 Extrinsic parameter calibration system and method between a laser radar and a camera
CN108489496A * 2018-04-28 2018-09-04 北京空间飞行器总体设计部 Non-cooperative target relative navigation motion estimation method and system based on multi-source information fusion
CN108896994A (en) * 2018-05-11 2018-11-27 武汉环宇智行科技有限公司 A kind of automatic driving vehicle localization method and equipment
CN108647646A (en) * 2018-05-11 2018-10-12 北京理工大学 The optimizing detection method and device of low obstructions based on low harness radar
CN108647646B (en) * 2018-05-11 2019-12-13 北京理工大学 Low-beam radar-based short obstacle optimized detection method and device
CN108759823A (en) * 2018-05-28 2018-11-06 浙江大学 The positioning of low speed automatic driving vehicle and method for correcting error in particular link based on images match
CN108759823B (en) * 2018-05-28 2020-06-30 浙江大学 Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN108734654A * 2018-05-28 2018-11-02 深圳市易成自动驾驶技术有限公司 Mapping and localization method, system, and computer-readable storage medium
CN109031304A * 2018-06-06 2018-12-18 上海国际汽车城(集团)有限公司 Vehicle positioning method in tunnels based on vision and millimeter-wave radar map features
CN109073398A (en) * 2018-07-20 2018-12-21 深圳前海达闼云端智能科技有限公司 Map establishing method, positioning method, device, terminal and storage medium
CN109073398B (en) * 2018-07-20 2022-04-08 达闼机器人有限公司 Map establishing method, positioning method, device, terminal and storage medium
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109166149B (en) * 2018-08-13 2021-04-02 武汉大学 Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN108709560A (en) * 2018-08-15 2018-10-26 苏州中研讯科智能科技有限公司 Carrying robot high accuracy positioning air navigation aid based on straightway feature
CN109326006A (en) * 2018-09-30 2019-02-12 百度在线网络技术(北京)有限公司 Map amalgamation method and device
CN109341706A (en) * 2018-10-17 2019-02-15 张亮 A kind of production method of the multiple features fusion map towards pilotless automobile
CN109341706B (en) * 2018-10-17 2020-07-03 张亮 Method for manufacturing multi-feature fusion map for unmanned vehicle
US11035958B2 (en) 2018-11-15 2021-06-15 Bejing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
WO2020097840A1 (en) * 2018-11-15 2020-05-22 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
CN109579843A (en) * 2018-11-29 2019-04-05 浙江工业大学 Multirobot co-located and fusion under a kind of vacant lot multi-angle of view build drawing method
CN109579843B (en) * 2018-11-29 2020-10-27 浙江工业大学 Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
CN109634279A (en) * 2018-12-17 2019-04-16 武汉科技大学 Object positioning method based on laser radar and monocular vision
CN109634279B (en) * 2018-12-17 2022-08-12 瞿卫新 Object positioning method based on laser radar and monocular vision
CN109407115A (en) * 2018-12-25 2019-03-01 中山大学 A kind of road surface extraction system and its extracting method based on laser radar
CN109407115B (en) * 2018-12-25 2022-12-27 中山大学 Laser radar-based pavement extraction system and extraction method thereof
CN111481109B (en) * 2019-01-28 2022-08-26 北京奇虎科技有限公司 Map noise elimination method and device based on sweeper
CN111481109A (en) * 2019-01-28 2020-08-04 北京奇虎科技有限公司 Map noise elimination method and device based on sweeper
CN112105890B (en) * 2019-01-30 2023-11-17 百度时代网络技术(北京)有限公司 Map generation system based on RGB point cloud for automatic driving vehicle
CN112105890A (en) * 2019-01-30 2020-12-18 百度时代网络技术(北京)有限公司 RGB point cloud based map generation system for autonomous vehicles
WO2020154966A1 (en) * 2019-01-30 2020-08-06 Baidu.Com Times Technology (Beijing) Co., Ltd. A rgb point clouds based map generation system for autonomous vehicles
CN111694903A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Map construction method, map construction device, map construction equipment and readable storage medium
CN111694903B (en) * 2019-03-11 2023-09-12 北京地平线机器人技术研发有限公司 Map construction method, device, equipment and readable storage medium
CN110068824B (en) * 2019-04-17 2021-07-23 北京地平线机器人技术研发有限公司 Sensor pose determining method and device
CN110068824A (en) * 2019-04-17 2019-07-30 北京地平线机器人技术研发有限公司 A kind of sensor pose determines method and apparatus
CN110163968B (en) * 2019-05-28 2020-08-25 山东大学 RGBD camera large three-dimensional scene construction method and system
CN110163968A (en) * 2019-05-28 2019-08-23 山东大学 RGBD camera large-scale three dimensional scenario building method and system
CN110398745A (en) * 2019-08-05 2019-11-01 湖南海森格诺信息技术有限公司 Fork truck localization method based on laser radar and vision
CN110823211A (en) * 2019-10-29 2020-02-21 珠海市一微半导体有限公司 Multi-sensor map construction method, device and chip based on visual SLAM
CN110786585A (en) * 2019-11-21 2020-02-14 朱利 Multifunctional intelligent helmet
CN111207670A (en) * 2020-02-27 2020-05-29 河海大学常州校区 Line structured light calibration device and method
CN111428578A (en) * 2020-03-03 2020-07-17 深圳市镭神智能***有限公司 Self-body and positioning method and device thereof
CN111912431A (en) * 2020-03-19 2020-11-10 中山大学 Positioning precision test scheme for mobile robot navigation system
CN111445530A (en) * 2020-03-24 2020-07-24 广东博智林机器人有限公司 Wheelchair bed and wheelchair homing path determination method
CN112150507A (en) * 2020-09-29 2020-12-29 厦门汇利伟业科技有限公司 Method and system for synchronously reproducing 3D model of object posture and displacement
CN112150507B (en) * 2020-09-29 2024-02-02 厦门汇利伟业科技有限公司 3D model synchronous reproduction method and system for object posture and displacement
CN112230211A (en) * 2020-10-15 2021-01-15 长城汽车股份有限公司 Vehicle positioning method and device, storage medium and vehicle
CN112132754B (en) * 2020-11-25 2021-06-04 蘑菇车联信息科技有限公司 Vehicle movement track correction method and related device
CN112132754A (en) * 2020-11-25 2020-12-25 蘑菇车联信息科技有限公司 Vehicle movement track correction method and related device
CN112484746B (en) * 2020-11-26 2023-04-28 上海电力大学 Monocular vision auxiliary laser radar odometer method based on ground plane
CN112484746A (en) * 2020-11-26 2021-03-12 上海电力大学 Monocular vision-assisted laser radar odometer method based on ground plane
TWI772177B (en) * 2021-09-10 2022-07-21 迪伸電子股份有限公司 Movement control method of automatic guided device and automatic guided device
CN114863075A (en) * 2022-07-05 2022-08-05 深圳市新天泽消防工程有限公司 Fire-fighting evacuation path planning method, device and equipment based on multiple sensors
CN114863075B (en) * 2022-07-05 2022-10-14 深圳市新天泽消防工程有限公司 Fire-fighting evacuation path planning method, device and equipment based on multiple sensors

Also Published As

Publication number Publication date
CN107301654B (en) 2020-04-03

Similar Documents

Publication Publication Date Title
CN107301654A (en) A kind of positioning immediately of the high accuracy of multisensor is with building drawing method
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN112001926B (en) RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping
CN106296812B Simultaneous localization and mapping method
CN114399554B (en) Calibration method and system of multi-camera system
CN106960442A Monocular infrared night-vision robot vision wide-field 3D construction method
CN109615698A Multi-UAV SLAM map fusion algorithm based on mutual loop-closure detection
CN111899280B (en) Monocular vision odometer method adopting deep learning and mixed pose estimation
CN114419147A (en) Rescue robot intelligent remote human-computer interaction control method and system
Qian et al. Robust visual-lidar simultaneous localization and mapping system for UAV
CN113012208B (en) Multi-view remote sensing image registration method and system
CN103544710A (en) Image registration method
Ceriani et al. Pose interpolation slam for large maps using moving 3d sensors
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN111860651A (en) Monocular vision-based semi-dense map construction method for mobile robot
CN104751451B Dense point cloud extraction method based on low-altitude high-resolution UAV images
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
Dong et al. Probability driven approach for point cloud registration of indoor scene
CN116740488B (en) Training method and device for feature extraction model for visual positioning
CN108921896A Downward-looking visual compass fusing point and line features
CN117029870A (en) Laser odometer based on road surface point cloud
CN116681844A (en) Building white film construction method based on sub-meter stereopair satellite images
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN115830116A (en) Robust visual odometer method
Chen et al. Binocular vision localization based on vision SLAM system with multi-sensor fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant