CN108734654A - Mapping and localization method, system and computer-readable storage medium - Google Patents

Mapping and localization method, system and computer-readable storage medium Download PDF

Info

Publication number
CN108734654A
CN108734654A (application CN201810528303.9A)
Authority
CN
China
Prior art keywords
point cloud
three-dimensional point cloud
localization method
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810528303.9A
Other languages
Chinese (zh)
Inventor
刘新
宋朝忠
郭烽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yicheng Automatic Driving Technology Co Ltd
Original Assignee
Shenzhen Yicheng Automatic Driving Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yicheng Automatic Driving Technology Co Ltd filed Critical Shenzhen Yicheng Automatic Driving Technology Co Ltd
Priority to CN201810528303.9A priority Critical patent/CN108734654A/en
Publication of CN108734654A publication Critical patent/CN108734654A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a mapping and localization method, system, and computer-readable storage medium. The mapping and localization method includes: processing, based on a SLAM algorithm, the point cloud information within the field of view acquired by a lidar to obtain three-dimensional point cloud data; acquiring, with a monocular camera, two-dimensional texture information corresponding to the three-dimensional point cloud data, and correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain corrected three-dimensional point cloud data; and building a three-dimensional point cloud map from the corrected three-dimensional point cloud data. By combining the two-dimensional texture information of a monocular camera with lidar SLAM, both mapping and localization in SLAM tasks achieve higher precision. Building the three-dimensional point cloud map from the texture-corrected result preserves the practicality of the SLAM algorithm while improving its accuracy.

Description

Mapping and localization method, system and computer-readable storage medium
Technical field
The present invention relates to the technical field of machine vision, and more particularly to a mapping and localization method, system, and computer-readable storage medium.
Background technology
With the rapid development of sensor technology, artificial intelligence theory, and computer technology, research in the field of robotics has steadily deepened, and a variety of autonomous mobile robots with environment perception, behavior control, dynamic decision-making, and interaction capabilities have been developed. Compared with traditional industrial robots, the defining feature of an autonomous mobile robot is that it can move freely in complex environments to carry out a variety of tasks. When the motion state of a target and the structure of the space are unknown, means such as vision are used to detect the target, reconstruct its 3D scale, and accurately measure its pose, providing the conditions for further operations.
Traditional measurement methods based on scanning lidar can obtain dense three-dimensional point cloud information of a target surface and ultimately achieve a three-dimensional reconstruction of the target, but their measurement accuracy falls off with the square of the distance, and they are only suitable for measuring slowly moving targets at close range. Monocular vision is the most common, and also the simplest, optical sensing approach, and is standard equipment on most spacecraft. Researchers abroad have extended simultaneous localization and mapping (SLAM) methods to robots based on a monocular camera and successfully applied them to target measurement. However, monocular vision cannot directly acquire the depth information of a target; in non-cooperative target pose measurement it must be used together with various other sensors, and it is often limited in autonomous system applications.
Summary of the invention
The main purpose of the present invention is to provide a mapping and localization method, system, and computer-readable storage medium, aiming to solve the technical problems in the prior art that using a lidar or a monocular camera alone yields slow computation, insufficiently rich information, and low localization accuracy.
To achieve the above object, the present invention provides a mapping and localization method that includes the following steps:
processing, based on a SLAM algorithm, the point cloud information within the field of view acquired by a lidar to obtain three-dimensional point cloud data;
acquiring, with a monocular camera, two-dimensional texture information corresponding to the three-dimensional point cloud data, and correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain corrected three-dimensional point cloud data;
building a three-dimensional point cloud map from the corrected three-dimensional point cloud data.
Optionally, the step of processing, based on the SLAM algorithm, the point cloud information within the field of view acquired by the lidar to obtain three-dimensional point cloud data includes:
extracting feature points from the lidar point cloud information every x frames, where x is greater than or equal to zero;
matching the feature points of point cloud information x frames apart according to smoothness;
performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the lidar, and thereby the three-dimensional point cloud data.
Optionally, the step of matching the feature points of point cloud information x frames apart according to smoothness includes:
if the difference in smoothness between feature points of point clouds x frames apart is less than 5% of the smoothness of the current feature point, the feature points are matched;
if the difference in smoothness between feature points of point clouds x frames apart is greater than or equal to 5% of the smoothness of the current feature point, the feature points are not matched.
Optionally, the step of acquiring, with the monocular camera, the two-dimensional texture information corresponding to the three-dimensional point cloud data includes:
acquiring two-dimensional image information with the monocular camera, and converting the image information into grayscale images;
performing feature extraction on each frame of grayscale image using the ORB feature extraction algorithm to obtain the ORB feature points of each frame of two-dimensional image;
matching the feature points of adjacent image frames using the nearest neighbor method, and performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the monocular camera, and thereby the two-dimensional texture information corresponding to the three-dimensional point cloud data.
Optionally, the step of correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain corrected three-dimensional point cloud data includes:
matching the motion trajectory of the monocular camera with the motion trajectory of the lidar over the corresponding time interval, and fusing the two-dimensional texture information to obtain corrected three-dimensional point cloud data.
Optionally, the step of building a three-dimensional point cloud map from the corrected three-dimensional point cloud data further includes:
if the point cloud density of the corrected three-dimensional point cloud data exceeds 10,000 points per cubic metre of space, downsampling the three-dimensional point cloud data to obtain the three-dimensional point cloud map.
Optionally, after the step of building a three-dimensional point cloud map from the corrected three-dimensional point cloud data, the method further includes:
when matching and localizing an object to be localized based on the three-dimensional point cloud map, extracting the point cloud at the position of the object to be localized using the SLAM algorithm, and matching the extracted point cloud against the three-dimensional point cloud map;
if the point cloud matches the three-dimensional point cloud map, acquiring the two-dimensional texture information of the object to be localized, and matching the two-dimensional texture information against the three-dimensional point cloud map;
if the two-dimensional texture information matches the three-dimensional point cloud map, determining the position of the object to be localized.
Optionally, before the step of matching and localizing the object to be localized based on the three-dimensional point cloud map, the method further includes:
pre-establishing a three-dimensional point cloud map database according to the mapping and localization method;
determining the target three-dimensional point cloud map in the three-dimensional point cloud map database that covers the object to be localized.
The present invention also provides a mapping and localization system. The mapping and localization system includes: a monocular camera, a lidar, a memory, a processor, and a mapping and localization program stored on the memory and runnable on the processor, where the mapping and localization program, when executed by the processor, implements the steps of the mapping and localization method described above.
The present invention also provides a computer-readable storage medium on which a mapping and localization program is stored, where the mapping and localization program, when executed by a processor, implements the steps of the mapping and localization method described above.
In the mapping and localization method provided by the present invention, the point cloud information within the field of view acquired by a lidar is processed based on a SLAM algorithm to obtain three-dimensional point cloud data; two-dimensional texture information corresponding to the three-dimensional point cloud data is acquired with a monocular camera, and the three-dimensional point cloud data is corrected with the two-dimensional texture information to obtain corrected three-dimensional point cloud data; a three-dimensional point cloud map is then built from the corrected three-dimensional point cloud data. By combining the two-dimensional texture information of a monocular camera with lidar SLAM, both mapping and localization in SLAM tasks achieve higher precision. Building the three-dimensional point cloud map from the texture-corrected result preserves the practicality of the SLAM algorithm while improving its accuracy.
Description of the drawings
Fig. 1 is a schematic diagram of the device structure of the hardware running environment involved in an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of one embodiment of the mapping and localization method of the present invention;
Fig. 3 is a schematic flow diagram of another embodiment of the mapping and localization method of the present invention;
Fig. 4 is a schematic flow diagram of a further embodiment of the mapping and localization method of the present invention;
Fig. 5 is a schematic flow diagram of yet another embodiment of the mapping and localization method of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements serve only to aid the description of the present invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
As shown in Fig. 1, Fig. 1 is a schematic diagram of the device structure of the hardware running environment involved in an embodiment of the present invention.
The mapping and localization system of the embodiment of the present invention may include a PC, and may also include a portable terminal device with a display function such as a smartphone, tablet computer, e-book reader, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, or pocket computer.
As shown in Fig. 1, the mapping and localization system may include: a lidar, a camera, a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 implements connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory; optionally, it may also be a storage device independent of the aforementioned processor 1001.
Optionally, the mapping and localization system may also include a cloud server, an RF (Radio Frequency) circuit, sensors, an audio circuit, a Wi-Fi module, and the like. The sensors include, for example, optical sensors, motion sensors, and other sensors. Specifically, the optical sensors may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display screen according to the ambient light, and the proximity sensor can turn off the display screen and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity accelerometer can detect the magnitude of acceleration in all directions (generally three axes) and, when static, the magnitude and direction of gravity; it can be used for applications that recognize the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Of course, other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured, which will not be described in detail here.
Those skilled in the art will understand that the mapping and localization system structure shown in Fig. 1 does not constitute a limitation on the mapping and localization system, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
As shown in Fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a mapping and localization program.
In the mapping and localization system shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server for data communication with it; the user interface 1003 is mainly used to connect to a client (user terminal) for data communication with it; and the processor 1001 may be used to call the mapping and localization program stored in the memory 1005.
Based on the above mapping and localization system hardware structure and communication network system, the embodiments of the mapping and localization method of the present invention are proposed.
The present invention provides a mapping and localization method. In an embodiment of the mapping and localization method, referring to Fig. 2, the method includes:
Step S10: processing, based on a SLAM algorithm, the point cloud information within the field of view acquired by a lidar to obtain three-dimensional point cloud data;
A lidar is a radar system that emits a laser beam to detect characteristic quantities of a target such as position and velocity. Its working principle is to emit a detection signal (a laser beam) toward the target and then compare the received signal reflected from the target (the target echo) with the transmitted signal; after appropriate processing, relevant information about the target can be obtained, such as target range, bearing, altitude, velocity, attitude, and even shape, so that targets such as aircraft and missiles can be detected, tracked, and identified. A lidar consists of a laser transmitter, an optical receiver, a turntable, an information processing system, and so on: the laser converts an electrical pulse into an optical pulse and emits it, and the optical receiver restores the optical pulse reflected from the target into an electrical pulse, which is sent to the display. Existing lidars also include compact multi-line panoramic lidars.
Specifically, as shown in Fig. 3, step S10 includes:
Step S11: extracting feature points from the lidar point cloud information every x frames, where x is greater than or equal to zero;
Feature points are extracted from the acquired lidar point cloud information every x frames and a motion estimation is performed, where x may be zero, 1, 2, and so on; that is, feature points may be extracted from every frame of lidar point cloud information, or from frames one, two, or more frames apart. Preferably, feature points are re-extracted after an interval of 5 to 20 frames, to reduce algorithmic complexity and improve computational efficiency.
Step S12: matching the feature points of point cloud information x frames apart according to smoothness;
Since a point cloud consists of discrete three-dimensional points in space that carry no information other than coordinates, and the coordinates of the same spatial point change continuously during mobile data acquisition, trackable features must be defined on the point cloud. For this purpose, a smoothness-based feature extraction scheme is defined. The points with maximum smoothness are defined as planar points, and the points with minimum smoothness are defined as edge points. To avoid generating too many feature points, each scanned point cloud is first divided into four equally sized regions: for example, taking the four principal directions of 0°, 90°, 180°, and 270°, with ±45° around each direction forming one region. Each region contributes at most 4 edge points and 4 planar points as feature points.
After the feature points are obtained, the feature points of frame n and frame n - x are matched according to smoothness: a pair of feature points whose smoothness difference is less than 5% of the smoothness of the current point is considered to be two observations of the same spatial point in the two scans, i.e., a matched point pair.
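The per-point smoothness measure and the 5% matching rule described above can be sketched as follows. This is a minimal illustration assuming a LOAM-style measure computed over k neighbours on each side along a scan line; the patent does not give an explicit formula, so the exact definition here is an assumption.

```python
import numpy as np

def curvature(scan, k=5):
    """LOAM-style per-point measure over one scan line (an n-by-3 array):
    the norm of the summed offsets to the k neighbours on each side,
    normalized by range. It is ~0 on flat (planar) stretches and large at
    edges, i.e. the inverse of the patent's "smoothness" naming."""
    n = len(scan)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = (scan[i - k:i + k + 1] - scan[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]))
    return c

def match_by_smoothness(prev_vals, curr_vals, rel_tol=0.05):
    """Greedy cross-frame pairing: feature i of the current frame matches
    feature j of the previous frame when their smoothness values differ by
    less than 5% of the current feature's value, as in the description."""
    pairs, used = [], set()
    for i, ci in enumerate(curr_vals):
        for j, cj in enumerate(prev_vals):
            if j not in used and abs(cj - ci) < rel_tol * ci:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs
```

On a perfectly straight, evenly sampled scan segment the symmetric neighbour offsets cancel, so the measure is exactly zero there and peaks at corners, which is what makes extreme values usable as trackable features.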
Step S13: performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the lidar, and thereby the three-dimensional point cloud data.
After the matched feature points are obtained, motion estimation is performed: the motion estimate of the lidar is obtained by solving for the transform between the two sets of three-dimensional points, yielding the motion trajectory of the lidar and, in turn, the three-dimensional point cloud data.
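One standard way to "solve two sets of three-dimensional points" for a motion estimate is the SVD-based least-squares rigid alignment (the Kabsch solution); the patent does not name a specific solver, so this is an illustrative sketch:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~= R @ src + t,
    computed from matched 3-D point pairs via the Kabsch SVD solution."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Chaining the per-frame (R, t) estimates gives the sensor trajectory; applying them to each scan accumulates the registered three-dimensional point cloud.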
Step S20: acquiring, with a monocular camera, the two-dimensional texture information corresponding to the three-dimensional point cloud data, and correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain corrected three-dimensional point cloud data;
Specifically, as shown in Fig. 4, step S20 includes:
Step S21: acquiring two-dimensional image information with the monocular camera, and converting the image information into grayscale images;
The image information obtained by existing cameras is generally in color, so the acquired image information is first preprocessed by converting the color images into grayscale images, which is convenient for subsequent computation.
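The color-to-grayscale preprocessing can be sketched with the common BT.601 luma weights; the patent does not specify which conversion is used, so the weights here are an assumption:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (H, W, 3) 8-bit RGB image to an (H, W) 8-bit grayscale
    image using the ITU-R BT.601 luma weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).round().astype(np.uint8)
```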
Step S22: performing feature extraction on each frame of grayscale image using the ORB feature extraction algorithm to obtain the ORB feature points of each frame of two-dimensional image;
ORB (Oriented FAST and Rotated BRIEF) is a fast feature point extraction and description algorithm, proposed in 2011 by Rublee et al.: Rublee, Ethan, et al., "ORB: An efficient alternative to SIFT or SURF", IEEE International Conference on Computer Vision, 2011: 2564-2571. The ORB algorithm has two parts: feature point extraction and feature point description. The extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and the description is improved from the BRIEF (Binary Robust Independent Elementary Features) feature description algorithm. ORB combines the FAST feature point detection method with BRIEF feature descriptors, with improvements and optimizations over the originals. Experiments show that the extraction speed of ORB features is about 100 times that of SIFT and 10 times that of SURF.
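The FAST detector at the core of ORB decides cornerness with a segment test on a 16-pixel Bresenham circle of radius 3: a pixel is a corner if enough contiguous circle pixels are all brighter or all darker than the centre by a threshold. A minimal sketch of that test (wrap-around runs handled by doubling the ring; thresholds are illustrative):

```python
import numpy as np

# (dy, dx) offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Segment test: (y, x) is a corner if >= n contiguous circle pixels are
    all brighter than img[y, x] + t or all darker than img[y, x] - t."""
    p = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    for sign in (1, -1):                      # brighter arc, then darker arc
        hits = (sign * (ring - p)) > t
        doubled = np.concatenate([hits, hits])   # allow runs across the wrap
        run = best = 0
        for h in doubled:
            run = run + 1 if h else 0
            best = max(best, min(run, 16))
        if best >= n:
            return True
    return False
```

In practice a library implementation (e.g. OpenCV's ORB detector) would be used; this sketch only shows the decision rule the text describes.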
For the grayscale images obtained in the preceding step, ORB feature points are extracted frame by frame using the ORB feature extraction algorithm.
Step S23: matching the feature points of adjacent image frames using the nearest neighbor method, and performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the monocular camera, and thereby the two-dimensional texture information corresponding to the three-dimensional point cloud data;
In the nearest neighbor method, it is assumed that each class contains multiple sample data and that each datum carries a unique class label indicating which class the sample belongs to; the nearest neighbor method then computes the distance from each sample datum to the datum to be classified. If most of the samples nearest to a sample in feature space belong to some class, the sample also belongs to that class.
The feature points of adjacent image frames are matched, and motion estimation is performed on the matched point pairs using the random sample consensus (RANSAC) method, yielding the motion trajectory of the monocular camera and, in turn, the two-dimensional texture information corresponding to the three-dimensional point cloud data.
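Since ORB descriptors are binary strings, the nearest-neighbour matching between adjacent frames is typically done under Hamming distance; a minimal sketch (the distance threshold is an illustrative assumption, and the RANSAC motion estimation is left to the caller):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def nn_match(desc1, desc2, max_dist=64):
    """For each descriptor in frame 1, keep its nearest neighbour in frame 2
    if the distance is below max_dist. Returns (i, j, distance) triples;
    cross-checking and the ratio test are omitted for brevity."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = [hamming(d1, d2) for d2 in desc2]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j, dists[j]))
    return matches
```

The surviving pairs would then be fed to a RANSAC pose estimator to reject outliers and recover the camera motion between the two frames.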
Step S24: matching the motion trajectory of the monocular camera with the motion trajectory of the lidar over the corresponding time interval, and fusing the two-dimensional texture information to obtain corrected three-dimensional point cloud data.
Based on the motion trajectory and three-dimensional point cloud data of the lidar and the motion trajectory and two-dimensional texture information of the monocular camera obtained in the preceding steps, the motion trajectory of the monocular camera is matched against the motion trajectory of the lidar, and the two-dimensional texture information of the matched trajectory is fused into the three-dimensional point cloud data, yielding three-dimensional point cloud data corrected with the two-dimensional texture information.
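Matching the two trajectories "over the corresponding time interval" amounts to associating poses by timestamp; a minimal sketch under the assumption that both trajectories carry timestamps (the tolerance value is illustrative):

```python
import numpy as np

def associate(t_lidar, t_cam, tol=0.05):
    """Pair each camera timestamp with the nearest lidar timestamp within
    tol seconds; camera frames with no lidar pose close enough in time are
    dropped. Returns (lidar_index, camera_index) pairs."""
    t_lidar = np.asarray(t_lidar)
    pairs = []
    for i, tc in enumerate(t_cam):
        j = int(np.argmin(np.abs(t_lidar - tc)))
        if abs(t_lidar[j] - tc) <= tol:
            pairs.append((j, i))
    return pairs
```

Each associated pair identifies which camera frame's texture observation should be fused into the point cloud accumulated around the corresponding lidar pose.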
Step S30: building a three-dimensional point cloud map from the corrected three-dimensional point cloud data.
The corrected three-dimensional point cloud data of multiple frames is superimposed to build the three-dimensional point cloud map. If the point cloud is too dense, the display speed of the three-dimensional point cloud map drops; to improve display speed, the density of the point cloud must then be reduced. A spatial point density threshold can be set when storing the point cloud: when the number of spatial points in one cubic metre of space exceeds 10,000, the point cloud data is downsampled to keep the number of spatial points within a reasonable range, yielding the final three-dimensional point cloud map. Of course, when scanning a narrow space, the spatial point density threshold can be raised accordingly.
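The density-threshold downsampling can be sketched as a voxel-grid filter: when the cloud exceeds the threshold, one representative point per cell is kept, with the cell size chosen so that one point per cell roughly approximates the target density. The bounding-box density estimate and cell-size rule are illustrative assumptions.

```python
import numpy as np

def enforce_density(points, max_per_m3=10000):
    """If the cloud exceeds max_per_m3 points per cubic metre (estimated on
    its bounding box), voxel-grid downsample by keeping the first point in
    each occupied cell; otherwise return the cloud unchanged."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    volume = max(float(np.prod(hi - lo)), 1e-9)
    if len(points) / volume <= max_per_m3:
        return points
    voxel = (1.0 / max_per_m3) ** (1.0 / 3.0)   # ~one point per cell at target density
    keys = np.floor((points - lo) / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```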
In this embodiment, the point cloud information within the field of view acquired by the lidar is processed based on the SLAM algorithm to obtain three-dimensional point cloud data; the two-dimensional texture information corresponding to the three-dimensional point cloud data is acquired with the monocular camera, and the three-dimensional point cloud data is corrected with the two-dimensional texture information to obtain corrected three-dimensional point cloud data; a three-dimensional point cloud map is built from the corrected three-dimensional point cloud data. By combining the two-dimensional texture information of the monocular camera with lidar SLAM, both mapping and localization in SLAM tasks achieve higher precision. Building the three-dimensional point cloud map from the texture-corrected result preserves the practicality of the SLAM algorithm while improving its accuracy.
Further, in an embodiment of the mapping and localization method of the present invention, as shown in Fig. 5, after the step S30 of building a three-dimensional point cloud map from the corrected three-dimensional point cloud data, the method further includes:
Step S40: when matching and localizing an object to be localized based on the three-dimensional point cloud map, extracting the point cloud at the position of the object to be localized using the SLAM algorithm, and matching the extracted point cloud against the three-dimensional point cloud map;
Step S41: if the point cloud matches the three-dimensional point cloud map, acquiring the two-dimensional texture information of the object to be localized and matching the two-dimensional texture information against the three-dimensional point cloud map;
Step S42: if the two-dimensional texture information matches the three-dimensional point cloud map, determining the position of the object to be localized.
When a certain object needs to be matched and localized, because the three-dimensional point cloud map fuses three-dimensional point cloud data and two-dimensional texture data, the object can be matched by both point cloud matching and two-dimensional texture matching, improving matching precision and yielding a more accurate localization of the object.
For example, when a certain object to be localized is obtained and its position needs to be determined, the point cloud of the object is extracted using the SLAM algorithm and matched against the pre-stored three-dimensional point cloud map. If the point cloud matches, the two-dimensional texture information of the object is then extracted and matched against the three-dimensional point cloud map; if the two-dimensional texture information also matches, the position of the object can be determined. If the point cloud of the object does not match the three-dimensional point cloud map, or the two-dimensional texture information does not match, then the object is not within the three-dimensional point cloud map and cannot be localized.
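The two-stage accept/reject logic of steps S40 to S42 can be sketched as a simple gate; the predicate and solver signatures below are hypothetical stand-ins for the actual matching routines, which the patent does not specify:

```python
def locate(obj_points, obj_texture, cloud_match, texture_match, pose_solver):
    """Two-stage gate: geometry first, texture second. cloud_match and
    texture_match are caller-supplied predicates against the prebuilt map;
    pose_solver computes the final position once both stages accept."""
    if not cloud_match(obj_points):
        return None          # geometry rejects: object not on this map
    if not texture_match(obj_texture):
        return None          # texture rejects: geometric match was ambiguous
    return pose_solver(obj_points)
```

Requiring both stages to accept is what raises the matching precision: a geometric coincidence alone, or a texture coincidence alone, is not enough to report a position.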
In this embodiment, the position of the object to be localized is determined by both point cloud matching and two-dimensional texture matching; only when both conditions are satisfied is the localization of the object completed, so the matching precision for the object is improved and its localization is more accurate.
Optionally, in an embodiment of the mapping and localization method of the present invention, before the step S40 of matching and localizing the object to be localized based on the three-dimensional point cloud map, the method further includes:
Step S401 pre-establishes three-dimensional point cloud map data base according to the drawing with localization method;
Step S402 determines the target three-dimensional point cloud map in the three-dimensional point cloud map data base residing for object to be positioned.
Before matching and localizing an object against a three-dimensional point cloud map obtained by the mapping and localization method provided by the present invention, a three-dimensional point cloud map database may first be established that stores the three-dimensional point cloud map of each region. The maps available for comparison and matching are thereby more complete, avoiding the situation where no match — and hence no localization — can be found. Thus, when the position of an object to be localized needs to be determined, the target three-dimensional point cloud map containing the object is first identified in the database, i.e., the approximate position of the object is determined, and its specific position is then determined precisely. For example, suppose the database stores the three-dimensional point cloud maps of all districts of Shenzhen, and a vehicle to be localized in Shenzhen is acquired. The approximate location of the vehicle is determined first — say, Shennan Road in Nanshan District — and then, based on the three-dimensional point cloud map of that area, the specific position of the vehicle is further determined. Of course, for faster analysis and processing, the three-dimensional point cloud maps may be partitioned into still smaller sub-maps.
In the present embodiment, the three-dimensional point cloud map database is established first and stores the three-dimensional point cloud map of each region, so that the maps available for matching are more complete and the case where no match (and hence no localization) is found is avoided. With region-partitioned three-dimensional point cloud maps, the object to be localized can be coarsely localized and then precisely localized more conveniently and quickly, raising the processing speed of localization.
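Steps S401/S402 — building a region-indexed map database and first selecting the coarse sub-map in which an object falls — could be organized along the following lines. The class, method names, and string region keys are illustrative assumptions; the patent does not prescribe a storage scheme.

```python
from dataclasses import dataclass, field

@dataclass
class MapDatabase:
    """Region-indexed store of 3D point cloud sub-maps (step S401).
    Each key names a region; each value is that region's sub-map."""
    regions: dict = field(default_factory=dict)

    def add(self, region, submap):
        """Store the three-dimensional point cloud map of one region."""
        self.regions[region] = submap

    def target_map(self, coarse_region):
        """Step S402: return the sub-map for the coarse region the object
        is in; fine localization then runs only against this smaller map."""
        return self.regions.get(coarse_region)
```

Usage mirrors the Shenzhen example above: `db.add("Nanshan/Shennan", submap)` during mapping, then `db.target_map("Nanshan/Shennan")` to restrict precise localization to that district's map.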
The present invention also provides a mapping and positioning system. The mapping and positioning system includes: a monocular camera, a laser radar, a memory, a processor, and a mapping and positioning program stored on the memory and executable on the processor; when executed by the processor, the mapping and positioning program implements the steps of the mapping and localization method described above.
The present invention also provides a computer-readable storage medium on which a mapping and positioning program is stored; when executed by a processor, the mapping and positioning program implements the steps of the mapping and localization method described above.
The embodiments of the mapping and positioning system and of the computer-readable storage medium of the present invention contain all the technical features of the embodiments of the mapping and localization method described above; their detailed description is essentially the same as that of those embodiments and is not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes that element.
The numbering of the above embodiments of the present invention is for description only and does not indicate any ranking of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware alone, but in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention — in essence, the part contributing over the prior art — can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and including instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit its scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A mapping and localization method, characterized in that the mapping and localization method comprises the following steps:
processing, based on a SLAM algorithm, the point cloud information within the viewing angle acquired by a laser radar to obtain three-dimensional point cloud data;
acquiring, based on a monocular camera, the two-dimensional texture information corresponding to the three-dimensional point cloud data, and correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain corrected three-dimensional point cloud data;
constructing a three-dimensional point cloud map according to the corrected three-dimensional point cloud data to obtain the three-dimensional point cloud map.
2. The mapping and localization method according to claim 1, characterized in that the step of processing, based on the SLAM algorithm, the point cloud information within the viewing angle acquired by the laser radar to obtain the three-dimensional point cloud data comprises:
performing feature point extraction on the laser radar point cloud information every x frames, x being greater than or equal to zero;
matching the feature points of the point cloud information x frames apart according to smoothness;
performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the laser radar and thereby the three-dimensional point cloud data.
3. The mapping and localization method according to claim 2, characterized in that the step of matching the feature points of the point cloud information x frames apart according to smoothness comprises:
if the difference in smoothness between feature points of point clouds x frames apart is less than 5% of the smoothness of the current feature point, the feature points match;
if the difference in smoothness between feature points of point clouds x frames apart is greater than or equal to 5% of the smoothness of the current feature point, the feature points do not match.
4. The mapping and localization method according to claim 2, characterized in that the step of acquiring, based on the monocular camera, the two-dimensional texture information corresponding to the three-dimensional point cloud data comprises:
acquiring two-dimensional image information based on the monocular camera and converting the image information into grayscale images;
performing feature extraction on each frame of grayscale image using an ORB feature extraction algorithm to obtain the ORB feature points of each frame of two-dimensional image;
matching the feature points of adjacent frames using a nearest-neighbor method, performing motion estimation on the matched feature point pairs to obtain the motion trajectory of the monocular camera, and thereby obtaining the two-dimensional texture information corresponding to the three-dimensional point cloud data.
5. The mapping and localization method according to claim 4, characterized in that the step of correcting the three-dimensional point cloud data with the two-dimensional texture information to obtain the corrected three-dimensional point cloud data comprises:
matching the motion trajectory of the monocular camera in the corresponding time interval with the motion trajectory of the laser radar, and fusing the two-dimensional texture information to obtain the corrected three-dimensional point cloud data.
6. The mapping and localization method according to claim 1, characterized in that the step of constructing a three-dimensional point cloud map according to the corrected three-dimensional point cloud data to obtain the three-dimensional point cloud map further comprises:
if the point cloud density of the corrected three-dimensional point cloud data exceeds 10000 points per cubic metre of space, downsampling the three-dimensional point cloud data to obtain the three-dimensional point cloud map.
7. The mapping and localization method according to claim 1, characterized in that, after the step of constructing a three-dimensional point cloud map according to the corrected three-dimensional point cloud data, the method further comprises:
when performing matching localization of an object to be localized based on the three-dimensional point cloud map, extracting the point cloud of the object to be localized using a SLAM algorithm, and matching the extracted point cloud with the three-dimensional point cloud map;
if the point cloud matches the three-dimensional point cloud map, acquiring the two-dimensional texture information of the object to be localized, and matching the two-dimensional texture information with the three-dimensional point cloud map;
if the two-dimensional texture information matches the three-dimensional point cloud map, determining the position of the object to be localized.
8. The mapping and localization method according to claim 7, characterized in that, before the step of performing matching localization of an object to be localized based on the three-dimensional point cloud map, the method further comprises:
pre-establishing a three-dimensional point cloud map database according to the mapping and localization method;
determining, in the three-dimensional point cloud map database, the target three-dimensional point cloud map in which the object to be localized is located.
9. A mapping and positioning system, characterized in that the mapping and positioning system comprises: a monocular camera, a laser radar, a memory, a processor, and a mapping and positioning program stored on the memory and executable on the processor, the mapping and positioning program, when executed by the processor, implementing the steps of the mapping and localization method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that a mapping and positioning program is stored on the computer-readable storage medium, the mapping and positioning program, when executed by a processor, implementing the steps of the mapping and localization method according to any one of claims 1 to 8.
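The two numeric gates claimed above — claim 3's 5% smoothness threshold for matching feature points x frames apart, and claim 6's downsampling trigger at 10000 points per cubic metre — reduce to simple predicates. The thresholds below come from the claims themselves; the function names and signatures are illustrative assumptions.

```python
def smoothness_match(c_prev: float, c_cur: float, rel_tol: float = 0.05) -> bool:
    """Claim 3: two feature points x frames apart match only when their
    smoothness difference is below 5% of the current point's smoothness."""
    return abs(c_prev - c_cur) < rel_tol * c_cur

def needs_downsampling(num_points: int, volume_m3: float,
                       max_density: float = 10000.0) -> bool:
    """Claim 6: downsample the corrected cloud when its density exceeds
    10000 points per cubic metre of space."""
    return num_points / volume_m3 > max_density
```

For example, a candidate pair with smoothness 1.00 and 1.02 passes the claim 3 gate (difference 0.02 < 0.051), while a corrected cloud holding 25000 points in 2 m³ (12500 points/m³) triggers claim 6's downsampling.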
CN201810528303.9A 2018-05-28 2018-05-28 Mapping and localization method, system, and computer-readable storage medium Pending CN108734654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810528303.9A CN108734654A (en) Mapping and localization method, system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN108734654A true CN108734654A (en) 2018-11-02

Family

ID=63935607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810528303.9A Pending CN108734654A (en) 2018-05-28 2018-05-28 It draws and localization method, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108734654A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106643555A (en) * 2016-12-27 2017-05-10 清华大学 Connection piece identification method based on structured light three-dimensional measurement system
CN106940186A * 2017-02-16 2017-07-11 华中科技大学 Robot autonomous localization and navigation method and system
CN107301654A * 2017-06-12 2017-10-27 西北工业大学 High-accuracy multi-sensor simultaneous localization and mapping method
CN107590827A * 2017-09-15 2018-01-16 重庆邮电大学 Indoor mobile robot visual SLAM method based on Kinect
CN107830854A * 2017-11-06 2018-03-23 深圳精智机器有限公司 Visual positioning method based on ORB sparse point cloud and QR code
CN108010123A * 2017-11-23 2018-05-08 东南大学 Three-dimensional point cloud acquisition method preserving topology information

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11035958B2 (en) 2018-11-15 2021-06-15 Bejing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
WO2020097840A1 (en) * 2018-11-15 2020-05-22 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for correcting a high-definition map based on detection of obstructing objects
CN109784315A * 2019-02-20 2019-05-21 苏州风图智能科技有限公司 Tracking detection method, device, and system for 3D obstacles, and computer storage medium
CN109784315B (en) * 2019-02-20 2021-11-09 苏州风图智能科技有限公司 Tracking detection method, device and system for 3D obstacle and computer storage medium
CN111694903A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Map construction method, map construction device, map construction equipment and readable storage medium
CN111694903B (en) * 2019-03-11 2023-09-12 北京地平线机器人技术研发有限公司 Map construction method, device, equipment and readable storage medium
CN111724472A (en) * 2019-03-19 2020-09-29 北京四维图新科技股份有限公司 Method and device for determining spatial position of map element
CN109993700A (en) * 2019-04-03 2019-07-09 百度在线网络技术(北京)有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN110458897A (en) * 2019-08-13 2019-11-15 北京积加科技有限公司 Multi-cam automatic calibration method and system, monitoring method and system
CN110458897B (en) * 2019-08-13 2020-12-01 北京积加科技有限公司 Multi-camera automatic calibration method and system and monitoring method and system
CN113077509A (en) * 2020-01-03 2021-07-06 上海依图信息技术有限公司 Space mapping calibration method and space mapping system based on synchronous positioning and mapping
CN111402332A * 2020-03-10 2020-07-10 兰剑智能科技股份有限公司 AGV composite mapping and navigation positioning method and system based on SLAM
CN111402332B (en) * 2020-03-10 2023-08-18 兰剑智能科技股份有限公司 AGV composite map building and navigation positioning method and system based on SLAM
CN112739983A (en) * 2020-04-24 2021-04-30 华为技术有限公司 Method for correcting point cloud data and related device
CN112150595A (en) * 2020-09-21 2020-12-29 广东博智林机器人有限公司 Point cloud data processing method, device, equipment and medium
CN112598736A (en) * 2020-12-24 2021-04-02 长沙行深智能科技有限公司 Map construction based visual positioning method and device
WO2022205750A1 (en) * 2021-03-31 2022-10-06 深圳市慧鲤科技有限公司 Point cloud data generation method and apparatus, electronic device, and storage medium
CN116503566A (en) * 2023-06-25 2023-07-28 深圳市其域创新科技有限公司 Three-dimensional modeling method and device, electronic equipment and storage medium
CN116503566B (en) * 2023-06-25 2024-03-29 深圳市其域创新科技有限公司 Three-dimensional modeling method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108734654A (en) Mapping and localization method, system, and computer-readable storage medium
CN109059895B (en) Multi-mode indoor distance measurement and positioning method based on mobile phone camera and sensor
CN109947886B (en) Image processing method, image processing device, electronic equipment and storage medium
TWI570633B (en) Method of matching image features with reference features
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
CN109993793B (en) Visual positioning method and device
EP2491529B1 (en) Providing a descriptor for at least one feature of an image
CN108052624A (en) Processing Method of Point-clouds, device and computer readable storage medium
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN104007817B (en) Wearable information system at least one video camera
CN112435338B (en) Method and device for acquiring position of interest point of electronic map and electronic equipment
CN109636854A (en) A kind of augmented reality three-dimensional Tracing Registration method based on LINE-MOD template matching
US20150098616A1 (en) Object recognition and map generation with environment references
TW201432548A (en) Three-dimensional interaction method and system based on identification code
JP5833507B2 (en) Image processing device
CN109559330A (en) Visual tracking method, device, electronic equipment and the storage medium of moving target
KR20170036747A (en) Method for tracking keypoints in a scene
CN114185073A (en) Pose display method, device and system
CN108052869B (en) Lane line recognition method, lane line recognition device and computer-readable storage medium
CN115482556A (en) Method for key point detection model training and virtual character driving and corresponding device
Alam et al. Pose estimation algorithm for mobile augmented reality based on inertial sensor fusion.
Liu et al. Comparison of 2D image models in segmentation performance for 3D laser point clouds
An et al. Image-based positioning system using LED Beacon based on IoT central management
CN116894876A (en) 6-DOF positioning method based on real-time image
CN110348333A (en) Object detecting method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181102