CN109870167A - Vision-based simultaneous localization and mapping method for a driverless vehicle - Google Patents

Vision-based simultaneous localization and mapping method for a driverless vehicle

Info

Publication number
CN109870167A
Authority
CN
China
Prior art keywords
image
map
feature
particle
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811592793.5A
Other languages
Chinese (zh)
Inventor
胡广地
赵鑫
郭峰
刘红星
胡坚耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiaya Automobile Technology Co Ltd
Southwest Jiaotong University
China Electronic Product Reliability and Environmental Testing Research Institute
Original Assignee
Sichuan Jiaya Automobile Technology Co Ltd
Southwest Jiaotong University
China Electronic Product Reliability and Environmental Testing Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiaya Automobile Technology Co Ltd, Southwest Jiaotong University, China Electronic Product Reliability and Environmental Testing Research Institute
Priority to CN201811592793.5A
Publication of CN109870167A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of driverless vehicle positioning and navigation, and relates to a vision-based simultaneous localization and mapping method for a driverless vehicle, comprising the following operating steps: calibrate the parameters of the binocular camera; perform image enhancement; extract image features; match the features of the left and right images to obtain a matched image; perform data association between the matched image and the feature image library; obtain the predicted state vector and the predicted observation by the unscented transform (UT), update the state of the driverless vehicle with the unscented Kalman filter, sample the particles adaptively according to the resampling technique, and update the map according to the result of data association. The invention adopts adaptive resampling and unscented Kalman filtering, avoids the system truncation error produced by truncating higher-order terms, improves the accuracy of the system and the vehicle positioning precision, and builds a more accurate environmental map.

Description

Vision-based simultaneous localization and mapping method for a driverless vehicle
Technical field
The invention belongs to the field of driverless vehicle positioning and navigation, and relates to a vision-based simultaneous localization and mapping method for a driverless vehicle.
Background art
With the development of driverless vehicle technology, major automakers at home and abroad have successively announced driverless vehicle strategies, while Internet enterprises such as Baidu, Huawei and LeEco and chip manufacturers such as NVIDIA have also entered the autonomous driving field one after another. The field has formed development models such as "automaker + university research institute" and "automaker + Internet enterprise". In the future, as low-cost sensors become widespread, driverless vehicles will develop rapidly.
The prerequisite for driverless operation is that the navigation system the vehicle is equipped with can provide the state of the vehicle in real time and accurately, but the vehicle cannot obtain this information from the navigation system in every situation, for example in an unknown environment such as an underground parking garage. In an unknown environment the driverless vehicle has no, or only very little and incomplete, environmental information, and can only obtain environmental information through its on-board sensors in order to build an environmental map and localize itself; this is what is known as simultaneous localization and mapping (SLAM) in an unknown environment. SLAM for driverless vehicles in unknown environments is a hot topic of driverless vehicle research. Traditional SLAM is mainly realized with ranging sensors such as lidar and sonar, but the resolution of these sensors is usually low and they are difficult to apply in complex environments.
The scale-invariant feature transform (SIFT) is a computer vision algorithm used to detect and describe local features in an image: it finds extreme points in scale space and extracts their position, scale and rotation invariants. The description and detection of local image features help with object recognition. SIFT features are based on points of interest of the local appearance of an object and are independent of the size and rotation of the image; their tolerance to changes in illumination, noise and small changes in viewpoint is also quite high. Based on these characteristics they are highly distinctive and relatively easy to extract, so that in a huge feature database objects are recognized easily and with few false matches. The detection rate for partially occluded objects is also quite high; even three or more SIFT features are enough to compute position and orientation. With current computer hardware and a small feature database, recognition speed approaches real time. SIFT features carry a large amount of information and are suitable for fast and accurate matching in large databases.
The unscented Kalman filter (UKF) is the combination of the unscented transform (UT) and the standard Kalman filter framework: the unscented transform allows the nonlinear system equations to be handled within the standard Kalman filter framework derived under the linear assumption. The UKF is based on the UT inside the Kalman linear filtering framework; in the one-step prediction equation, the unscented transform is used to handle the nonlinear propagation of the mean and covariance, which yields the UKF algorithm. The UKF approximates the probability density distribution of the nonlinear function, using a set of deterministically chosen samples to approximate the posterior probability density of the state.
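A minimal sketch of the unscented transform just described, in Python: a Gaussian given by a mean and covariance is propagated through a nonlinear function via 2n+1 deterministic sigma points rather than by linearization. The test function and the kappa value are illustrative assumptions, not values prescribed by the invention.

```python
# Sketch of the unscented transform (UT): propagate a Gaussian (mean, covariance)
# through a nonlinear function f using deterministic sigma points.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)          # scaled matrix square root
    sigma_pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))  # sigma-point weights
    w[0] = kappa / (n + kappa)

    Y = np.array([f(x) for x in sigma_pts])            # propagate each sigma point
    y_mean = w @ Y                                     # transformed mean
    d = Y - y_mean
    y_cov = d.T @ (w[:, None] * d)                     # transformed covariance
    return y_mean, y_cov

# usage: a (range, bearing) Gaussian pushed through a polar-to-Cartesian nonlinearity
mean = np.array([1.0, 0.5])
cov = np.diag([0.1, 0.05])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(unscented_transform(mean, cov, f))
```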
Summary of the invention
For the problem of driverless vehicle positioning and navigation in unknown environments, the purpose of the present invention is to propose a vision-based simultaneous localization and mapping method for a driverless vehicle, in the expectation of solving the above problems.
To achieve the above goals, the present invention provides a vision-based simultaneous localization and mapping method for a driverless vehicle, comprising the following operating steps:
S1: Calibrate the binocular camera parameters;
S2: Perform image enhancement;
S3: Extract image features;
S4: Match the features of the left and right images to obtain a matched image;
S5: Perform data association between the matched image and the feature image library;
If the current matched image matches some visual feature in the feature image library, the feature is used to update the map and the pose of the driverless vehicle; otherwise, the feature is added to the feature image library as a new feature;
S6: When the matched image is associated with the feature image library, the observation model is obtained using the adaptive-resampling UKF-FastSLAM algorithm; the input of the observation model is the feature image obtained by matching, from which the feature map is obtained, and the current environmental map is obtained from this feature map;
Wherein the unscented Kalman filter motion model is obtained from the odometer parameters of the driverless vehicle by using the adaptive-resampling UKF-FastSLAM algorithm; the input of the unscented Kalman filter motion model is the driverless vehicle pose information collected by the odometer; through the prediction and update steps of the unscented Kalman filter, the position of the driverless vehicle and the environmental map are updated.
The method matches the environmental images acquired by the binocular camera against the images in the feature image library and, combined with the driverless vehicle pose information from the odometer, updates the driverless vehicle pose and the environmental map with a FastSLAM algorithm based on adaptive resampling and unscented Kalman filtering, thereby realizing localization and map building of the driverless vehicle in an unknown environment.
The present invention applies machine vision to the driverless vehicle. At the same time, unlike the error introduced by the truncation terms of the traditional Kalman filter, the present invention adopts adaptive resampling and unscented Kalman filtering, avoids the system truncation error produced by truncating higher-order terms, improves the accuracy of the system and the vehicle positioning precision, and builds a more accurate environmental map.
Further, the binocular camera parameter calibration comprises the following operating steps:
S1.1.1: Variable initialization; allocate memory for the intrinsic and extrinsic camera parameters and for the maximum possible number of corners of all images by means of the OpenCV cvCreateMat() function;
S1.1.2: Read a calibration image and perform corner detection; detect and extract the corner locations by means of the OpenCV cvFindChessboardCorners() function; a return value of 1 indicates that corner extraction succeeded; otherwise 0 is returned, indicating that corner extraction failed, i.e. the number of corners extracted from this image is less than the preset number;
S1.1.3: Refine and draw the corner coordinates; the extracted corners are further refined with the OpenCV cvFindCornerSubPix() function to obtain more accurate corner pixel coordinates, and the extracted corners are drawn by calling the OpenCV cvDrawChessboardCorners() function;
S1.1.4: For each image from which corners were successfully extracted, store the coordinates of the corners in the world coordinate system and the sub-pixel coordinates in the image coordinate system;
S1.1.5: After all images have been read, reallocate the data space for all corners according to the number of images from which extraction succeeded, and release the original data space;
S1.1.6: Calibrate; obtain the intrinsic and extrinsic parameters of the camera by means of the OpenCV cvCalibrateCamera2() function;
S1.1.7: For every image from which corners were successfully extracted, compute the rotation matrix, rotation vector and translation vector.
Further, the image enhancement comprises the following operating steps:
S2.1: Image enhancement based on the OpenCV grayscale transformation;
S2.1.1: Open the image with OpenCV and display it in a window;
S2.1.2: Set and adjust a threshold to binarize the image;
S2.1.3: Set and adjust the γ value to apply a gamma transformation to the image;
S2.1.4: Set and adjust the r value to apply a logarithmic transformation to the image;
S2.1.5: Apply the complementary-color transformation to the color image.
Further, the image features are extracted with the scale-invariant feature transform algorithm, and the operating steps are as follows:
S3.1: Scale-space extremum detection: search over image locations at all scales, and identify, by means of a difference-of-Gaussians function, points of interest that are invariant to scale and orientation; preliminarily determine the keypoint locations and their scales;
S3.2: Keypoint localization: at each candidate location, determine the position and scale by fitting a fine model; select keypoints according to their stability, fitting a three-dimensional quadratic function to accurately determine the position and scale of each point, while removing low-contrast keypoints and unstable edge response points;
S3.3: Orientation assignment: assign one or more orientations to each keypoint according to the local image gradient directions; after the keypoint position is determined, determine its scale and orientation and compute the magnitude and direction of the keypoint;
S3.4: Keypoint descriptor: measure the local image gradients at the selected scale in the neighbourhood of each keypoint; to enhance matching robustness, describe each keypoint with multiple seed points, forming a multi-dimensional feature vector.
Further, the left-right image feature matching uses the features extracted in S3 to realize image matching; after generating the scale-invariant feature transform features of the two images, the Euclidean distance between feature vectors is used as the similarity measure for keypoints in the two images.
Further, matching the left and right image features comprises the following steps:
S4.1: Set a ratio threshold a, compute the Euclidean distance from the feature vector of each keypoint in the left image to the feature vector of each keypoint in the right image, and obtain the distance matrix L;
S4.2: Compare each element on the main diagonal of the distance matrix L with the elements of its row and column against the threshold a; if the element value is less than the threshold a, the match succeeds; otherwise the match fails. Map matching is then carried out, i.e. the image obtained in S4 is associated with the feature image library; if the current image matches some visual feature in the feature image library, the feature is used to update the map and the pose of the driverless vehicle; otherwise, the feature is added to the feature image library as a new feature.
Further, obtaining the observation model with the adaptive-resampling UKF-FastSLAM algorithm so as to obtain the feature map, and obtaining the current environmental map from this feature map, comprises the following steps:
S6.2.1: Adaptive resampling:
In the algorithm each particle consists of a path (pose) estimate together with the mean and covariance of every map feature;
Calculate the effective sample number N_eff to determine the degree of sample degeneracy:
N_eff = 1 / Σ_i (w_k^i)^2, where w_k^i is the weight of particle i;
Set an effective sample number threshold and sample adaptively;
Set the effective sample number threshold N_threshold = 0.8n (n is the number of particles); when N_eff < N_threshold, resampling is required;
S6.2.2: Compute the predicted pose vector of each particle at time k+1 by the UT transform;
S6.2.3: Obtain the observations and perform data association with the existing features in the map;
S6.2.4: Use the unscented Kalman filter to update the prior distribution of each particle with the observations, obtaining the importance density function of each particle;
S6.2.5: Path estimation:
Sample from the importance density function of each particle, compute the non-normalized weight w_k^i of each particle and normalize it; by computing the number of effective particles, compare N_eff with the given threshold N_threshold = 0.8n; when N_eff < N_threshold, the samples have degenerated seriously and the particle set must be resampled to obtain a new particle set;
Estimate the motion path of the driverless vehicle with the particle filter; extend the particle set by adding the newly obtained particles to it;
S6.2.6: Each particle has its corresponding map, but not all features of the map need to be updated; for an observation associated with some feature in the map, the corresponding coordinates in the map are updated with the observation information according to the result of data association; an observation not associated with any feature is added to the map as a new feature according to the result of data association;
The posterior probability p(m_t | s^t, z^t, u^t, n^t) of the environmental features is updated with the unscented Kalman filter.
As regards the filtering algorithm of the method, the predicted state vector and the predicted observation are obtained by the UT transform, the state of the driverless vehicle is updated with the unscented Kalman filter (UKF), the particles are sampled adaptively according to the resampling technique, and the map is updated according to the result of data association.
The present invention is described further below with reference to the accompanying drawings and specific embodiments. Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from the description, or will be learned through practice of the invention.
Description of the drawings
The accompanying drawings, which form a part of the present invention, are used to assist in understanding the present invention; the content provided in the drawings and the related explanations may be used to explain the present invention, but do not constitute an undue limitation of the present invention. In the drawings:
Fig. 1 is the flow chart of the vision-based simultaneous localization and mapping method for a driverless vehicle of the present invention;
Fig. 2 is the driverless vehicle kinematics model diagram of the present invention;
Fig. 3 is the driverless vehicle observation model diagram of the present invention;
Fig. 4 is the flow chart of the adaptive-resampling UKF-FastSLAM algorithm used by the present invention;
Fig. 5 is the flow chart of the UKF algorithm used by the present invention.
Detailed description of the embodiments
The present invention is explained clearly and completely below with reference to the accompanying drawings; those of ordinary skill in the art will be able to realize the present invention on the basis of these explanations. Before describing the present invention in conjunction with the drawings, it should be particularly noted that:
The technical solutions and technical features provided in each part of the present invention, including the following description, may be combined with one another provided they do not conflict.
In addition, the embodiments of the present invention referred to in the following description are generally only a subset of the embodiments of the invention, not all of them; therefore, based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labour shall fall within the scope of protection of the present invention.
Regarding terms and units in the present invention: the term "comprising" and any variation thereof in the description, claims and related parts of this specification are intended to cover a non-exclusive inclusion.
The present invention provides a vision-based simultaneous localization and mapping method for a driverless vehicle, comprising the following operating steps:
S1: Camera parameter calibration: the camera parameters are calibrated with the OpenCV-based camera calibration technique. Unlike traditional calibration methods, whose precision is not high, the OpenCV-based camera calibration technique takes the radial and tangential distortion of the lens into account when computing the intrinsic and extrinsic parameters of the camera and defines the new normalized coordinate values more precisely, so that the image pixel coordinates of the finally imaged points are more accurate.
S1.1: Camera calibration based on OpenCV (a code sketch follows these steps):
S1.1.1: Variable initialization. Allocate memory for the intrinsic and extrinsic camera parameters and for the maximum possible number of corners of all images by means of the OpenCV cvCreateMat() function.
S1.1.2: Read a calibration image and perform corner detection. Detect and extract the corner locations by means of the OpenCV cvFindChessboardCorners() function. A return value of 1 indicates that corner extraction succeeded; otherwise 0 is returned, indicating that corner extraction failed, i.e. the number of corners extracted from this image is less than the preset number.
S1.1.3: Refine and draw the corner coordinates. The extracted corners are further refined with the OpenCV cvFindCornerSubPix() function to obtain more accurate corner pixel coordinates, and the extracted corners are drawn by calling the OpenCV cvDrawChessboardCorners() function.
S1.1.4: For each image from which corners were successfully extracted, store the coordinates of the corners in the world coordinate system and the sub-pixel coordinates in the image coordinate system.
S1.1.5: After all images have been read, reallocate the data space for all corners according to the number of images from which extraction succeeded, and release the original data space.
S1.1.6: Calibrate. Obtain the intrinsic and extrinsic parameters of the camera by means of the OpenCV cvCalibrateCamera2() function.
S1.1.7: For every image from which corners were successfully extracted, compute the rotation matrix, rotation vector and translation vector.
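The following is a minimal sketch of steps S1.1.1-S1.1.7 using the modern OpenCV Python API (cv2.findChessboardCorners, cv2.cornerSubPix, cv2.calibrateCamera); the patent itself names the legacy C API (cvCreateMat, cvCalibrateCamera2, ...). The board geometry and file names are illustrative assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner chessboard corners per row/column (assumed)
square = 0.025          # square edge length in metres (assumed)

# World coordinates of the board corners on the plane Z = 0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for fname in glob.glob("calib/left_*.png"):            # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)      # S1.1.2
    if not found:
        continue                                       # corner extraction failed
    corners = cv2.cornerSubPix(                        # S1.1.3: sub-pixel refinement
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)                            # S1.1.4: world coordinates
    img_points.append(corners)                         #          and sub-pixel coordinates

# S1.1.6-S1.1.7: intrinsics, distortion and per-image rotation/translation vectors
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", rms)
```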
S2: Image enhancement: image enhancement uses the OpenCV-based grayscale transformation technique. The purpose of the grayscale transformation is to increase the contrast between the target and the background so that the target is easier to segment. After setting and adjusting a threshold to binarize the image, the γ value is first set and adjusted to apply a gamma transformation, then the r value is set and adjusted to apply a logarithmic transformation, and finally the complementary-color transformation is applied to the color image.
S2.1: Image enhancement based on the OpenCV grayscale transformation (a code sketch follows these steps):
S2.1.1: Open the image with OpenCV and display it in a window.
S2.1.2: Set and adjust a threshold to binarize the image.
S2.1.3: Set and adjust the γ value to apply a gamma transformation to the image.
S2.1.4: Set and adjust the r value to apply a logarithmic transformation to the image.
S2.1.5: Apply the complementary-color transformation to the color image.
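A minimal sketch of steps S2.1.1-S2.1.5 with the OpenCV Python API; the threshold, γ and r values are illustrative and meant to be tuned as described above, and the file name is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("frame_left.png")                          # S2.1.1: open the image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

_, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)   # S2.1.2: binarization

gamma = 0.8                                                  # S2.1.3: gamma transform
gamma_img = (np.power(gray / 255.0, gamma) * 255).astype(np.uint8)

r = 1.0                                                      # S2.1.4: log transform
log_img = r * np.log1p(gray.astype(np.float32))
log_img = cv2.normalize(log_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

complement = 255 - img                                       # S2.1.5: complementary color

for name, out in [("binary", binary), ("gamma", gamma_img),
                  ("log", log_img), ("complement", complement)]:
    cv2.imshow(name, out)                                    # display in a window
cv2.waitKey(0)
```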
S3: Feature extraction: the flow of the scale-invariant feature transform (SIFT) algorithm is as follows (a code sketch follows these steps):
S3.1: Scale-space extremum detection: search over image locations at all scales, and identify, by means of a difference-of-Gaussians function, points of interest that are invariant to scale and orientation. Preliminarily determine the keypoint locations and their scales.
S3.2: Keypoint localization: at each candidate location, determine the position and scale by fitting a fine model. Select keypoints according to their stability, fitting a three-dimensional quadratic function to accurately determine the position and scale of each point, while removing low-contrast keypoints and unstable edge response points to enhance matching stability and improve noise immunity.
S3.3: Orientation assignment: assign one or more orientations to each keypoint according to the local image gradient directions; after the keypoint position is determined, its scale and orientation still need to be determined, and the magnitude and direction of the keypoint are computed.
S3.4: Keypoint descriptor: measure the local image gradients at the selected scale in the neighbourhood of each keypoint. To enhance matching robustness, each keypoint is described with 16 seed points, so each keypoint produces 128 values, finally forming a 128-dimensional feature vector.
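A minimal sketch of SIFT extraction (S3) for the left and right camera images using OpenCV's built-in SIFT implementation (cv2.SIFT_create, available in recent OpenCV releases); the file names are illustrative assumptions.

```python
import cv2

left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()   # DoG extrema, keypoint localization, orientation, descriptor
kp_l, desc_l = sift.detectAndCompute(left, None)    # desc_l: N_l x 128 array
kp_r, desc_r = sift.detectAndCompute(right, None)   # desc_r: N_r x 128 array
print(len(kp_l), "left keypoints,", len(kp_r), "right keypoints")
```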
S4: Left-right image feature matching: the left-right image feature matching uses the features extracted in S3 to realize image matching. After generating the SIFT features of the two images, the Euclidean distance between feature vectors is used as the similarity measure for keypoints in the two images (a code sketch follows these steps).
S4.1: Set a ratio threshold a, compute the Euclidean distance from the feature vector of each keypoint in the left image to the feature vector of each keypoint in the right image, and obtain the distance matrix L.
S4.2: Compare each element on the main diagonal of the distance matrix L with the elements of its row and column against the threshold a; if the element value is less than the threshold a, the match succeeds; otherwise the match fails.
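The following is a minimal sketch of S4.1-S4.2: the Euclidean distance matrix L between the left and right descriptors is built, and a pair is accepted only if its distance is clearly smaller than the competing distances in its row. The threshold a = 0.8 and the nearest-versus-second-nearest comparison are one reading of the comparison rule described above and are assumptions here.

```python
import numpy as np

def match_features(desc_l, desc_r, a=0.8):
    # S4.1: distance matrix L (rows: left keypoints, columns: right keypoints)
    L = np.linalg.norm(desc_l[:, None, :] - desc_r[None, :, :], axis=2)

    matches = []
    for i in range(L.shape[0]):
        order = np.argsort(L[i])              # nearest and second-nearest candidates
        best, second = order[0], order[1]
        # S4.2: accept only if the best distance beats the runner-up by the ratio a
        if L[i, best] < a * L[i, second]:
            matches.append((i, best))
    return matches, L
```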
S5: Map matching: the image obtained in S4 is associated with the feature image library; if the current image matches some visual feature in the feature image library, the feature is used to update the map and the pose of the driverless vehicle; otherwise, the feature is added to the feature image library as a new feature.
S6: Adaptive-resampling UKF-FastSLAM algorithm. The adaptive-resampling UKF-FastSLAM algorithm is the core of the system. Resampling the particle set with the adaptive resampling technique can effectively improve the robustness of the system; at the same time, considering the nonlinearity of the driverless vehicle motion model, the unscented Kalman filter is used, which avoids the system truncation error produced by discarding higher-order terms and improves the accuracy of the system. The input of the unscented Kalman filter motion model is the driverless vehicle pose information collected by the odometer, and the input of the observation model is the feature image obtained by matching; through the prediction and update steps of the filter, the position of the driverless vehicle and the environmental map are updated.
S6.1: The driverless vehicle motion model and observation model are the standard models of the FastSLAM algorithm;
S6.2: Adaptive-resampling UKF-FastSLAM algorithm (the algorithm flow is shown in Fig. 4):
S6.2.1: Adaptive resampling (a code sketch of the criterion follows this step):
In the algorithm each particle consists of a path (pose) estimate together with the mean and covariance of every map feature.
Calculate the effective sample number N_eff to determine the degree of sample degeneracy:
N_eff = 1 / Σ_i (w_k^i)^2, where w_k^i is the weight of particle i.
Set an effective sample number threshold and sample adaptively.
Set the effective sample number threshold N_threshold = 0.8n (n is the number of particles); when N_eff < N_threshold, resampling is required, so the algorithm can decide adaptively, according to the concrete condition of the samples, whether to resample.
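A minimal sketch of the adaptive resampling criterion of S6.2.1: the effective sample number is computed from the normalized particle weights and compared with the 0.8n threshold stated above.

```python
import numpy as np

def needs_resampling(weights, ratio=0.8):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalized particle weights w_k^i
    n_eff = 1.0 / np.sum(w ** 2)         # effective sample number N_eff
    n_threshold = ratio * len(w)         # N_threshold = 0.8 * n
    return n_eff < n_threshold, n_eff
```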
S6.2.2: Compute the predicted pose vector of each particle at time k+1 by the UT transform;
S6.2.3: Obtain the observations and perform data association with the existing features in the map;
S6.2.4: Use the UKF to update the prior distribution of each particle with the observations, obtaining the importance density function of each particle;
S6.2.5: Path estimation (a code sketch follows this step):
Sample from the importance density function of each particle, compute the non-normalized weight w_k^i of each particle and normalize it; by computing the number of effective particles, compare N_eff with the given threshold N_threshold = 0.8n; when N_eff < N_threshold, the samples have degenerated seriously and the particle set must be resampled to obtain a new particle set.
Estimate the motion path of the driverless vehicle with the particle filter. Extend the particle set by adding the newly obtained particles to it.
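The following is a minimal sketch of the path-estimation step S6.2.5: a pose is drawn from each particle's importance density, the weights are normalized, and the particle set is resampled when the criterion of S6.2.1 fires. The callables sample_importance and weight_update stand in for the UKF quantities computed in S6.2.2-S6.2.4, and the use of systematic resampling is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_path(particles, weights, sample_importance, weight_update):
    # draw a new pose for every particle from its importance density function
    poses = np.array([sample_importance(p, rng) for p in particles])
    # non-normalized weights, then normalization
    w = np.array([weight_update(p, s) for p, s in zip(particles, poses)])
    w = w / w.sum()

    n_eff = 1.0 / np.sum(w ** 2)
    if n_eff < 0.8 * len(w):                       # serious sample degeneracy
        # systematic resampling of the particle indices
        positions = (rng.random() + np.arange(len(w))) / len(w)
        idx = np.searchsorted(np.cumsum(w), positions)
        idx = np.minimum(idx, len(w) - 1)
        poses, w = poses[idx], np.full(len(w), 1.0 / len(w))
    return poses, w
```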
S6.2.6: Map estimation update (a code sketch follows this step):
Each particle has its corresponding map, but not all features of the map need to be updated. According to the result of data association, for an observation associated with some feature in the map, the corresponding coordinates in the map are updated with the observation information; an observation not associated with any feature is added to the map as a new feature.
The posterior probability p(m_t | s^t, z^t, u^t, n^t) of the environmental features is updated with the UKF. Assuming that the map estimate of the i-th particle is expressed as the set of pairs (μ_j,k^i, Σ_j,k^i), where μ_j,k^i and Σ_j,k^i are respectively the mean and variance of the j-th feature at time k, updating the map estimate amounts to updating this mean and variance.
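A minimal sketch of the per-particle map update of S6.2.6: an observation associated with an existing feature refines that feature's Gaussian (mean, covariance), and an unassociated observation is inserted as a new feature. The callables ukf_update and init_feature stand in for the patent's unscented measurement update and feature initialization and are assumptions here.

```python
def update_particle_map(feature_map, observations, associations,
                        pose, ukf_update, init_feature):
    """feature_map: dict feature_id (int) -> (mean, cov);
    associations: dict observation index -> feature_id (missing if unassociated)."""
    for k, z in enumerate(observations):
        j = associations.get(k)
        if j is not None and j in feature_map:
            # only features matched by data association are updated
            feature_map[j] = ukf_update(feature_map[j], z, pose)
        else:
            # no associated feature: add the observation as a new map feature
            new_id = max(feature_map, default=-1) + 1
            feature_map[new_id] = init_feature(z, pose)
    return feature_map
```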
The relevant content of the present invention has been explained above. Those of ordinary skill in the art will be able to realize the present invention on the basis of these explanations. Based on the above content of the invention, all other embodiments obtained by those of ordinary skill in the art without creative labour shall fall within the scope of protection of the present invention.

Claims (7)

1. A vision-based simultaneous localization and mapping method for a driverless vehicle, characterized by comprising the following operating steps:
S1: Calibrate the binocular camera parameters;
S2: Perform image enhancement;
S3: Extract image features;
S4: Match the features of the left and right images to obtain a matched image;
S5: Perform data association between the matched image and the feature image library;
If the current matched image matches some visual feature in the feature image library, the feature is used to update the map and the pose of the driverless vehicle; otherwise, the feature is added to the feature image library as a new feature;
S6: When the matched image is associated with the feature image library, the observation model is obtained using the adaptive-resampling UKF-FastSLAM algorithm; the input of the observation model is the feature image obtained by matching, from which the feature map is obtained, and the current environmental map is obtained from this feature map;
wherein the unscented Kalman filter motion model is obtained from the odometer parameters of the driverless vehicle by using the adaptive-resampling UKF-FastSLAM algorithm; the input of the unscented Kalman filter motion model is the driverless vehicle pose information collected by the odometer; and the position of the driverless vehicle and the environmental map are updated through the prediction and update steps of the unscented Kalman filter.
2. The simultaneous localization and mapping method according to claim 1, characterized in that the binocular camera parameter calibration comprises the following operating steps:
S1.1.1: Variable initialization; allocate memory for the intrinsic and extrinsic camera parameters and for the maximum possible number of corners of all images by means of the OpenCV cvCreateMat() function;
S1.1.2: Read a calibration image and perform corner detection; detect and extract the corner locations by means of the OpenCV cvFindChessboardCorners() function; a return value of 1 indicates that corner extraction succeeded; otherwise 0 is returned, indicating that corner extraction failed, i.e. the number of corners extracted from this image is less than the preset number;
S1.1.3: Refine and draw the corner coordinates; the extracted corners are further refined with the OpenCV cvFindCornerSubPix() function to obtain more accurate corner pixel coordinates, and the extracted corners are drawn by calling the OpenCV cvDrawChessboardCorners() function;
S1.1.4: For each image from which corners were successfully extracted, store the coordinates of the corners in the world coordinate system and the sub-pixel coordinates in the image coordinate system;
S1.1.5: After all images have been read, reallocate the data space for all corners according to the number of images from which extraction succeeded, and release the original data space;
S1.1.6: Calibrate; obtain the intrinsic and extrinsic parameters of the camera by means of the OpenCV cvCalibrateCamera2() function;
S1.1.7: For every image from which corners were successfully extracted, compute the rotation matrix, rotation vector and translation vector.
3. The vision-based simultaneous localization and mapping method for a driverless vehicle according to claim 1, characterized in that the image enhancement comprises the following operating steps:
S2.1: Image enhancement based on the OpenCV grayscale transformation;
S2.1.1: Open the image with OpenCV and display it in a window;
S2.1.2: Set and adjust a threshold to binarize the image;
S2.1.3: Set and adjust the γ value to apply a gamma transformation to the image;
S2.1.4: Set and adjust the r value to apply a logarithmic transformation to the image;
S2.1.5: Apply the complementary-color transformation to the color image.
4. The simultaneous localization and mapping method according to claim 1, characterized in that the image features are extracted with the scale-invariant feature transform algorithm, with the following operating steps:
S3.1: Scale-space extremum detection: search over image locations at all scales, and identify, by means of a difference-of-Gaussians function, points of interest that are invariant to scale and orientation; preliminarily determine the keypoint locations and their scales;
S3.2: Keypoint localization: at each candidate location, determine the position and scale by fitting a fine model; select keypoints according to their stability, fitting a three-dimensional quadratic function to accurately determine the position and scale of each point, while removing low-contrast keypoints and unstable edge response points;
S3.3: Orientation assignment: assign one or more orientations to each keypoint according to the local image gradient directions; after the keypoint position is determined, determine its scale and orientation and compute the magnitude and direction of the keypoint;
S3.4: Keypoint descriptor: measure the local image gradients at the selected scale in the neighbourhood of each keypoint; describe each keypoint with multiple seed points, forming a multi-dimensional feature vector.
5. The simultaneous localization and mapping method according to claim 1, characterized in that the left-right image feature matching uses the features extracted in S3 to realize image matching; after generating the scale-invariant feature transform features of the two images, the Euclidean distance between feature vectors is used as the similarity measure for keypoints in the two images.
6. The simultaneous localization and mapping method according to claim 5, characterized in that matching the left and right image features comprises the following steps:
S4.1: Set a ratio threshold a, compute the Euclidean distance from the feature vector of each keypoint in the left image to the feature vector of each keypoint in the right image, and obtain the distance matrix L;
S4.2: Compare each element on the main diagonal of the distance matrix L with the elements of its row and column against the threshold a; if the element value is less than the threshold a, the match succeeds; otherwise the match fails, and map matching is carried out afterwards.
7. The simultaneous localization and mapping method according to claim 1, characterized in that obtaining the observation model with the adaptive-resampling UKF-FastSLAM algorithm so as to obtain the feature map, and obtaining the current environmental map from this feature map, comprises the following steps:
S6.2.1: Adaptive resampling:
In the algorithm each particle consists of a path (pose) estimate together with the mean and covariance of every map feature;
Calculate the effective sample number N_eff to determine the degree of sample degeneracy:
N_eff = 1 / Σ_i (w_k^i)^2, where w_k^i is the weight of particle i;
Set an effective sample number threshold and sample adaptively;
Set the effective sample number threshold N_threshold = 0.8n (n is the number of particles); when N_eff < N_threshold, resampling is required;
S6.2.2: Compute the predicted pose vector of each particle at time k+1 by the UT transform;
S6.2.3: Obtain the observations and perform data association with the existing features in the map;
S6.2.4: Use the unscented Kalman filter to update the prior distribution of each particle with the observations, obtaining the importance density function of each particle;
S6.2.5: Path estimation:
Sample from the importance density function of each particle, compute the non-normalized weight w_k^i of each particle and normalize it; by computing the number of effective particles, compare N_eff with the given threshold N_threshold = 0.8n; when N_eff < N_threshold, the particle set must be resampled to obtain a new particle set;
Estimate the motion path of the driverless vehicle with the particle filter; extend the particle set by adding the newly obtained particles to it;
S6.2.6: For an observation associated with some feature in the map, the corresponding coordinates in the map are updated with the observation information according to the result of data association; an observation not associated with any feature is added to the map as a new feature according to the result of data association;
The posterior probability p(m_t | s^t, z^t, u^t, n^t) of the environmental features is updated with the unscented Kalman filter.
CN201811592793.5A 2018-12-25 2018-12-25 Vision-based simultaneous localization and mapping method for a driverless vehicle Pending CN109870167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811592793.5A CN109870167A (en) 2018-12-25 2018-12-25 Vision-based simultaneous localization and mapping method for a driverless vehicle

Publications (1)

Publication Number Publication Date
CN109870167A 2019-06-11

Family

ID=66917164





Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102944224A (en) * 2012-11-09 2013-02-27 大连理工大学 Automatic environmental perception system for remotely piloted vehicle and work method for automatic environmental perception system
CN104376581A (en) * 2014-12-02 2015-02-25 北京航空航天大学 Gaussian mixture unscented particle filter algorithm employing adaptive resampling
CN105678787A (en) * 2016-02-03 2016-06-15 西南交通大学 Heavy-duty lorry driving barrier detection and tracking method based on binocular fisheye camera
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN106871904A (en) * 2017-03-02 2017-06-20 南阳师范学院 A kind of mobile robot code-disc positioning correction method based on machine vision
CN107589748A (en) * 2017-08-21 2018-01-16 江苏科技大学 AUV autonomous navigation methods based on UnscentedFastSLAM algorithms

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qu Liping: "Research on Key Technologies of Simultaneous Localization and Mapping for Mobile Robots", China Excellent Doctoral and Master's Dissertations Full-text Database (Doctoral), Information Science and Technology Series *
Yang Xuemeng et al.: "A Survey of Key Problems and Solutions of Mobile Robot SLAM", 《计算机***应用》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110412472A (en) * 2019-09-02 2019-11-05 西北工业大学 A kind of battery charge state estimation method based on the filtering of normal state gamma
CN110702093A (en) * 2019-09-27 2020-01-17 五邑大学 Particle filter-based positioning method and device, storage medium and robot
CN112749577A (en) * 2019-10-29 2021-05-04 北京初速度科技有限公司 Parking space detection method and device
CN112749577B (en) * 2019-10-29 2023-09-22 北京魔门塔科技有限公司 Parking space detection method and device
CN111189454A (en) * 2020-01-09 2020-05-22 郑州轻工业大学 Unmanned vehicle SLAM navigation method based on rank Kalman filtering
CN113190564A (en) * 2020-01-14 2021-07-30 阿里巴巴集团控股有限公司 Map updating system, method and device
CN111796600A (en) * 2020-07-22 2020-10-20 中北大学 Object recognition and tracking system based on quadruped robot
CN113537314A (en) * 2021-06-30 2021-10-22 上海西井信息科技有限公司 Longitudinal positioning method and device for unmanned vehicle, electronic equipment and storage medium
CN114136305A (en) * 2021-12-01 2022-03-04 纵目科技(上海)股份有限公司 Method, system and equipment for creating map with multiple map layers and computer-readable storage medium
CN114136305B (en) * 2021-12-01 2024-05-31 纵目科技(上海)股份有限公司 Multi-layer map creation method, system, equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20240209