CN106931969A - A robot three-dimensional navigation map generation method based on Kinect - Google Patents

A robot three-dimensional navigation map generation method based on Kinect

Info

Publication number
CN106931969A
CN106931969A (application CN201511009650.3A)
Authority
CN
China
Prior art keywords
kinect
robot
face
navigation map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511009650.3A
Other languages
Chinese (zh)
Inventor
杨亮
矫双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Henghe Sand Technology Development Co Ltd
Original Assignee
Heilongjiang Henghe Sand Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang Henghe Sand Technology Development Co Ltd filed Critical Heilongjiang Henghe Sand Technology Development Co Ltd
Priority to CN201511009650.3A priority Critical patent/CN106931969A/en
Publication of CN106931969A publication Critical patent/CN106931969A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • G01C21/206Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A robot three-dimensional navigation map generation method based on Kinect: raw Kinect data are collected; Laplacian smoothing and preliminary surface alignment are performed; then segmentation-based surface reconstruction and view-based texture mapping are applied. The invention guides the movement of indoor robots for the path planning and obstacle avoidance of indoor mobile robots, has the advantages of inexpensive equipment and simple operation, and is suitable for ordinary household users.

Description

A robot three-dimensional navigation map generation method based on Kinect
Technical field
The present invention relates to path planning and obstacle avoidance technology for indoor mobile robots, and in particular to a three-dimensional navigation map generation method.
Background technology
Map building technology plays a very important role in the path planning of mobile robots. Two-dimensional indoor map generation methods are described first, including grid methods, sonar methods, stereo vision methods and the like. With the development of science and technology, three-dimensional map building has become a research hotspot. Kinect is a motion-sensing device with powerful image processing capability. The indoor environment is scanned with Kinect, and after the acquired data are processed by a computer, a three-dimensional map of the indoor environment is obtained.
Summary of the invention
The object of the invention is to provide a three-dimensional navigation map generation method for indoor mobile robots that effectively guides the movement of indoor robots.
A robot three-dimensional navigation map generation method based on Kinect, characterized by: collecting raw Kinect data; performing Laplacian smoothing; preliminary surface alignment; segmentation-based surface reconstruction; and view-based texture mapping.
In the described method, the Kinect raw data acquisition is as follows: the data collected by Kinect are first filled into a buffer; the depth data therein are then added to an accumulation buffer, added to the data in the original buffer, and averaged; if this average is below a preset threshold the data are accumulated, otherwise they are not; the accumulation count is set to 5 here.
In the described method, the Laplacian smoothing is as follows: based on the three-dimensional configuration of the surrounding vertices under the umbrella operator, each vertex is shifted towards the centroid of its neighbours. Then, for each vertex on the surface, the spatial position of the point is recomputed from its relation to the surrounding points and their positions in space. The smoothing formula is shown in formula (1):
\bar{x}_i = \frac{1}{N} \sum_{j=1}^{N} x_j    (1)
In the above formula, \bar{x}_i is the new position of the i-th vertex, and N is the number of vertices surrounding the point whose spatial position is being recomputed; the range of N can be determined experimentally.
In the described method, the preliminary surface alignment is as follows: a semi-automatic surface alignment method is used: a datum surface is determined first, and then the rough outline of the object is determined from the front and back surfaces.
In the described method, the segmentation-based surface reconstruction is as follows: an octree is first generated from the point cloud, and the outer-surface data based on the point cloud are then obtained; the point-cloud-to-surface reconstruction converts the data collected by Kinect into conventional surface data, yielding several independent faces of an object; these independent faces are then merged, and after merging the outer-surface outline of the point cloud is obtained.
In the described method, the view-based texture mapping is as follows: the view direction is the unit vector from the surface point in question to the camera position, and the face-n vector is the unit vector from the vertex on the object surface to the virtual Kinect position; when a particular surface is captured, the virtual Kinect direction is the direction from the current point to the Kinect. wn is the cosine of the angle between the view direction and the direction of face n. When the final texture colour is computed, a weighted average is taken so that the colour of each vertex is determined jointly by the textures of all faces.
The essence of the technical scheme is as follows: raw data acquisition is first performed with Kinect to obtain environment depth data; Laplacian smoothing is then applied to the collected image data to reduce the influence of noise; after the preliminary alignment of the surfaces, a precise transformation matrix is obtained with the iterative closest point (ICP) algorithm to bring the data into a common coordinate frame; segmentation-based surface reconstruction and texture mapping are then performed so that the model is more realistic, and finally the three-dimensional model is exported.
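The patent does not disclose which ICP variant is used. The following is a minimal point-to-point ICP sketch in Python (NumPy/SciPy), assuming two roughly pre-aligned N x 3 point clouds; it only illustrates how the precise transformation matrix could be refined, not the exact implementation of the invention.

import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=20):
    """Refine the rigid transform (R, t) that aligns `source` onto `target`."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                       # nearest-neighbour index of the target cloud
    for _ in range(iterations):
        _, idx = tree.query(src)                 # 1. closest target point for every source point
        matched = target[idx]
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)  # 2. cross-covariance for the Kabsch/SVD solution
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t                      # 3. apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

In practice the preliminary surface alignment of Step 3 would supply the initial guess, and the loop would stop once the mean residual falls below a tolerance.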
The advantages of the technical scheme are:
Equipment cost is low, operation is simple, and the method is suitable for ordinary household users.
Brief description of the drawings
Fig. 1 is a flow chart of building the geomagnetic navigation reference map;
Fig. 2 is a plan of the indoor environment;
Specific embodiment
Step 1: Collect raw Kinect data:
The data collected by Kinect are first filled into a buffer; the depth data therein are then added to an accumulation buffer, added to the data in the original buffer, and averaged. If this average is below a preset threshold the data are accumulated, otherwise they are not; the accumulation count is set to 5 here.
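The patent only fixes the accumulation count (5) and states that a preset threshold decides whether new data are accumulated; the frame source `get_depth_frame`, the threshold value and the mean-difference comparison below are assumptions. A minimal sketch of such temporal depth averaging:

import numpy as np

MAX_FRAMES = 5        # accumulation count given in the patent
THRESHOLD_MM = 50.0   # assumed acceptance threshold, in millimetres

def accumulate_depth(get_depth_frame, max_attempts=20):
    """Average up to MAX_FRAMES depth frames, skipping frames that deviate too much.
    `get_depth_frame` is a hypothetical callback returning an H x W depth array."""
    accumulated = get_depth_frame().astype(np.float64)   # first frame fills the buffer
    count = 1
    for _ in range(max_attempts):
        if count >= MAX_FRAMES:
            break
        frame = get_depth_frame().astype(np.float64)
        mean_diff = np.abs(frame - accumulated / count).mean()
        if mean_diff < THRESHOLD_MM:   # threshold test described in the text
            accumulated += frame       # accept: add the frame to the accumulation buffer
            count += 1
        # frames failing the test are simply discarded
    return accumulated / count         # averaged depth image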
Step 2: Laplacian smoothing:
Based on the three-dimensional configuration of the surrounding vertices under the umbrella operator, each vertex is shifted towards the centroid of its neighbours. Then, for each vertex on the surface, the spatial position of the point is recomputed from its relation to the surrounding points and their positions in space. The smoothing formula is shown in formula (1):
\bar{x}_i = \frac{1}{N} \sum_{j=1}^{N} x_j    (1)
In the above formula, \bar{x}_i is the new position of the i-th vertex, and N is the number of vertices surrounding the point whose spatial position is being recomputed.
The value of N can be neither too large nor too small; either extreme has a negative effect. If N is too small, the smoothing cannot be guaranteed to meet practical requirements; if N is too large, fine detail is handled poorly. The choice of N therefore lies within a certain range in which the expected effect can be achieved, and this range can be determined experimentally.
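A minimal sketch of formula (1), assuming the neighbourhood of each vertex (its N surrounding vertices) has already been extracted from the mesh; the neighbour lists and the number of smoothing passes are illustrative assumptions.

import numpy as np

def laplacian_smooth(vertices, neighbors, passes=1):
    """vertices: V x 3 array of positions; neighbors: list of index lists, one per vertex."""
    v = vertices.copy()
    for _ in range(passes):
        smoothed = v.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                smoothed[i] = v[nbrs].mean(axis=0)   # formula (1): centroid of the N neighbours
        v = smoothed
    return v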
Step 3: Preliminary surface alignment:
A semi-automatic surface alignment method is used: a datum surface is determined first, and then the rough outline of the object is determined from the front and back surfaces.
When computing the transformation matrix of the back surface relative to the datum surface, the centre of the back surface's bounding box is first translated to the centre of the rear rectangle of the bounding box of the datum surface. The back surface is then rotated 180 degrees about the z-axis, and finally the surface contour is extracted. The contour is extracted by projecting the three-dimensional points obtained in the previous steps onto the XY plane, yielding a planar mask image. After the front and back surface contours have been extracted, the back surface is repeatedly rotated and translated so that its contour best matches the contour of the front surface; the distance here is computed as the distance between the point clouds in Cartesian coordinates.
When the projected contour of the combined front-and-back surface on the YZ plane is computed, the virtual Kinect position is taken as the position of the Kinect when the left surface was captured during data acquisition; the virtual Kinect position for the right side is computed similarly. Once the virtual Kinect positions are obtained, the two-dimensional image of the three-dimensional object projected onto the projection plane can be computed through the inverse projection matrix, and with this two-dimensional image the projections of the left and right surfaces can be computed by the same projection method used in the front-back alignment.
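A minimal sketch of the contour step under stated assumptions: the cloud is projected onto the XY plane as a set of occupied grid cells (the planar mask), the back surface can be rotated 180 degrees about the z-axis, and alignment quality is scored by the mean nearest-point distance between two projected contours. The grid resolution and the use of a mean distance are assumptions; the patent only specifies point-cloud distances in Cartesian coordinates.

import numpy as np
from scipy.spatial import cKDTree

def project_to_xy(points, resolution=0.01):
    """Quantise an N x 3 cloud to occupied XY grid cells (a planar mask)."""
    cells = np.unique(np.floor(points[:, :2] / resolution).astype(int), axis=0)
    return cells * resolution

def rotate_z_180(points, center):
    """Rotate a cloud 180 degrees about the z-axis through `center` (length-3 array)."""
    out = points.copy()
    out[:, :2] = 2.0 * center[:2] - out[:, :2]
    return out

def contour_distance(mask_a, mask_b):
    """Mean nearest-neighbour distance between two projected contours."""
    d, _ = cKDTree(mask_b).query(mask_a)
    return d.mean()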
Step 4: Segmentation-based surface reconstruction:
An octree is first generated from the point cloud, and the outer-surface data based on the point cloud are then obtained; the point-cloud-to-surface reconstruction converts the data collected by Kinect into conventional surface data, yielding several independent faces of an object; these independent faces are then merged, and after merging the outer-surface outline of the point cloud is obtained.
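The patent does not detail the octree construction or the face-merging step. The sketch below illustrates the underlying idea at a single octree leaf level: the cloud is voxelised and only "outer surface" voxels are kept, i.e. occupied voxels with at least one empty 6-neighbour; the voxel size is an assumption.

import numpy as np

def outer_surface_voxels(points, voxel_size=0.05):
    """Return the centres of occupied voxels that lie on the outer surface of the cloud."""
    occupied = set(map(tuple, np.floor(points / voxel_size).astype(int)))
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surface = [v for v in occupied
               if any((v[0] + dx, v[1] + dy, v[2] + dz) not in occupied
                      for dx, dy, dz in offsets)]
    return (np.array(surface) + 0.5) * voxel_size   # voxel centres in world units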
Step 5: View-based texture mapping:
The view direction is the unit vector from the surface point in question to the camera position, and the face-n vector is the unit vector from the vertex on the object surface to the virtual Kinect position; when a particular surface is captured, the virtual Kinect direction is the direction from the current point to the Kinect. wn is the cosine of the angle between the view direction and the direction of face n. When the final texture colour is computed, a weighted average is taken so that the colour of each vertex is determined jointly by the textures of all faces.
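A minimal sketch of the weighted blend, assuming that for each vertex the colour it receives from every captured texture and the corresponding virtual Kinect position are already known. Clamping negative cosines to zero (ignoring textures captured from behind the point) is an assumption; the patent only states that wn is the cosine of the angle and that a weighted average is taken.

import numpy as np

def blend_vertex_color(vertex, camera_pos, kinect_positions, texture_colors):
    """vertex, camera_pos: (3,) arrays; kinect_positions: K x 3 virtual Kinect positions;
    texture_colors: K x 3 colour samples of this vertex, one per captured texture."""
    view = camera_pos - vertex
    view /= np.linalg.norm(view)                          # unit view direction
    dirs = kinect_positions - vertex
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # per-texture "face n" directions
    w = np.clip(dirs @ view, 0.0, None)                   # wn = cos(angle), negatives clamped to 0
    if w.sum() == 0.0:
        return texture_colors.mean(axis=0)                # fallback when no texture faces the view
    return (w[:, None] * texture_colors).sum(axis=0) / w.sum()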

Claims (6)

1. A robot three-dimensional navigation map generation method based on Kinect, characterized by: collecting raw Kinect data; performing Laplacian smoothing; preliminary surface alignment; segmentation-based surface reconstruction; and view-based texture mapping.
2. The robot three-dimensional navigation map generation method based on Kinect according to claim 1, characterized in that the Kinect raw data acquisition is: the data collected by Kinect are first filled into a buffer; the depth data therein are then added to an accumulation buffer, added to the data in the original buffer, and averaged; if this average is below a preset threshold the data are accumulated, otherwise they are not; the accumulation count is set to 5 here.
3. The robot three-dimensional navigation map generation method based on Kinect according to claim 1, characterized in that the Laplacian smoothing is: based on the three-dimensional configuration of the surrounding vertices under the umbrella operator, each vertex is shifted towards the centroid of its neighbours; then, for each vertex on the surface, the spatial position of the point is recomputed from its relation to the surrounding points and their positions in space; the smoothing formula is shown in formula (1):
\bar{x}_i = \frac{1}{N} \sum_{j=1}^{N} x_j    (1)
In the above formula, \bar{x}_i is the new position of the i-th vertex, and N is the number of vertices surrounding the point whose spatial position is being recomputed; the range of N can be determined experimentally.
4. The robot three-dimensional navigation map generation method based on Kinect according to claim 1, characterized in that the preliminary surface alignment is: a semi-automatic surface alignment method is used: a datum surface is determined first, and then the rough outline of the object is determined from the front and back surfaces.
5. The robot three-dimensional navigation map generation method based on Kinect according to claim 1, characterized in that the segmentation-based surface reconstruction is: an octree is first generated from the point cloud, and the outer-surface data based on the point cloud are then obtained; the point-cloud-to-surface reconstruction converts the data collected by Kinect into conventional surface data, yielding several independent faces of an object; these independent faces are then merged, and after merging the outer-surface outline of the point cloud is obtained.
6. The robot three-dimensional navigation map generation method based on Kinect according to claim 1, characterized in that the view-based texture mapping is: the view direction is the unit vector from the surface point in question to the camera position, and the face-n vector is the unit vector from the vertex on the object surface to the virtual Kinect position; when a particular surface is captured, the virtual Kinect direction is the direction from the current point to the Kinect; wn is the cosine of the angle between the view direction and the direction of face n; when the final texture colour is computed, a weighted average is taken so that the colour of each vertex is determined jointly by the textures of all faces.
CN201511009650.3A 2015-12-29 2015-12-29 A robot three-dimensional navigation map generation method based on Kinect Pending CN106931969A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511009650.3A CN106931969A (en) 2015-12-29 2015-12-29 A robot three-dimensional navigation map generation method based on Kinect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511009650.3A CN106931969A (en) 2015-12-29 2015-12-29 A robot three-dimensional navigation map generation method based on Kinect

Publications (1)

Publication Number Publication Date
CN106931969A true CN106931969A (en) 2017-07-07

Family

ID=59457538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511009650.3A Pending CN106931969A (en) 2015-12-29 2015-12-29 A robot three-dimensional navigation map generation method based on Kinect

Country Status (1)

Country Link
CN (1) CN106931969A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844292A (en) * 2017-10-31 2018-03-27 苏州乐米信息科技股份有限公司 Control method and system for a curved camera track in a parkour-style running game
CN111216124A (en) * 2019-12-02 2020-06-02 广东技术师范大学 Robot vision guiding method and device based on integration of global vision and local vision
CN111216124B (en) * 2019-12-02 2020-11-06 广东技术师范大学 Robot vision guiding method and device based on integration of global vision and local vision

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170707