CN109064429B - Pseudo laser data generation method for accelerating depth image restoration by fusing GPU

Pseudo laser data generation method for accelerating depth image restoration by fusing GPU

Info

Publication number
CN109064429B
CN109064429B (Application No. CN201810868160.6A)
Authority
CN
China
Prior art keywords: value, point, depth image, depth, domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810868160.6A
Other languages
Chinese (zh)
Other versions
CN109064429A (en)
Inventor
张建华
陈付豪
李文彬
张洪华
周有杰
何伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology filed Critical Hebei University of Technology
Priority to CN201810868160.6A priority Critical patent/CN109064429B/en
Publication of CN109064429A publication Critical patent/CN109064429A/en
Application granted granted Critical
Publication of CN109064429B publication Critical patent/CN109064429B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pseudo laser data generation method that fuses a GPU (graphics processing unit) to accelerate depth image restoration, comprising the following steps: (1) acquire a depth image with an RGB-D camera; (2) determine a data acquisition domain, which reduces the repair area of the depth image, lowers the computational complexity of the system, and improves repair efficiency; (3) detect 0-value points in the data acquisition domain, label connected domains with an 8-connectivity labeling method, and discard small connected domains to further improve real-time performance; (4) repair the connected domains of 0-value points with an improved 0-value-point decision filter that takes the most frequent value in the neighborhood of a 0-value point as the repair result; this step is implemented on the CUDA parallel architecture of the GPU, which raises the repair speed and approaches the actual depth value as closely as possible; (5) output pseudo laser data with a pseudo laser data generation method. The continuity of the generated pseudo laser data is greatly improved, the map construction accuracy of visual SLAM is effectively improved, and the requirements of a visual SLAM system are met.

Description

Pseudo laser data generation method for accelerating depth image restoration by fusing GPU
Technical Field
The invention belongs to the field of computer vision and vision navigation systems, and particularly relates to a pseudo laser data generation method for accelerating depth image restoration by fusing a GPU.
Background
Simultaneous localization and mapping (SLAM) is a key technology that enables a mobile robot to explore, sense, and navigate an unknown environment: by acquiring environmental information in real time, the robot estimates its own motion pose and builds a map of its surroundings. In existing SLAM solutions, lidar has been the dominant environment-sensing device thanks to its wide field of view and high accuracy, but its high cost and its ability to detect obstacles only at a single height have limited its wide adoption. With the development of computer vision and the broad application of vision sensors in recent years, SLAM based on vision sensors such as monocular cameras, binocular cameras, and RGB-D cameras has become a research hotspot. Compared with lidar, an RGB-D camera based on physical measurement acquires depth data quickly, costs little, and has good application prospects.
When an RGB-D camera is used for indoor SLAM and a two-dimensional environment map is output, the collection of depth information is affected by ambient light, object materials, and the sensor's hardware and software. The acquired depth image therefore suffers from missing data, i.e., a large number of pixels whose depth value is 0, which lowers the accuracy of the pseudo laser data used to detect target objects and degrades the mapping accuracy of the subsequent SLAM.
Analysis shows that repairing the depth image is the basis for improving the accuracy of the SLAM system. Depth image restoration algorithms mainly exploit the correlation of depth values among neighboring pixels in the depth image to restore the missing data. Among common filtering algorithms, mean filtering is a linear method that repairs a pixel with the average value over a window, but it cannot protect image detail well: while denoising the image it also damages its detail regions. Median filtering is a nonlinear method that is very effective at smoothing impulse noise and can protect sharp edges, but taking the middle value of the template pixels makes the repair value unrepresentative and highly distorted. In addition, the document of application No. 201610147426.9 discloses a repair algorithm that processes the depth image by combining morphological filtering and bilateral filtering: holes in the depth image are filled with several basic morphological operations and the image is then smoothed with bilateral filtering. However, this multilateral joint iterative filter involves a large amount of computation, occupies substantial computing resources, and seriously affects the real-time performance of the system.
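For reference only, the following Python/SciPy sketch (not part of the claimed method) applies the two baseline filters discussed above to a depth image containing a block of 0-valued pixels; the image size, values, and hole position are illustrative assumptions.

```python
# Illustrative only: baseline mean and median filtering of a depth image with
# missing (0-valued) pixels, using SciPy. Sizes and the hole are made up.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

rng = np.random.default_rng(0)
depth = rng.integers(500, 4000, size=(424, 512)).astype(np.float32)
depth[200:210, 100:140] = 0.0                 # simulated region of missing depth

mean_repair = uniform_filter(depth, size=5)   # linear mean filter: the zeros drag the
                                              # window average down and edges are blurred
median_repair = median_filter(depth, size=5)  # nonlinear median filter: preserves sharp
                                              # edges, but the window median need not be
                                              # close to the true missing depth value
```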
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention addresses the technical problem of providing a pseudo laser data generation method that fuses a GPU to accelerate depth image restoration.
The technical solution adopted to solve this problem is a pseudo laser data generation method for accelerating depth image restoration by fusing a GPU, characterized by comprising the following steps:
step one, collecting a depth image by using an RGB-D camera;
step two, determining a data acquisition domain: calculating a horizontal central line of the depth image, and defining a neighborhood range containing d rows of pixels up and down as a data acquisition domain by taking the horizontal central line as a reference;
step three, detecting a 0-value point of data acquisition, marking a connected domain, and removing a smaller connected domain:
(1) traversing the data acquisition domain and detecting points whose pixel value is 0; let a point A(x, y) in the data acquisition domain be a 0-value point, i.e., depth[x][y] = 0; point A has 8 neighborhood pixels; point A is labeled with an 8-connectivity labeling method: if other 0-value points exist around point A, they are included in the neighborhood range of point A, otherwise they are not included; after labeling, the 0-value points in the data acquisition domain are grouped into connected domains;
(2) calculating the area of the connected domain, and discarding the connected domain with the area less than 10;
step four, improved 0-value point decision filtering restoration based on the GPU:
(1) for a connected domain with an area larger than 10, repairing starts from its upper left corner; a 5 × 5 region B, equivalent to the filter template, is selected with the 0-value point to be repaired as its center;
(2) the 0-value point is repaired with the 5 × 5 filter template: first, the 5 × 5 filter template is traversed, the non-0 values Dj are collected, and the frequency Kj of each value is counted; the values in region B are sorted in descending order of frequency; then the frequency of D1 in the inner layer and in the outer layer of the filter template is counted separately; if the frequency of D1 in both the inner and outer layers of the filter template exceeds a set threshold, the statistic D1 is taken as the repair value of the 0-value point; otherwise D2 is tested in the same way, and so on;
(3) when repairing the second 0-value point in the same connected domain, after selecting the statistical value in the step (2), averaging the statistical value selected by the second 0-value point and the repairing value of the first 0-value point, and using the average value as the repairing value of the second 0-value point;
(4) for each subsequent 0-value point, after its neighborhood statistic is obtained, the statistic is averaged with all previous repair values, and the resulting mean is used as the repair value of that point;
(5) repeating steps four (2)-(4) until all 0-value points of the connected domain are repaired;
step five, generating pseudo laser data: and after the 0-value point in the data acquisition domain is repaired, outputting the pseudo laser data by combining a pseudo laser data generation method.
Compared with the prior art, the invention has the beneficial effects that:
(1) When determining the data acquisition domain, the method does not use the whole depth image; instead it determines a data acquisition domain centered on the image center, which reduces the repair area of the depth image, lowers the computational complexity of the system, and improves repair efficiency and real-time performance.
(2) When detecting 0-value points in the data acquisition domain, connected domains that have little influence on accuracy are eliminated, which further improves the real-time performance of the algorithm.
(3) For repairing 0-value points, the most frequent value among the neighborhood pixels of the 0-value point is used as the repair result, which is more accurate than traditional mean, median, and Gaussian filters; the high-resolution video image processing algorithm is accelerated with the CUDA (Compute Unified Device Architecture) parallel structure of the GPU (graphics processing unit), which greatly reduces computation time, raises the repair speed, approaches the actual depth value as closely as possible, and improves the accuracy of the repair value. Compared with multilateral joint filtering, the algorithm is simple to implement and has good real-time performance.
(4) The continuity of the pseudo laser data generated after repair by this algorithm is greatly improved and accurate pseudo laser data are output, which improves the robustness of the SLAM system, solves problems such as discontinuous and inaccurate pseudo laser data in visual SLAM, effectively improves the map construction accuracy of visual SLAM, and meets the requirements of a visual SLAM system.
Drawings
FIG. 1 is a diagram of a distribution of 8 neighborhood pixel points of a 0-valued point A in a data acquisition domain according to an embodiment of a method for generating pseudo laser data for fusing GPU accelerated depth image restoration of the present invention;
FIG. 2 is a 5 × 5 filter template according to an embodiment of the method for generating pseudo laser data for accelerating depth image restoration by merging with a GPU of the present invention;
FIG. 3 is a depth measurement model of an RGB-D camera according to an embodiment of a pseudo laser data generation method for accelerating depth image restoration by fusing a GPU of the present invention;
Detailed Description
Specific examples of the present invention are given below. The specific examples are only intended to illustrate the invention in further detail and do not limit the scope of protection of the claims of the present application.
The invention provides a pseudo laser data generation method (hereinafter, the method) for fusing a GPU to accelerate depth image restoration, which is characterized by comprising the following steps:
step one, collecting a depth image by using an RGB-D camera: when the mobile robot performs indoor SLAM, a depth image of the environment is acquired in real time, as the robot moves, through an RGB-D camera fixed on the robot; the RGB-D camera is a Kinect V2;
step two, determining a data acquisition domain: calculating the horizontal center line of the depth image and, taking it as the reference, defining the neighborhood range containing d rows of pixels above and below it as the data acquisition domain; the choice of d affects obstacle detection accuracy: if d is too small, lower obstacles are not detected; if d is too large, the ground is introduced into the data acquisition domain, which seriously interferes with the detection of normal obstacles; after repeated experiments, d is set to 25-35; a typical RGB-D depth image is 512 × 424 pixels, so the determined data acquisition domain is 512 × 51 to 512 × 71;
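A minimal NumPy sketch of this step follows; the frame shape (424 rows × 512 columns), the function name, and the default d = 30 are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def extract_acquisition_domain(depth: np.ndarray, d: int = 30) -> np.ndarray:
    """Band of 2*d + 1 rows centred on the horizontal centre line of the depth image."""
    center = depth.shape[0] // 2               # row index of the horizontal centre line
    return depth[center - d:center + d + 1, :]

depth = np.zeros((424, 512), dtype=np.uint16)  # stand-in for one Kinect V2 depth frame
domain = extract_acquisition_domain(depth, d=30)
print(domain.shape)                            # (61, 512): within the 51..71-row range above
```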
step three, detecting a 0-value point of data acquisition, marking a connected domain, and removing a smaller connected domain:
(1) traversing the data acquisition domain and detecting 0 values, i.e., points whose pixel value is 0; let a point A(x, y) in the data acquisition domain be a 0-value point, i.e., depth[x][y] = 0; then point A has 8 neighborhood pixels (except for edge points), as in FIG. 1; point A is labeled with an 8-connectivity labeling method, in this embodiment by analyzing and labeling the 8 directions from top to bottom and from left to right; if other 0-value points exist around point A, they are included in the neighborhood range of point A, otherwise they are not included; after labeling, the 0-value points in the data acquisition domain are grouped into connected domains;
(2) calculating the area of each connected domain and, to improve repair speed, discarding connected domains with an area smaller than 10 (i.e., smaller connected domains); experiments show that connected domains with an area smaller than 10 are scattered and their influence can be ignored;
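As a rough CPU-side illustration of this step, the sketch below labels 0-valued pixels with 8-connectivity and keeps only regions whose area is at least 10; it relies on scipy.ndimage.label, and the function name and return format are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np
from scipy.ndimage import label

def find_zero_regions(domain: np.ndarray, min_area: int = 10):
    """Label 0-valued pixels with 8-connectivity and drop regions smaller than min_area."""
    zero_mask = (domain == 0)
    # a 3x3 structuring element of ones gives 8-connected (diagonal-including) labelling
    labels, num_regions = label(zero_mask, structure=np.ones((3, 3), dtype=int))
    kept_ids = [rid for rid in range(1, num_regions + 1)
                if np.count_nonzero(labels == rid) >= min_area]
    return labels, kept_ids   # small, scattered regions are simply ignored
```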
step four, improved 0-value point decision filtering restoration based on the GPU:
(1) for a connected domain with an area larger than 10, repairing starts from its upper left corner; a 5 × 5 region B, equivalent to the filter template, is selected with the 0-value point to be repaired as its center;
(2) the 0-value point is repaired with the 5 × 5 filter template (see FIG. 2): first, the 5 × 5 filter template is traversed, the non-0 values Dj are collected, and the frequency Kj of each value is counted; the values in region B are sorted in descending order of frequency as D1, D2, …, with corresponding frequencies K1 > K2 > …; then the frequency of D1 in the inner layer and in the outer layer of the filter template is counted separately; if the frequency of D1 in both the inner and outer layers of the filter template exceeds a set threshold, the statistic D1 is taken as the repair value of the 0-value point; otherwise D2 is tested in the same way, and so on;
(3) when the actual depth of an obstacle differs greatly from that of the environmental background, the choice of repair value at the junction between the obstacle and the background becomes critical; therefore, when repairing the second 0-value point in the same connected domain, after selecting its statistic as in step four (2), the statistic is averaged with the repair value of the first 0-value point, and this average is used as the repair value of the second 0-value point;
(4) for each subsequent 0-value point, after its neighborhood statistic is obtained, the statistic is averaged with all previous repair values, and the resulting mean is used as the repair value of that point;
(5) repeating steps four (2)-(4) until all 0-value points of the connected domain are repaired;
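In the patent this per-point repair is parallelized on the CUDA architecture of the GPU; the sketch below is a sequential CPU reference in Python. The "inner layer" is taken here as the 3 × 3 ring around the centre pixel, the "outer layer" as the remaining ring of the 5 × 5 template, and the frequency threshold is set to 2; all of these are assumptions of this sketch rather than values fixed by the patent.

```python
import numpy as np
from collections import Counter

def decision_fill_value(window: np.ndarray, min_count: int = 2) -> float:
    """Pick a repair value for the centre pixel of a 5x5 window whose centre is 0.

    Non-zero values are ranked by frequency (D1, D2, ...); a candidate is accepted
    only if it occurs at least min_count times in the inner 3x3 layer AND in the
    outer layer of the template (threshold and layer definition are assumptions).
    """
    inner = window[1:4, 1:4].copy()
    inner[1, 1] = 0                                   # exclude the centre pixel itself
    outer = window.copy()
    outer[1:4, 1:4] = 0                               # keep only the outer ring
    counts = Counter(window[window > 0].tolist())
    for value, _freq in counts.most_common():         # D1, D2, ... in descending frequency
        if (np.count_nonzero(inner == value) >= min_count and
                np.count_nonzero(outer == value) >= min_count):
            return float(value)
    # fall back to the most frequent non-zero value, or 0 if the window has none
    return float(counts.most_common(1)[0][0]) if counts else 0.0

def repair_region(domain: np.ndarray, labels: np.ndarray, region_id: int) -> np.ndarray:
    """Repair one connected region of 0-value points in raster (upper-left first) order."""
    domain = domain.astype(np.float32)                # work on a float copy and return it
    repaired = []                                     # repair values already chosen in this region
    ys, xs = np.nonzero(labels == region_id)
    for y, x in sorted(zip(ys.tolist(), xs.tolist())):
        window = np.zeros((5, 5), dtype=np.float32)   # zero-padded at the domain border
        y0, y1 = max(y - 2, 0), min(y + 3, domain.shape[0])
        x0, x1 = max(x - 2, 0), min(x + 3, domain.shape[1])
        window[y0 - y + 2:y1 - y + 2, x0 - x + 2:x1 - x + 2] = domain[y0:y1, x0:x1]
        stat = decision_fill_value(window)
        # average the new statistic with every repair value chosen so far in this region
        value = (stat + sum(repaired)) / (len(repaired) + 1) if repaired else stat
        domain[y, x] = value
        repaired.append(value)
    return domain
```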
step five, generating pseudo laser data: after the 0-value point in the data acquisition domain is repaired, outputting pseudo laser data by combining a pseudo laser data generation method;
(1) detecting the minimum pixel value: in the pixel coordinate system, the leftmost column of the data acquisition domain is defined as the first column; the minimum value among the pixels of the first column is detected and the corresponding pixel point M(u, v) is recorded; its minimum depth value is depth[u][v], and it corresponds to a point M(x, y, z) in the camera coordinate system;
(2) coordinate conversion: in the data acquisition domain, the point M(u, v) is given in pixel coordinates and specifically describes the position of point M in the depth image; the origin of the pixel coordinate system is at the upper left corner of the depth image, and the u axis and v axis are parallel to the sides of the depth image; depth[u][v] is not the actual distance r from point M to the camera optical center O, but the perpendicular distance z from point M to the plane of the camera optical center, so z = depth[u][v]; the coordinates of point M in the camera coordinate system are:
[Equation published as an image in the original: the camera-frame coordinates (x, y, z) of M computed from the pixel coordinates (u, v), the depth z, and the camera intrinsic matrix K.]
where K is the intrinsic matrix of the RGB-D camera;
(3) calculating the simulated angle: the depth corresponding to the pseudo laser data is the distance r from M to the camera optical center; the angle θ (the simulated angle) between point M and the z axis in the camera coordinate system is calculated:
[Equation published as an image in the original: the simulated angle θ computed from the coordinates of point M.]
(4) solving the laser groove position: the pseudo laser data are stored in the corresponding laser grooves N = {n_i}, α ≤ i ≤ β, where α and β correspond to the minimum and maximum angles of the laser scanning positions respectively; the position i of point M in the laser grooves is:
[Equation published as an image in the original: the laser groove position i of point M.]
where c = α - β;
the pseudo laser data value n_i generated for point M is:
[Equation published as an image in the original: the pseudo laser data value n_i generated for point M.]
(5) after the minimum depth value of the first column has been converted into laser data, steps (1)-(4) of step five are repeated until every column of the data acquisition domain has been converted; finally, the calculated laser data values are stored and the converted pseudo laser data are output.
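Because the formulas of step five are published only as images, the sketch below fills them in with standard pinhole-camera relations as assumptions: back-projection through assumed Kinect V2-like intrinsics, a bearing angle about the z axis, a linear mapping of that angle onto laser grooves, and a planar range. The intrinsic values, angular limits, groove count, and function name are illustrative, not the patent's exact expressions; row_offset is the row index of the domain's top row in the full depth image.

```python
import numpy as np

# Assumed Kinect V2-like intrinsics (fx, fy, cx, cy); the real values come from
# the camera intrinsic matrix K mentioned in the text.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def domain_to_pseudo_laser(domain: np.ndarray, row_offset: int,
                           alpha_deg: float = -35.0, beta_deg: float = 35.0,
                           num_slots: int = 512) -> np.ndarray:
    """Convert the repaired acquisition domain into a 1-D pseudo laser scan.

    For each image column the minimum non-zero depth is back-projected into the
    camera frame, its bearing about the optical (z) axis is computed, and the
    planar range r is written into the laser groove covering that bearing.
    """
    scan = np.full(num_slots, np.inf, dtype=np.float32)   # inf marks "no return"
    for col in range(domain.shape[1]):
        column = domain[:, col].astype(np.float32)
        valid = column > 0
        if not np.any(valid):
            continue
        row = int(np.argmin(np.where(valid, column, np.inf)))
        z = column[row]                              # depth[u][v]: distance to the image plane
        u, v = col, row + row_offset                 # pixel coordinates in the full image
        x = (u - CX) * z / FX                        # back-projection, pinhole model
        y = (v - CY) * z / FY
        theta = np.degrees(np.arctan2(x, z))         # simulated scan angle about the z axis
        if not (alpha_deg <= theta <= beta_deg):
            continue
        i = int((theta - alpha_deg) / (beta_deg - alpha_deg) * (num_slots - 1))
        r = float(np.hypot(x, z))                    # planar range from the optical centre
        scan[i] = min(scan[i], r)                    # keep the nearest return per groove
    return scan
```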
Anything not described in detail in this specification belongs to the prior art known to those skilled in the art.

Claims (5)

1. A pseudo laser data generation method for accelerating depth image restoration by fusing a GPU is characterized by comprising the following steps:
step one, collecting a depth image by using an RGB-D camera;
step two, determining a data acquisition domain: calculating a horizontal central line of the depth image, and defining a neighborhood range containing d rows of pixels up and down as a data acquisition domain by taking the horizontal central line as a reference;
step three, detecting a 0-value point of data acquisition, marking a connected domain, and removing a smaller connected domain:
(1) traversing the data acquisition domain and detecting points whose pixel value is 0; let a point A(x, y) in the data acquisition domain be a 0-value point, i.e., depth[x][y] = 0; point A has 8 neighborhood pixels; point A is labeled with an 8-connectivity labeling method: if other 0-value points exist around point A, they are included in the neighborhood range of point A, otherwise they are not included; after labeling, the 0-value points in the data acquisition domain are grouped into connected domains;
(2) calculating the area of the connected domain, and discarding the connected domain with the area less than 10;
step four, improved 0-value point decision filtering restoration based on the GPU:
(1) for a connected domain with an area larger than 10, repairing starts from its upper left corner; a 5 × 5 region B, equivalent to the filter template, is selected with the 0-value point to be repaired as its center;
(2) the 0-value point is repaired with the 5 × 5 filter template: first, the 5 × 5 filter template is traversed, the non-0 values Dj are collected, and the frequency Kj of each value is counted; the values in region B are sorted in descending order of frequency; then the frequency of D1 in the inner layer and in the outer layer of the filter template is counted separately; if the frequency of D1 in both the inner and outer layers of the filter template exceeds a set threshold, the statistic D1 is taken as the repair value of the 0-value point; otherwise D2 is tested in the same way, and so on;
(3) when repairing the second 0-value point in the same connected domain, after selecting the statistical value in the step (2), averaging the statistical value selected by the second 0-value point and the repairing value of the first 0-value point, and using the average value as the repairing value of the second 0-value point;
(4) for each subsequent 0-value point, after its neighborhood statistic is obtained, the statistic is averaged with all previous repair values, and the resulting mean is used as the repair value of that point;
(5) repeating steps four (2)-(4) until all 0-value points of the connected domain are repaired;
step five, generating pseudo laser data: and after the 0-value point in the data acquisition domain is repaired, outputting the pseudo laser data by combining a pseudo laser data generation method.
2. The method for generating pseudo laser data fusing GPU to accelerate depth image restoration according to claim 1, wherein the first step is to obtain the depth image of the environment in real time along with movement of an RGB-D camera fixed on a mobile robot when the mobile robot performs indoor SLAM.
3. The method of claim 1, wherein the RGB-D camera is a Kinect V2.
4. The method for generating the pseudo laser data for the fusion GPU to accelerate the depth image restoration according to claim 1, wherein in the second step, the value of d is 25-35, and the area of the data acquisition domain is 512 x 51-512 x 71.
5. The method for generating pseudo laser data for fusing GPU to accelerate depth image restoration according to claim 1, wherein the concrete steps of the fifth step are as follows:
(1) detecting the minimum pixel value: in the pixel coordinate system, the leftmost column of the data acquisition domain is defined as the first column; the minimum value among the pixels of the first column is detected and the corresponding pixel point M(u, v) is recorded; its minimum depth value is depth[u][v], and it corresponds to a point M(x, y, z) in the camera coordinate system;
(2) coordinate conversion: in the data acquisition domain, the point M(u, v) is given in pixel coordinates and describes the position of point M in the depth image; the origin of the pixel coordinate system is at the upper left corner of the depth image, and the u axis and v axis are parallel to the sides of the depth image; depth[u][v] indicates the perpendicular distance z from point M to the plane of the camera optical center, so z = depth[u][v]; the coordinates of point M in the camera coordinate system are:
[Equation published as an image in the original: the camera-frame coordinates (x, y, z) of M computed from the pixel coordinates (u, v), the depth z, and the camera intrinsic matrix K.]
where K is the intrinsic matrix of the RGB-D camera;
(3) calculating the simulated angle: the depth corresponding to the pseudo laser data is the distance r from M to the camera optical center; the angle θ between point M and the z axis in the camera coordinate system is calculated, where θ is the simulated angle:
[Equation published as an image in the original: the simulated angle θ computed from the coordinates of point M.]
(4) solving the laser groove position: the pseudo laser data are stored in the corresponding laser grooves N = {n_i}, α ≤ i ≤ β, where α and β correspond to the minimum and maximum angles of the laser scanning positions respectively; the position i of point M in the laser grooves is:
[Equation published as an image in the original: the laser groove position i of point M.]
where c = α - β;
the pseudo laser data value n_i generated for point M is:
[Equation published as an image in the original: the pseudo laser data value n_i generated for point M.]
(5) after the minimum depth value of the first column has been converted into laser data, repeating steps (1)-(4) of step five until every column of the data acquisition domain has been converted; and finally outputting the converted pseudo laser data.
CN201810868160.6A 2018-08-02 2018-08-02 Pseudo laser data generation method for accelerating depth image restoration by fusing GPU Active CN109064429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810868160.6A CN109064429B (en) 2018-08-02 2018-08-02 Pseudo laser data generation method for accelerating depth image restoration by fusing GPU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810868160.6A CN109064429B (en) 2018-08-02 2018-08-02 Pseudo laser data generation method for accelerating depth image restoration by fusing GPU

Publications (2)

Publication Number Publication Date
CN109064429A CN109064429A (en) 2018-12-21
CN109064429B true CN109064429B (en) 2022-02-08

Family

ID=64832644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810868160.6A Active CN109064429B (en) 2018-08-02 2018-08-02 Pseudo laser data generation method for accelerating depth image restoration by fusing GPU

Country Status (1)

Country Link
CN (1) CN109064429B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017080420A1 (en) * 2015-11-09 2017-05-18 Versitech Limited Auxiliary data for artifacts –aware view synthesis

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110078183A (en) * 2009-12-30 2011-07-07 Samsung Electronics Co., Ltd. Method and apparatus for generating 3d image data
CN103455984A (en) * 2013-09-02 2013-12-18 清华大学深圳研究生院 Method and device for acquiring Kinect depth image
CN104822059A (en) * 2015-04-23 2015-08-05 东南大学 Virtual viewpoint synthesis method based on GPU acceleration
CN105894047A (en) * 2016-06-28 2016-08-24 深圳市唯特视科技有限公司 Human face classification system based on three-dimensional data
CN106485672A (en) * 2016-09-12 2017-03-08 西安电子科技大学 Image enhancing method combining improved block-matching repair and joint trilateral steerable filtering
CN106898048A (en) * 2017-01-19 2017-06-27 大连理工大学 Undistorted integral imaging 3D display method suitable for complex scenes
CN106920263A (en) * 2017-03-10 2017-07-04 大连理工大学 Undistorted integral imaging 3D display method based on Kinect

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Depth enhancement using RGB-D guided filtering";Tak-Wai Hui等;《2014 IEEE International Conference on Image Processing (ICIP)》;20150129;page1-7 *
"基于RGB-D摄像机的同步定位与建图研究";辛冠希;《中国硕士学位论文全文数据库(电子期刊)信息科技辑》;20170228;第156-162页 *
"基于RGB-D相机的实时人数统计方法";张华等;《计算机工程与应用》;20140918;I138-4169 *

Also Published As

Publication number Publication date
CN109064429A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN108519605B (en) Road edge detection method based on laser radar and camera
CN109684921B (en) Road boundary detection and tracking method based on three-dimensional laser radar
CN111781608B (en) Moving target detection method and system based on FMCW laser radar
CN111046776A (en) Mobile robot traveling path obstacle detection method based on depth camera
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN103605978A (en) Urban illegal building identification system and method based on three-dimensional live-action data
WO2020237516A1 (en) Point cloud processing method, device, and computer readable storage medium
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN103679167A (en) Method for processing CCD images
CN114639115B (en) Human body key point and laser radar fused 3D pedestrian detection method
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
CN107220964A (en) A kind of linear feature extraction is used for geology Taking stability appraisal procedure
CN116449384A (en) Radar inertial tight coupling positioning mapping method based on solid-state laser radar
CN116188417A (en) Slit detection and three-dimensional positioning method based on SLAM and image processing
Wang et al. A method for detecting windows from mobile LiDAR data
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN109727255B (en) Building three-dimensional model segmentation method
CN109064429B (en) Pseudo laser data generation method for accelerating depth image restoration by fusing GPU
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN113192133B (en) Monocular instant positioning and dense semantic map construction method based on semantic plane
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
CN114429469A (en) Heading machine body pose determination method and system based on three-laser-spot target
Yang et al. LiDAR data reduction assisted by optical image for 3D building reconstruction
CN113674360A (en) Covariant-based line structured light plane calibration method
Guo et al. 3D Lidar SLAM Based on Ground Segmentation and Scan Context Loop Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant