CN107392845A - Method of 3D point cloud imaging and positioning - Google Patents

Method of 3D point cloud imaging and positioning

Info

Publication number
CN107392845A
CN107392845A (application CN201710642043.3A)
Authority
CN
China
Prior art keywords
point
point cloud
template
region
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710642043.3A
Other languages
Chinese (zh)
Inventor
刘大鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Cloud Robot Co Ltd
Original Assignee
Wuhu Cloud Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhu Cloud Robot Co Ltd filed Critical Wuhu Cloud Robot Co Ltd
Priority to CN201710642043.3A
Publication of CN107392845A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/06 - Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a 3D point cloud imaging method, characterized in that: a laser emits a scanning beam onto the surface of the object under test; a binocular imaging system acquires the information of each laser spot; using the calibrated parameters of the cameras, each laser spot is converted from the image plane to space to obtain its spatial coordinates; and the information of all scan points is integrated to obtain a 3D point cloud model. The advantages of the invention are: during laser scanning, two cameras locate the beam spot in real time and binocular stereo vision provides the three-dimensional information of each laser spot, which avoids dependence on the positioning accuracy of the laser scan and avoids the performance degradation of mechanical and optical structures caused by use; the acquired 3D point cloud information can be analyzed to judge whether a missed-scan region exists, and any missed region can be rescanned to obtain new 3D point cloud information, reducing the image loss caused by missed scans.

Description

Method of 3D point cloud imaging and positioning
Technical field
The present invention relates to the field of computer-generated imaging, and more particularly to a method of 3D point cloud imaging and positioning.
Background art
3D point cloud imaging by laser means that each laser scan point is captured by a camera and, based on the geometry of laser triangulation, the two-dimensional information captured by the camera is converted into three-dimensional information. Once all laser points have been captured, a 3D point cloud reflecting the surface form of the object can be generated. The motion coordinate system not only yields the surface form of the object, it also fixes the pose of the object coordinate system relative to the world coordinate system. 3D point cloud imaging and positioning occupy an important position in the research and application of reverse engineering.
The prior art uses laser triangulation with a single lens/sensor. Such methods place high demands on the positioning accuracy of the laser, the optical path is complex, and the accuracy degrades continually during use. When facing an extremely complex surface, laser scanning blind spots appear and the 3D point cloud image contains gaps.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a method of 3D point cloud imaging and positioning. The point cloud imaging method acquires the laser point data with a binocular imaging system, which increases the acquisition accuracy; the resulting 3D point cloud information can be checked to judge whether a missed-scan part exists, and any missed part can be rescanned; at the same time, the motion coordinate system of the object can be obtained to determine the pose of the scanned object, providing a data basis for recognition and automatic machine control after scanning.
To achieve these goals, the technical solution adopted by the invention is: a 3D point cloud imaging method comprising a binocular imaging system and a laser. The laser emits a scanning beam onto the surface of the object under test, and the binocular imaging system acquires the information of each laser spot. Using the calibrated parameters of the two cameras, each laser spot is converted from the image plane to space to obtain its spatial coordinates, and the information of all scan points is integrated to obtain a 3D point cloud model.
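The claim does not fix a particular implementation of this plane-to-space conversion. The following is a minimal sketch of one common realization, assuming the same laser spot has been detected in both camera images and that calibration yields OpenCV-style intrinsic matrices plus a rotation and translation between the cameras; all names and numeric values are illustrative, not taken from the patent.

```python
# Minimal sketch: triangulating a laser spot from two calibrated cameras.
# Assumes OpenCV-style calibration results (K1, K2, R, t) -- an assumption,
# since the patent does not prescribe a particular calibration convention.
import numpy as np
import cv2

def laser_spot_to_3d(pt_cam1, pt_cam2, K1, K2, R, t):
    """Convert one matched laser-spot pixel pair into a 3D point.

    pt_cam1, pt_cam2 : (u, v) pixel coordinates of the same laser spot
    K1, K2           : 3x3 intrinsic matrices of the two cameras
    R, t             : rotation (3x3) and translation (3,) of camera 1's
                       frame expressed in camera 2 (extrinsics)
    """
    # Projection matrices: camera 1 is the reference frame.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])

    # cv2.triangulatePoints expects 2xN arrays of pixel coordinates.
    x1 = np.array(pt_cam1, dtype=float).reshape(2, 1)
    x2 = np.array(pt_cam2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()             # (X, Y, Z) in camera-1 frame

# Example with made-up calibration values: integrating every scanned spot
# into one array yields the 3D point cloud model described in the claim.
if __name__ == "__main__":
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    R = np.eye(3)
    t = np.array([-0.1, 0.0, 0.0])                # camera 2 is 10 cm to the right
    spots = [((310.0, 245.0), (250.0, 245.0))]    # matched pixel pairs
    cloud = np.array([laser_spot_to_3d(a, b, K, K, R, t) for a, b in spots])
    print(cloud)
```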
The obtained 3D point cloud model is checked for closure to determine whether an unclosed surface exists. If one exists, the scan contains omitted points; the region where the omitted points are located is determined, that region is rescanned, and the 3D point cloud model is updated.
Determining whether an unclosed surface exists comprises the following steps:
Step 1: centered on each point of the 3D point cloud model, delimit a detection region corresponding to that point;
Step 2: calculate the point cloud density of each detection region;
Step 3: judge from the point cloud density of each detection region whether the density is abnormal; if the point cloud density is abnormal, an unscanned region exists, that is, the 3D point cloud has an unclosed surface.
If a detection region in Step 3 is a region of abnormal point cloud density, the center point of that detection region is a boundary point. The region enclosed by the contour line formed by connecting adjacent boundary points is a region containing missed scan points; that region is rescanned and the 3D point cloud information is updated.
After the boundary points are obtained, they are filtered to remove noise points. Here the boundary points are treated as suspicious points. Specifically, a cubic three-dimensional convolution region is delimited around each suspicious point and divided into grid cells, and a weight is defined for each cell: cells closer to the suspicious point have higher weights and cells farther from it have lower weights. The set of weights constitutes a filter, which can also be called a convolution kernel; convolution with this kernel is carried out for each suspicious point, which is essentially the calculation of a weighted average. If a cell contains a certain number of suspicious points, that number is multiplied by the weight of the cell; if a cell contains no suspicious points, the count is zero. After the weighted average of the suspicious point counts over all cells is calculated, a score is obtained; if the score is below a critical threshold, the point is a noise point and is removed. After the noise points are filtered out, the remaining adjacent boundary points are connected to form the contour line.
The feature points characterizing the shape of the object, the three-dimensional coordinates of the feature points, and the normal vectors of the planes corresponding to the feature points are extracted from the updated 3D point cloud, and the motion coordinate system of the scanned object is obtained by the ICP algorithm.
The method of obtaining the feature points and plane normal vectors is as follows: by the ICP algorithm, the 3D point cloud is blurred with Gaussian filters of different strengths, yielding new point clouds that highlight different features of the three-dimensional point cloud. The new point clouds are then compared with the original point cloud to obtain the extreme points of a coherence factor within a certain neighborhood. These extreme points are the feature points reflecting the shape features of the scanned object. In the original point cloud, each feature point and its adjacent points are found from the coordinate information, a plane tangent to the point cloud and passing through the feature point is fitted, and the normal vector of that plane is computed.
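The feature extraction is only outlined in the patent. The sketch below is one possible reading of it, assuming the "blurring" is a Gaussian-weighted smoothing of point positions, the "coherence factor" is the displacement of a point under that smoothing, and the tangent-plane normal comes from a PCA fit over neighboring points; radii, scales and thresholds are assumptions.

```python
# Hedged sketch of the feature-point / normal-vector step described above.
# The smoothing scale, saliency measure and neighborhood sizes are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def gaussian_smooth(points, sigma):
    """Blur a point cloud by replacing each point with a Gaussian-weighted
    average of its neighbors (a point-cloud analogue of Gaussian filtering)."""
    tree = cKDTree(points)
    smoothed = np.empty_like(points)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=3.0 * sigma)
        d2 = np.sum((points[idx] - p) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        smoothed[i] = (w[:, None] * points[idx]).sum(axis=0) / w.sum()
    return smoothed

def plane_normal(neighbors):
    """Fit a plane to neighboring points and return its unit normal
    (eigenvector of the covariance matrix with the smallest eigenvalue)."""
    centered = neighbors - neighbors.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)
    return vecs[:, 0]

def extract_features(points, sigma=0.01, k=20, keep_ratio=0.05):
    """Feature points = points that move the most under blurring;
    each feature point gets the normal of its locally fitted plane."""
    saliency = np.linalg.norm(points - gaussian_smooth(points, sigma), axis=1)
    order = np.argsort(saliency)[::-1]
    feat_idx = order[: max(1, int(keep_ratio * len(points)))]
    tree = cKDTree(points)
    normals = []
    for i in feat_idx:
        _, nn = tree.query(points[i], k=k)
        normals.append(plane_normal(points[nn]))
    return points[feat_idx], np.array(normals)
```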
This normal vector is defined as the feature vector of the feature point. All feature points, their feature vectors, and their three-dimensional coordinates are retrieved against a database with the ICP algorithm. The database contains three-dimensional point cloud templates of the user's field of application; each template has a custom motion coordinate system, determined in combination with the shape features of the template so that the position and posture of the object can be clearly expressed. After the ICP algorithm has retrieved a template, it returns the attribute definition of that template, and also returns the spatial vector relation between each feature point and the corresponding feature point of the template. From these vector relations, the motion coordinate system of the three-dimensional point cloud can be defined according to the rule of the motion coordinate system defined in the template; and, by a spatial coordinate transformation algorithm, the spatial vector relations of the feature points are used to obtain the mapping between the custom three-dimensional point cloud coordinate system and the template coordinate system.
The advantage of the invention is that: during laser scanning, two cameras locate the beam spot in real time and binocular stereo vision provides the three-dimensional information of each laser spot, which avoids dependence on the positioning accuracy of the laser scan and avoids the performance degradation of mechanical and optical structures caused by use; the acquired 3D point cloud information can be analyzed to judge whether a missed-scan region exists, and any missed region can be rescanned to obtain new 3D point cloud information, reducing the image loss caused by missed scans.
Brief description of the drawings
The content expressed in each accompanying drawing of the description and the markings in the figures are briefly described below:
Fig. 1 is a schematic flow chart of the 3D point cloud imaging and positioning method of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in further detail below through the description of a preferred embodiment with reference to the accompanying drawing.
As shown in Fig. 1, in a method of 3D point cloud imaging and positioning, the binocular imaging system is used to acquire the information of the laser spots; each laser spot is converted from the image plane to space using the calibrated parameters of the binocular imaging system formed by the two cameras, the spatial coordinates of each laser spot are obtained, and the information of all scan points is integrated to obtain a 3D point cloud model. The binocular imaging system consists of two cameras and a computer that controls them; calibration of the binocular imaging system includes the intrinsic and extrinsic parameters, after which the binocular imaging algorithm performs the conversion to spatial coordinates. The algorithm and parameter calibration can be realized with an existing binocular imaging system.
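The embodiment leaves the calibration procedure to existing techniques. As one illustration, the sketch below performs chessboard-based intrinsic and extrinsic calibration of the two cameras with OpenCV; the board geometry and image paths are placeholders, not values given in the patent.

```python
# Illustrative sketch of intrinsic/extrinsic calibration for the two cameras,
# using a chessboard target and OpenCV (one existing technique; the patent
# does not mandate a specific procedure).
import numpy as np
import cv2
import glob

BOARD = (9, 6)          # inner corners per chessboard row/column (assumed)
SQUARE = 0.025          # square size in meters (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts1, img_pts2 = [], [], []
size = None
for f1, f2 in zip(sorted(glob.glob("cam1/*.png")), sorted(glob.glob("cam2/*.png"))):
    g1 = cv2.imread(f1, cv2.IMREAD_GRAYSCALE)
    g2 = cv2.imread(f2, cv2.IMREAD_GRAYSCALE)
    ok1, c1 = cv2.findChessboardCorners(g1, BOARD)
    ok2, c2 = cv2.findChessboardCorners(g2, BOARD)
    if ok1 and ok2:
        obj_pts.append(objp)
        img_pts1.append(c1)
        img_pts2.append(c2)
        size = g1.shape[::-1]

# Intrinsics of each camera first, then the extrinsics (R, T) between them.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts1, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts2, size, None, None)
_, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# K1, K2 (intrinsics) and R, T (extrinsics) are then used for the
# plane-to-space conversion of each laser spot.
print(K1, K2, R, T)
```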
The 3D point cloud information obtained after scanning may contain missed scan points, so the missed points must be detected: the obtained 3D point cloud model is checked for closure to determine whether an unclosed surface exists. If one exists, the scan contains omitted points; the region where the omitted points are located is determined, that region is rescanned, and the 3D point cloud model is updated. If the surface of the 3D point cloud is continuous, no points have been omitted by the scan; if it is discontinuous, there are one or more discontinuity points at the discontinuity.
After the missed scan points have been identified, the region where they are located must be determined and then rescanned, so as to update the point cloud information, make it more accurate, and reduce the loss of the image. Determining whether an unclosed surface exists comprises the following steps:
Step 1: centered on each point of the 3D point cloud model, delimit a detection region corresponding to that point;
Step 2: calculate the point cloud density of each detection region;
Step 3: judge from the point cloud density of each detection region whether the density is abnormal; if the point cloud density is abnormal, an unscanned region exists, that is, the 3D point cloud has an unclosed surface. Whether the point cloud density is abnormal is judged by comparing the calculated point cloud density with a standard threshold density: when no scan points are missed, the point cloud density is essentially constant; when there are unscanned points or regions, the point cloud density in the detection region drops markedly below the standard density value, in which case the point cloud density is considered abnormal and a missed-scan region may be present. A sketch of this density check is given below.
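A minimal sketch of Steps 1 to 3, assuming each detection region is a sphere of fixed radius around a point, the point cloud density is the neighbor count divided by the sphere volume, and a density below the standard threshold marks the center point as abnormal; the radius and threshold are illustrative assumptions.

```python
# Hedged sketch of the closure / missed-scan check (Steps 1-3 above).
# The spherical detection region, radius and threshold are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def find_boundary_points(points, radius=0.01, density_threshold=None):
    """Return indices of points whose local point cloud density is abnormal.

    points            : (N, 3) array, the 3D point cloud model
    radius            : radius of the detection region around each point
    density_threshold : standard density value; if None, a fraction of the
                        median density is used as the criterion (assumption)
    """
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in points])
    volume = 4.0 / 3.0 * np.pi * radius ** 3
    density = counts / volume

    if density_threshold is None:
        density_threshold = 0.5 * np.median(density)

    # Abnormal (too sparse) regions -> their center points are boundary points.
    return np.where(density < density_threshold)[0]

# Usage sketch: the region enclosed by the contour through these boundary
# points would then be rescanned and the 3D point cloud model updated.
# boundary_idx = find_boundary_points(cloud, radius=0.01)
```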
If a detection region in Step 3 is a region of abnormal point cloud density, the center point of that detection region is a boundary point, and the region enclosed by the contour line formed by connecting adjacent boundary points is a region containing missed scan points; that region is rescanned and the 3D point cloud information is updated. After the boundary points are obtained, they are filtered to remove noise points, and the remaining adjacent boundary points are connected by a digital contour line algorithm to form the contour line; the enclosed region is then rescanned. Here the boundary points are treated as suspicious points. Specifically, a cubic three-dimensional convolution region is delimited around each suspicious point and divided into grid cells, and a weight is defined for each cell: cells closer to the suspicious point have higher weights and cells farther from it have lower weights. The set of weights constitutes a filter, which can also be called a convolution kernel; convolution with this kernel is carried out for each suspicious point, which is essentially the calculation of a weighted average. If a cell contains a certain number of suspicious points, that number is multiplied by the weight of the cell; if a cell contains no suspicious points, the count is zero. After the weighted average of the suspicious point counts over all cells is calculated, a score is obtained; if the score is below a critical threshold, the point is a noise point and is removed.
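The weighted-grid filtering of suspicious points can be sketched as follows: the cube around each suspicious point is split into cells, each cell receives a distance-decaying weight, the per-cell counts of suspicious points are combined into a weighted average, and points scoring below the critical threshold are discarded as noise. Cube size, cell count, weight shape and threshold are all assumptions, not values from the patent.

```python
# Hedged sketch of the convolution-kernel noise filter for boundary points.
# Cube size, grid resolution, weights and threshold are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def filter_noise_points(suspicious_pts, cube_size=0.05, cells=5, threshold=0.2):
    """Keep only suspicious points (an (N, 3) array) whose weighted score is high enough."""
    half = cube_size / 2.0
    cell = cube_size / cells
    center_idx = (cells - 1) / 2.0

    # Weight of each grid cell: higher near the center, lower far away.
    ii, jj, kk = np.indices((cells, cells, cells))
    dist = np.sqrt((ii - center_idx) ** 2 + (jj - center_idx) ** 2 + (kk - center_idx) ** 2)
    weights = np.exp(-dist)          # the "convolution kernel"

    tree = cKDTree(suspicious_pts)
    keep = []
    for i, p in enumerate(suspicious_pts):
        # Count suspicious points falling into each cell of the cube around p.
        counts = np.zeros((cells, cells, cells))
        for j in tree.query_ball_point(p, r=np.sqrt(3) * half):
            offset = suspicious_pts[j] - p
            if np.all(np.abs(offset) <= half):
                idx = np.clip(((offset + half) // cell).astype(int), 0, cells - 1)
                counts[idx[0], idx[1], idx[2]] += 1
        # Weighted average of the counts = the score of this suspicious point.
        score = (weights * counts).sum() / weights.sum()
        if score >= threshold:       # below the critical threshold -> noise
            keep.append(i)
    return suspicious_pts[keep]
```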
When the region enclosed by the contour line is rescanned, the optical path of the laser scan over this sub-region can be simulated, or the pose of the cameras for this region and the pose of the object in the preliminary scan can be read. These signals are used to control the binocular imaging system, the laser, and the pose of the object, so that the region is rescanned and complete 3D point cloud information is obtained.
Complete point cloud information is the basis of the whole 3D imaging and also the basis on which control using the 3D point cloud information is performed. The feature points characterizing the shape of the object, the three-dimensional coordinates of the feature points, and the normal vectors of the planes corresponding to the feature points are extracted from the updated 3D point cloud, and the motion coordinate system of the scanned object is obtained by the ICP algorithm. After obtaining the motion coordinate system, the scanning system knows the class, position and posture of the object, which improves the automation level of the scan and provides a data basis for the operations that follow scanning.
After the complete 3D point cloud model has been obtained, the plane normal vectors and position information of the feature points are extracted; this position information is matched against templates in the database by the ICP algorithm to precisely locate the motion coordinate system, providing a data basis for subsequent intelligent grasping based on 3D point cloud imaging, automatic robot vision after scanning, and so on.
The ICP algorithm is a commonly used algorithm in three-dimensional laser scanning; its full English name is Iterative Closest Point. The 3D point cloud is blurred with Gaussian filters of different strengths, yielding new point clouds that highlight different features of the three-dimensional point cloud. The new point clouds are then compared with the original point cloud to obtain the extreme points of a coherence factor within a certain neighborhood. These points are the feature points reflecting the shape features of the scanned object. In the original point cloud, each feature point and its adjacent points are found, a plane tangent to the point cloud and passing through the feature point is fitted, and the normal vector of that plane is computed. This normal vector is defined as the feature vector of the feature point. All feature points, their feature vectors, and their three-dimensional coordinates are retrieved against the database with the ICP algorithm. The database holds in advance three-dimensional point cloud templates of the user's field of application; for example, if automobile parts are scanned, the database holds in advance point cloud templates of common automobile parts. Each template has a custom motion coordinate system, determined in combination with the shape features of the template so that the position and posture of the object can be clearly expressed. After the ICP algorithm has retrieved a template, it returns the attribute definition of that template; the attribute definition is the type of the template and gives the class of the object. For example, if an automobile tire is scanned and the template obtained from the feature vectors and feature points is the tire template, the attribute definition "tire" is returned for the scanned object. At the same time, the spatial vector relation between each feature point and the corresponding feature point of the template is also returned. From these vector relations, the motion coordinate system of the three-dimensional point cloud can be defined according to the rule of the motion coordinate system defined in the template; and, by a spatial coordinate transformation algorithm, the spatial vector relations of the feature points are used to obtain the mapping between the custom three-dimensional point cloud coordinate system and the template coordinate system. With the custom motion coordinate system, the attributes of the matched template, and the spatial mapping between the custom motion coordinate system and the template coordinate system, the scanning system can automatically recognize the class of the scanned object, its placement position and its placement posture, improving the automation level of the scanning system. Of course, the 3D point cloud information can also be processed in other ways, providing a data basis for intelligent grasping after scanning, intelligent production line sorting, or automated inspection systems.
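As one concrete form of the template retrieval described above, the sketch below aligns the scanned cloud to each template in a small database with point-to-point ICP, picks the best-fitting template, and returns its attribute definition together with the rigid transformation mapping the scan coordinate system onto the template coordinate system. The Open3D library, the database layout, and the thresholds are assumptions not named in the patent, and a rough pre-alignment is assumed as the ICP initialization.

```python
# Hedged sketch: ICP-based template retrieval and coordinate-system mapping.
# Open3D, the database layout, and max_dist are assumptions.
import numpy as np
import open3d as o3d

def to_cloud(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=float))
    return pcd

def match_template(scan_points, template_db, max_dist=0.02):
    """template_db: dict mapping an attribute definition (e.g. "tire") to an
    (N, 3) array of template points, each with its own custom coordinate system.
    Returns (attribute, 4x4 transform scan -> template frame, fitness)."""
    scan = to_cloud(scan_points)
    best = (None, np.eye(4), -1.0)
    for attribute, template_points in template_db.items():
        template = to_cloud(template_points)
        result = o3d.pipelines.registration.registration_icp(
            scan, template, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if result.fitness > best[2]:
            best = (attribute, result.transformation, result.fitness)
    return best

# Usage sketch: with the returned transform, the custom motion coordinate
# system defined on the matched template can be carried over to the scan.
# attribute, T_scan_to_template, fitness = match_template(cloud, database)
```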
Obviously, the specific implementation of the present invention is not limited by the above description; any improvement of an insubstantial nature made using the method concept and technical solution of the present invention falls within the protection scope of the present invention.

Claims (7)

1. A method of 3D point cloud imaging and positioning, characterized in that: a laser emits a scanning beam onto the surface of the object under test; a binocular imaging system acquires the information of each laser spot; using the calibrated parameters of the two cameras, each laser spot is converted from the image plane to space to obtain its spatial coordinates; and the information of all scan points is integrated to obtain a 3D point cloud model.
2. The method of 3D point cloud imaging and positioning according to claim 1, characterized in that: the obtained 3D point cloud model is checked for closure to determine whether an unclosed surface exists; if one exists, the scan contains omitted points, the region where the omitted points are located is determined, that region is rescanned, and the 3D point cloud model is updated.
3. The method of 3D point cloud imaging and positioning according to claim 2, characterized in that: determining whether an unclosed surface exists comprises the following steps:
Step 1: centered on each point of the 3D point cloud model, delimit a detection region corresponding to that point;
Step 2: calculate the point cloud density of each detection region;
Step 3: judge from the point cloud density of each detection region whether the density is abnormal; if the point cloud density is abnormal, an unscanned region exists, that is, the 3D point cloud has an unclosed surface.
4. The method of 3D point cloud imaging and positioning according to claim 3, characterized in that: if a detection region in Step 3 is a region of abnormal point cloud density, the center point of that detection region is a boundary point, the region enclosed by the contour line formed by connecting adjacent boundary points is a region containing missed scan points, and that region is rescanned and the 3D point cloud information is updated.
5. The method of 3D point cloud imaging and positioning according to claim 4, characterized in that: after the boundary points are obtained, they are filtered to remove noise points, and after the noise points are filtered out the remaining adjacent boundary points are connected to form a contour line.
6. The method of 3D point cloud imaging and positioning according to claim 4, characterized in that: the feature points characterizing the shape of the object, the three-dimensional coordinates of the feature points, and the normal vectors of the planes corresponding to the feature points are extracted from the updated 3D point cloud, and the motion coordinate system of the scanned object is obtained by the ICP algorithm.
7. The method of 3D point cloud imaging and positioning according to claim 6, characterized in that: the ICP algorithm performs retrieval in a database; the database holds in advance three-dimensional point cloud templates of the user's field of application; each template is provided with a custom motion coordinate system, determined in combination with the shape features of the template so as to express the position and posture of the object; after retrieving a template, the ICP algorithm returns the attribute definition of that template, and also returns the spatial vector relation between each feature point and the corresponding feature point of the template; from these vector relations, the motion coordinate system of the three-dimensional point cloud is defined according to the rule of the motion coordinate system defined in the template; and, by a spatial coordinate transformation algorithm, the spatial vector relations of the feature points are substituted in to obtain the mapping between the custom three-dimensional point cloud coordinate system and the template coordinate system.
CN201710642043.3A 2017-07-31 2017-07-31 Method of 3D point cloud imaging and positioning Pending CN107392845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710642043.3A CN107392845A (en) 2017-07-31 2017-07-31 Method of 3D point cloud imaging and positioning


Publications (1)

Publication Number Publication Date
CN107392845A true CN107392845A (en) 2017-11-24

Family

ID=60343471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710642043.3A Pending CN107392845A (en) Method of 3D point cloud imaging and positioning

Country Status (1)

Country Link
CN (1) CN107392845A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103673916A (en) * 2012-09-06 2014-03-26 上海船舶工艺研究所 On-line detection method for line heating forming
CN102920459A (en) * 2012-09-26 2013-02-13 杭州电子科技大学 Human body circumference parameter measuring method based on three-dimensional point cloud
CN106780725A (en) * 2016-12-23 2017-05-31 西安交通大学 Binocular three-dimensional reconstruction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
柯科勇: "基于双目视觉的散乱堆放工件拾取***" (Binocular-vision-based picking *** for scattered, stacked workpieces), China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389221A (en) * 2018-01-30 2018-08-10 深圳市菲森科技有限公司 Three-dimensional image scanning method and system
CN108389221B (en) * 2018-01-30 2021-10-01 深圳市菲森科技有限公司 Three-dimensional image scanning method and system
CN108446597A (en) * 2018-02-14 2018-08-24 天目爱视(北京)科技有限公司 Visible-light-camera-based biometric 3D data acquisition method and device
CN108492357A (en) * 2018-02-14 2018-09-04 天目爱视(北京)科技有限公司 Laser-based 3D four-dimensional data acquisition method and device
CN108446597B (en) * 2018-02-14 2019-06-25 天目爱视(北京)科技有限公司 Visible-light-camera-based biometric 3D data acquisition method and device
CN108389233A (en) * 2018-02-23 2018-08-10 大连理工大学 Laser scanner and camera calibration method based on boundary constraint and mean approximation
CN111311679A (en) * 2020-01-31 2020-06-19 武汉大学 Free floating target pose estimation method based on depth camera
CN111311679B (en) * 2020-01-31 2022-04-01 武汉大学 Free floating target pose estimation method based on depth camera
CN114998522A (en) * 2022-06-15 2022-09-02 中国测绘科学研究院 Method and system for accurately extracting dense point cloud of indoor scene of multi-view continuous light field image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171124)