CN114119864A - Positioning method and device based on three-dimensional reconstruction and point cloud matching


Info

Publication number
CN114119864A
Authority
CN
China
Prior art keywords
dimensional
point cloud
scene
target
dense point
Prior art date
Legal status
Pending
Application number
CN202111320058.0A
Other languages
Chinese (zh)
Inventor
董延超
龚佩
Current Assignee
Tongji University
Original Assignee
Tongji University
Priority date
Filing date
Publication date
Application filed by Tongji University
Priority to CN202111320058.0A
Publication of CN114119864A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a positioning method and device based on three-dimensional reconstruction and point cloud matching. The method comprises: acquiring a plurality of scene images containing a target object, shot by a camera from different angles, the scene containing at least one two-dimensional code of known size; creating a target 3D model according to the actual size of the target object; performing three-dimensional reconstruction on the scene images to obtain a dense point cloud of the scene and the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images; detecting the corner points of the two-dimensional code in the images, obtaining the three-dimensional point coordinates of the corner points in the dense point cloud according to the correspondence, and updating the scale of the dense point cloud using the actual size of the two-dimensional code; segmenting the dense point cloud of the target object from the dense point cloud of the scene, and calculating the relative pose between the target 3D model and the dense point cloud of the target object through a registration algorithm; and transforming the target 3D model according to the relative pose so that it replaces the dense point cloud of the target object, and extracting the coordinates of any point of the target 3D model at that moment as the coordinates of the real object. Compared with the prior art, the method has the advantages of high precision and strong flexibility.

Description

Positioning method and device based on three-dimensional reconstruction and point cloud matching
Technical Field
The invention relates to the field of computer vision, in particular to a positioning method and a positioning device based on three-dimensional reconstruction and point cloud matching.
Background
In many cases we need to know, within a static scene, the accurate pose information of an object, i.e. its coordinates and orientation in space, or the relative poses of multiple target objects: for example, the exact relative pose of two buildings in a building group, or the position of a piece of work equipment in a hazardous scene with radiation. These scenes have a common point: limited by the environment, the target object of interest is difficult to measure manually, so accurate position information for the target object cannot be obtained.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a positioning method and a positioning device based on three-dimensional reconstruction and point cloud matching.
The purpose of the invention can be achieved by the following technical solution:
a positioning method based on three-dimensional reconstruction and point cloud matching comprises the following steps:
S1, acquiring a plurality of scene images containing the target object, shot by a camera from different angles, wherein the scene contains at least one two-dimensional code of known size;
S2, creating a target 3D model according to the actual size of the target object;
S3, performing three-dimensional reconstruction on the scene images to obtain a dense point cloud of the scene and the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images;
S4, detecting the corner points of the two-dimensional code in the images, obtaining the three-dimensional point coordinates of the corner points in the dense point cloud according to the correspondence, and updating the scale of the dense point cloud using the actual size of the two-dimensional code;
S5, segmenting the dense point cloud of the target object from the dense point cloud of the scene, and calculating the relative pose of the target 3D model and the dense point cloud of the target object through a registration algorithm;
and S6, performing pose transformation on the target 3D model according to the relative pose so that it replaces the dense point cloud of the target object, and extracting the coordinates of the target 3D model at that moment as the real object coordinates.
Further, the three-dimensional reconstruction comprises the steps of:
SS1, extracting the feature point coordinates of all scene images, and matching the feature points among all scene images;
SS2, calculating the camera pose by using an epipolar geometry algorithm and calculating the coordinates of the three-dimensional points corresponding to the feature points according to the camera pose;
SS3, constructing a reprojection error equation, and optimizing the camera poses and the three-dimensional point coordinates by bundle adjustment;
and SS4, inputting the optimized camera poses and the full images, obtaining the dense point cloud of the scene, and obtaining the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images.
Further, the three-dimensional point information includes a three-dimensional point ID, coordinates, a color, an error, a corresponding image ID, and a corresponding pixel point ID.
Further, the categories of the feature points in step SS1 include FAST, SIFT, ORB, and Harris.
Further, after the feature points are matched between all the images, mismatched pairs are removed using random sample consensus (RANSAC).
Further, the step S4 of obtaining the three-dimensional coordinates of the corner points of the two-dimensional code in the dense point cloud specifically includes the following steps:
a1, preprocessing a scene image;
a2, detecting the two-dimensional code in the scene image using a two-dimensional code recognition tool to obtain the corner point coordinates of the two-dimensional code, and obtaining the three-dimensional point coordinates of the two-dimensional code in the dense point cloud according to the correspondence, obtained in step S3, between the three-dimensional point information in the dense point cloud and the pixel points in the scene image.
Further, the preprocessing step includes: and carrying out binarization and filtering on the image, and extracting a quadrilateral area in the image.
Further, the relative pose calculation method in step S5 is as follows:
and constructing an error function, iterating continuously with the iterative closest point (ICP) algorithm, and outputting the relative pose at the moment the error function reaches its minimum.
Further, the error function f(R, T) is calculated as follows:
$$f(R, T) = \frac{1}{m}\sum_{i=1}^{m}\left\| q_i - (R p_i + T) \right\|^2$$
In the formula, $p_i$ denotes the coordinates of the points in the target 3D model, $q_i$ denotes the three-dimensional point coordinates of the dense point cloud of the target object, $R$ and $T$ denote the relative pose (rotation and translation), and $m$ denotes the number of three-dimensional points.
A positioning device based on three-dimensional reconstruction and point cloud matching, characterized by comprising a memory and a processor; the memory is configured to store a computer program; and the processor, when executing the computer program, is configured to implement the following method:
S1, acquiring a plurality of scene images containing the target object, shot by a camera from different angles, wherein the scene contains at least one two-dimensional code of known size;
S2, creating a target 3D model according to the actual size of the target object;
S3, performing three-dimensional reconstruction on the scene images to obtain a dense point cloud of the scene and the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images;
S4, detecting the corner points of the two-dimensional code in the images, obtaining the three-dimensional point coordinates of the corner points in the dense point cloud according to the correspondence, and updating the scale of the dense point cloud using the actual size of the two-dimensional code;
S5, segmenting the dense point cloud of the target object from the dense point cloud of the scene, and calculating the relative pose of the target 3D model and the dense point cloud of the target object through a registration algorithm;
and S6, performing pose transformation on the target 3D model according to the relative pose so that it replaces the dense point cloud of the target object, and extracting the coordinates of the target 3D model at that moment as the real object coordinates.
1. The invention obtains the positioning information of the target object by performing three-dimensional reconstruction and point cloud matching on the target object in the scene, which solves the problem that positioning information is difficult to measure manually; any object can be measured, accuracy is improved, and flexibility is strong. At the same time, combining the target 3D model with the two-dimensional code further optimizes the point cloud and guarantees accuracy.
2. The invention calculates the relative pose by constructing an error function and solving it iteratively, which improves accuracy.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic view of a scene image according to the present invention.
FIG. 3 is a schematic diagram of a 3D model of an object of the present invention.
FIG. 4 is a schematic diagram of a dense point cloud of the present invention.
FIG. 5 is a schematic diagram of three-dimensional point information stored in the present invention.
Fig. 6 is a schematic diagram of two-dimensional code detection according to the present invention.
FIG. 7 is a schematic diagram of a dense point cloud of a target object segmented by the present invention.
FIG. 8 is a schematic diagram of the matching of the target 3D model and the dense point cloud of the target object according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
The embodiment provides a positioning method based on three-dimensional reconstruction and point cloud matching, as shown in fig. 1, specifically including the following steps:
In step S1, a camera is used to collect scene images; the camera may be monocular or binocular. Since the three-dimensional reconstruction step used by the invention only requires an unordered set of images as input, a monocular camera is preferred: it is simple to operate and fast to process. The scene is shot by the camera from different angles and a plurality of scene images are acquired. At least twenty images containing the target object are needed, covering as many viewing angles of the target as possible, so that sufficiently accurate and rich point cloud information of the target is obtained; at the same time, some images must contain the complete two-dimensional code. Adjacent scene images should overlap by no less than 30%. The scene contains at least one two-dimensional code of known size; the two-dimensional code and the target to be positioned must not occlude each other, both should be clearly observable within the scene, and the images must fully capture the complete two-dimensional code.
In step S2, a target 3D model is created with 3D modelling software according to the actual size of the target object, as shown in fig. 3.
In step S3, three-dimensional reconstruction is performed on the scene images, which specifically comprises the following steps:
firstly, extracting feature points of all images, wherein the types of the extracted feature points can be FAST, SIFT, ORB, Harris corner points and the like, matching the feature points among the images, and removing wrong matched pairs by using a random sampling consistency method; and calculating the pose of the camera by using an antipodal geometric algorithm and solving the three-dimensional point coordinates corresponding to the feature points.
In order to optimize the three-dimensional point coordinates and the camera poses, a reprojection error equation is constructed according to the pinhole camera model; its specific expression is as follows:
$$g(C, X) = \sum_{i=1}^{n}\sum_{j=1}^{m}\omega_{ij}\left\| q_{ij} - P(C_i, X_j) \right\|^2$$
In the formula, $g(C, X)$ is the reprojection error to be minimized, its parameters being all three-dimensional points $X$ to be optimized and all camera poses $C$; $n$ denotes the number of selected scene images; $m$ denotes the number of three-dimensional points; $q_{ij}$ is the pixel coordinate of the feature point in the $i$-th image corresponding to the $j$-th three-dimensional point; and $P(C_i, X_j)$ denotes the projection coordinates of the $j$-th three-dimensional point $X_j$ in the $i$-th image under the $i$-th camera pose $C_i$. When $X_j$ has a projection in the $i$-th image, $\omega_{ij} = 1$; otherwise $\omega_{ij} = 0$. The camera poses and the three-dimensional point cloud are optimized by bundle adjustment. The pixel correspondences between scene images are then determined by epipolar-line search and block matching, the depth of each pixel in the scene images is estimated by triangulation, and a dense point cloud of the scene is generated; at the same time, the correspondence between all three-dimensional point information of the dense point cloud and the pixel points in the scene images is stored. The stored scene dense point cloud is shown in fig. 4. The stored three-dimensional point information includes the three-dimensional point ID, coordinates, color, error, corresponding scene image ID, and pixel point ID, as shown in fig. 5.
In step S4, the three-dimensional point coordinates of the corner points of the two-dimensional code in the dense point cloud are obtained, and the scale of the dense point cloud is updated using the actual size of the two-dimensional code. This specifically comprises the following steps:
firstly, selecting any scene image containing a complete two-dimensional code, and carrying out preprocessing such as binarization, filtering and the like on the scene image; a quadrilateral region in the scene image is extracted, a two-dimensional code recognition tool such as QRCODE is used to detect the two-dimensional code, and the vertex coordinates of the two-dimensional code are output, as shown in fig. 6. And according to the corresponding relation between the three-dimensional point information obtained in the step S3 and the pixel points in the scene image, obtaining the three-dimensional coordinates of the two-dimensional code in the dense point cloud. And updating the dense point cloud scale by combining the three-dimensional coordinates with the actual size of the two-dimensional code.
In step S5, the scene point cloud is segmented using the CloudCompare tool: the dense point cloud of the target object is segmented out of the dense point cloud of the scene, with the segmentation result shown in fig. 7. After segmentation, to make the subsequent matching result more accurate, the pose of the target 3D model is manually adjusted so that the model and the dense point cloud of the target object have roughly the same position and orientation; this serves as the initial pose for point cloud matching. An error function is then constructed, the expression of the error function f(R, T) being:
$$f(R, T) = \frac{1}{m}\sum_{i=1}^{m}\left\| q_i - (R p_i + T) \right\|^2$$
In the formula, $p_i$ denotes the coordinates of the points in the target 3D model, $q_i$ denotes the three-dimensional point coordinates of the dense point cloud of the target object, $R$ and $T$ denote the relative pose (rotation and translation), and $m$ denotes the number of three-dimensional points. The rotation matrix $R$ and translation vector $T$ are computed by continuous iteration of the iterative closest point (ICP) algorithm until the error function is minimized, which yields the pose relation between the target 3D model and the dense point cloud of the target object. The matching result of the 3D model and the dense point cloud is shown in fig. 8.
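A hedged sketch of this registration step using Open3D's point-to-point ICP; the correspondence distance max_dist and the variable names are assumptions of the example, and init stands for the manually adjusted initial pose described above:

import open3d as o3d

def register(model_pcd, target_pcd, init, max_dist=0.02):
    # Point-to-point ICP, started from the manually adjusted initial pose.
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, target_pcd, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix holding R and T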
In step S6, the target 3D model is pose-transformed within the dense point cloud of the scene according to the obtained relative pose matrices R and T so that it replaces the dense point cloud of the target object. Since the 3D model carries complete, readable three-dimensional data of the target object, the coordinates of the target 3D model at that moment are extracted as the real object coordinates.
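As a short illustrative sketch, the recovered pose can be applied to the model points with a homogeneous transform; here transformation is the 4x4 matrix assembled from R and T, and the Nx3 layout of model_points is an assumption of the example:

import numpy as np

def place_model(model_points, transformation):
    # Apply the 4x4 pose to every model point and return scene coordinates.
    homo = np.hstack([model_points, np.ones((len(model_points), 1))])
    return (transformation @ homo.T).T[:, :3]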
The method obtains the pose information of the target object, solving the problem that such information is difficult to measure manually; any object can be measured, accuracy is improved, and flexibility is strong. At the same time, combining the target 3D model with the two-dimensional code further optimizes the point cloud and guarantees accuracy.
This embodiment also provides a positioning device based on three-dimensional reconstruction and point cloud matching, comprising a memory and a processor: the memory stores a computer program, and the processor, when executing the computer program in the memory, implements the positioning method based on three-dimensional reconstruction and point cloud matching.
This embodiment further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the positioning method based on three-dimensional reconstruction and point cloud matching described in the embodiments of the invention. Any combination of one or more computer-readable media may be employed. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The foregoing is a detailed description of preferred embodiments of the invention. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, any technical solution that a person skilled in the art can obtain through logical analysis, reasoning, or limited experimentation based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (10)

1. A positioning method based on three-dimensional reconstruction and point cloud matching is characterized by comprising the following steps:
S1, acquiring a plurality of scene images containing the target object, shot by a camera from different angles, wherein the scene contains at least one two-dimensional code of known size;
S2, creating a target 3D model according to the actual size of the target object;
S3, performing three-dimensional reconstruction on the scene images to obtain a dense point cloud of the scene and the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images;
S4, detecting the corner points of the two-dimensional code in the images, obtaining the three-dimensional point coordinates of the corner points in the dense point cloud according to the correspondence, and updating the scale of the dense point cloud using the actual size of the two-dimensional code;
S5, segmenting the dense point cloud of the target object from the dense point cloud of the scene, and calculating the relative pose of the target 3D model and the dense point cloud of the target object through a registration algorithm;
and S6, performing pose transformation on the target 3D model according to the relative pose so that it replaces the dense point cloud of the target object, and extracting the coordinates of the target 3D model at that moment as the real object coordinates.
2. The method for positioning based on three-dimensional reconstruction and point cloud matching according to claim 1, wherein the three-dimensional reconstruction comprises the following steps:
SS1, extracting the feature point coordinates of all scene images, and matching the feature points among all scene images;
SS2, calculating the camera pose by using an epipolar geometry algorithm and calculating the coordinates of the three-dimensional points corresponding to the feature points according to the camera pose;
SS3, constructing a reprojection error equation, and optimizing the camera poses and the three-dimensional point coordinates by bundle adjustment;
and SS4, inputting the optimized camera poses and the full images, obtaining the dense point cloud of the scene, and obtaining the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images.
3. The method as claimed in claim 1, wherein the three-dimensional point information includes three-dimensional point ID, coordinates, color, error, corresponding image ID and corresponding pixel point ID.
4. The method as claimed in claim 2, wherein the classes of the feature points in step SS1 include FAST, SIFT, ORB and Harris.
5. The method of claim 2, wherein after the feature points of all images are matched, mismatched pairs are removed using random sample consensus (RANSAC).
6. The positioning method based on three-dimensional reconstruction and point cloud matching according to claim 1, wherein the step S4 of obtaining three-dimensional coordinates of corner points of the two-dimensional code in the dense point cloud comprises the following specific steps:
a1, preprocessing a scene image;
a2, detecting the two-dimensional code in the scene image using a two-dimensional code recognition tool to obtain the corner point coordinates of the two-dimensional code, and obtaining the three-dimensional point coordinates of the two-dimensional code in the dense point cloud according to the correspondence, obtained in step S3, between the three-dimensional point information in the dense point cloud and the pixel points in the scene image.
7. The method of claim 6, wherein the preprocessing comprises: and carrying out binarization and filtering on the image, and extracting a quadrilateral area in the image.
8. The positioning method based on three-dimensional reconstruction and point cloud matching according to claim 1, wherein the relative pose calculation method in step S5 is as follows:
and constructing an error function, iterating continuously with the iterative closest point (ICP) algorithm, and outputting the relative pose at the moment the error function reaches its minimum.
9. The method of claim 8, wherein the error function f(R, T) is calculated as follows:
$$f(R, T) = \frac{1}{m}\sum_{i=1}^{m}\left\| q_i - (R p_i + T) \right\|^2$$
In the formula, $p_i$ denotes the coordinates of the points in the target 3D model, $q_i$ denotes the three-dimensional point coordinates of the dense point cloud of the target object, $R$ and $T$ denote the relative pose (rotation and translation), and $m$ denotes the number of three-dimensional points.
10. A positioning device based on three-dimensional reconstruction and point cloud matching, characterized by comprising a memory and a processor; the memory is configured to store a computer program; and the processor, when executing the computer program, is configured to implement the following method:
S1, acquiring a plurality of scene images containing the target object, shot by a camera from different angles, wherein the scene contains at least one two-dimensional code of known size;
S2, creating a target 3D model according to the actual size of the target object;
S3, performing three-dimensional reconstruction on the scene images to obtain a dense point cloud of the scene and the correspondence between the three-dimensional point information in the dense point cloud and the pixel points in the scene images;
S4, detecting the corner points of the two-dimensional code in the images, obtaining the three-dimensional point coordinates of the corner points in the dense point cloud according to the correspondence, and updating the scale of the dense point cloud using the actual size of the two-dimensional code;
S5, segmenting the dense point cloud of the target object from the dense point cloud of the scene, and calculating the relative pose of the target 3D model and the dense point cloud of the target object;
and S6, performing pose transformation on the target 3D model according to the relative pose so that it replaces the dense point cloud of the target object, and extracting the coordinates of the target 3D model at that moment as the real object coordinates.

Priority Applications (1)

Application Number: CN202111320058.0A
Priority Date: 2021-11-09
Filing Date: 2021-11-09
Title: Positioning method and device based on three-dimensional reconstruction and point cloud matching


Publications (1)

Publication Number: CN114119864A (en)
Publication Date: 2022-03-01

Family

ID=80377746

Family Applications (1)

Application Number: CN202111320058.0A
Title: Positioning method and device based on three-dimensional reconstruction and point cloud matching
Priority Date: 2021-11-09
Filing Date: 2021-11-09
Status: Pending

Country Status (1)

Country: CN
Publication: CN114119864A (en)


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115330966A (en) * 2022-08-15 2022-11-11 北京城市网邻信息技术有限公司 Method, system, device and storage medium for generating house type graph
CN115187713B (en) * 2022-09-08 2023-01-13 山东信通电子股份有限公司 Method, equipment and medium for accelerating point cloud point selection operation
CN115187713A (en) * 2022-09-08 2022-10-14 山东信通电子股份有限公司 Method, device and medium for accelerating point cloud point selection operation
CN115359193A (en) * 2022-10-19 2022-11-18 南京航空航天大学 Rapid semi-dense three-dimensional reconstruction method based on binocular fisheye camera
CN115546420A (en) * 2022-11-28 2022-12-30 苏州魔视智能科技有限公司 Vehicle body model construction method and device, electronic equipment and storage medium
CN115546420B (en) * 2022-11-28 2023-05-16 苏州魔视智能科技有限公司 Vehicle body model construction method and device, electronic equipment and storage medium
CN116704111A (en) * 2022-12-08 2023-09-05 荣耀终端有限公司 Image processing method and apparatus
CN116310099A (en) * 2023-03-01 2023-06-23 南京工业大学 Three-dimensional reconstruction method of steel bridge component based on multi-view images
CN117036758A (en) * 2023-10-10 2023-11-10 湖北星纪魅族集团有限公司 Two-dimensional image target matching method, electronic device and storage medium
CN117036758B (en) * 2023-10-10 2024-01-12 湖北星纪魅族集团有限公司 Two-dimensional image target matching method, electronic device and storage medium
CN117197246A (en) * 2023-11-07 2023-12-08 江苏云幕智造科技有限公司 Human-shaped robot position confirmation method based on three-dimensional point cloud and binocular vision
CN117197246B (en) * 2023-11-07 2024-01-26 江苏云幕智造科技有限公司 Human-shaped robot position confirmation method based on three-dimensional point cloud and binocular vision
CN117911465A (en) * 2024-03-20 2024-04-19 深圳市森歌数据技术有限公司 Natural protection area dense point cloud generation method based on binocular stereo matching
CN117911465B (en) * 2024-03-20 2024-05-17 深圳市森歌数据技术有限公司 Natural protection area dense point cloud generation method based on binocular stereo matching

Similar Documents

Publication Publication Date Title
CN114119864A (en) Positioning method and device based on three-dimensional reconstruction and point cloud matching
CN110135455B (en) Image matching method, device and computer readable storage medium
CN108369741B (en) Method and system for registration data
CN109993793B (en) Visual positioning method and device
CN111340797A (en) Laser radar and binocular camera data fusion detection method and system
CN110176032B (en) Three-dimensional reconstruction method and device
CN111735439B (en) Map construction method, map construction device and computer-readable storage medium
CN109145969B (en) Method, device, equipment and medium for processing point cloud data of three-dimensional object
JP4709668B2 (en) 3D object recognition system
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
CN112686950B (en) Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
CN112697044B (en) Static rigid object vision measurement method based on unmanned aerial vehicle platform
CN112348890B (en) Space positioning method, device and computer readable storage medium
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
CN108447092B (en) Method and device for visually positioning marker
JP6411188B2 (en) Stereo matching device, stereo matching program, and stereo matching method
CN114273826A (en) Automatic identification method for welding position of large-sized workpiece to be welded
CN114782529A (en) High-precision positioning method and system for line grabbing point of live working robot and storage medium
Kim et al. Target-free automatic registration of point clouds
CN116921932A (en) Welding track recognition method, device, equipment and storage medium
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
Wan et al. A performance comparison of feature detectors for planetary rover mapping and localization
CN115222789A (en) Training method, device and equipment for instance depth estimation model
CN113177903B (en) Fusion method, system and equipment of foreground point cloud and background point cloud
Thisse et al. 3D Dense & Scaled Reconstruction Pipeline with Smartphone Acquisition

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination