CN115342796A - Map construction method, system, device and medium based on visual laser fusion - Google Patents

Map construction method, system, device and medium based on visual laser fusion Download PDF

Info

Publication number
CN115342796A
CN115342796A (application CN202210867433.1A)
Authority
CN
China
Prior art keywords
data
orb
visual
feature points
optimized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210867433.1A
Other languages
Chinese (zh)
Inventor
曾光
佟景泉
殷玲
万敏华
黄健盛
黄杨灵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Communications Polytechnic
Original Assignee
Guangdong Communications Polytechnic
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Communications Polytechnic filed Critical Guangdong Communications Polytechnic
Priority to CN202210867433.1A priority Critical patent/CN115342796A/en
Publication of CN115342796A publication Critical patent/CN115342796A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/383 Indoor data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a map construction method, system, device and medium based on visual-laser fusion. The method comprises the following steps: determining point cloud feature points and ORB feature points according to first laser radar data and first visual data of a robot; determining depth information of the ORB feature points according to the point cloud feature points to obtain Space-ORB feature points, and performing pose estimation in a loosely coupled manner according to the Space-ORB feature points and the point cloud feature points to obtain initial pose data; mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data, so as to construct a three-dimensional scene map. The method improves the accuracy and completeness of three-dimensional scene map construction and can be widely applied to the technical field of robot navigation.

Description

Map construction method, system, device and medium based on visual laser fusion
Technical Field
The invention relates to the technical field of robot navigation, in particular to a map construction method, a system, a device and a medium based on visual laser fusion.
Background
At present, the navigation and positioning of indoor mobile robots mainly rely on SLAM (simultaneous localization and mapping) algorithms, which can be roughly divided into two categories: laser SLAM and visual SLAM. Laser SLAM collects two pieces of point cloud information (a series of scattered point sets with accurate angle and distance information) at different moments, matches the two point clouds, calculates the distance and attitude change of the relative movement of the laser radar, and reconstructs the scene. Laser SLAM requires the robot to carry a relatively expensive laser radar, and the environmental information obtained by the laser radar is not as rich as that obtained by a visual sensor. Visual SLAM is low in cost and obtains rich environmental information; however, relying on a visual SLAM system alone is not sufficient to perceive the environment with adequate positioning accuracy and stability, because such a system is easily affected by illumination changes, and its positioning accuracy becomes very low, or the system even becomes unusable, when image information is blurred by low texture or high movement speed.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the prior art.
Therefore, an object of the embodiments of the present invention is to provide a map construction method based on visual laser fusion, which improves the accuracy and comprehensiveness of map construction of a three-dimensional scene.
Another object of the embodiments of the present invention is to provide a map building system based on visual laser fusion.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the invention comprises the following steps:
in a first aspect, an embodiment of the present invention provides a map construction method based on visual laser fusion, including the following steps:
acquiring first laser radar data and first visual data of a robot, and determining point cloud characteristic points and ORB characteristic points according to the first laser radar data and the first visual data;
determining depth information of the ORB characteristic points according to the point cloud characteristic points to obtain Space-ORB characteristic points, and further performing pose estimation in a loose coupling mode according to the Space-ORB characteristic points and the point cloud characteristic points to obtain initial pose data;
mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data;
and constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
Further, in an embodiment of the present invention, the step of acquiring first lidar data and first visual data of the robot, and determining a point cloud feature point and an ORB feature point according to the first lidar data and the first visual data specifically includes:
acquiring second laser radar data of the robot through a laser radar, and acquiring second visual data of the robot through an industrial camera, wherein the laser radar and the industrial camera are both arranged on the robot;
performing space-time reference alignment on the second laser radar data and the second visual data to obtain first laser radar data and first visual data;
and performing feature extraction on the first laser radar data to obtain a plurality of point cloud feature points, and performing feature extraction on the first visual data to obtain a plurality of ORB feature points.
Further, in an embodiment of the present invention, the step of determining the depth information of the ORB feature points according to the point cloud feature points to obtain Space-ORB feature points, and further performing pose estimation according to the Space-ORB feature points and the point cloud feature points in a loose coupling manner to obtain initial pose data specifically includes:
performing interframe matching on the point cloud characteristic points and the ORB characteristic points respectively to enable fragment imaging tracks between frames of the first laser radar data to be in one-to-one correspondence, and enable fragment imaging tracks between frames of the first visual data to be in one-to-one correspondence;
determining depth information of the ORB characteristic points according to the point cloud characteristic points of the corresponding first laser radar data frame to obtain Space-ORB characteristic points;
and performing pose estimation according to the point cloud feature points to obtain first pose data, performing pose estimation according to the Space-ORB feature points to obtain second pose data, and determining initial pose data according to the first pose data and the second pose data.
Further, in an embodiment of the present invention, the step of performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data specifically includes:
determining key frame extraction frequency and optimizing window length according to the balance selection strategy;
dividing the feature points to be optimized into a plurality of categories of feature points to be optimized according to the classification optimization strategy;
performing feature point matching calculation on feature points to be optimized of the same category according to the key frame extraction frequency and the optimization window length to obtain an optimized feature point data frame;
and performing pose estimation according to the feature point data frame to obtain optimized pose data.
Further, in one embodiment of the present invention, the key frame extraction frequency is determined by the following formula:
(formula shown as an image in the original publication and not reproduced here)
wherein f_k represents the key frame extraction frequency, lb represents the base-2 logarithm operation, n represents the number of continuously trackable feature points, n_max represents the maximum value of n, n_min represents the minimum value of n, V represents the moving speed of the robot, and V_max represents the maximum value of V;
the optimization window length is determined by:
(formula shown as an image in the original publication and not reproduced here)
wherein l_w represents the optimization window length,
(intermediate quantity defined by a formula shown as an image in the original publication)
l_max represents a preset maximum value of the optimization window length, and l_min represents a preset minimum value of the optimization window length.
Further, in an embodiment of the present invention, the classification optimization policy is:
dividing a region near the robot into a near field region and a far field region according to a preset first threshold;
and dividing the feature points to be optimized according to the near field region and the far field region to obtain near-field edge laser feature points, far-field edge laser feature points, near-field plane laser feature points, far-field plane laser feature points, near-field Space-ORB feature points and far-field Space-ORB feature points.
Further, in an embodiment of the present invention, the step of constructing a three-dimensional scene map according to the first lidar data, the first visual data, and the optimized pose data specifically includes:
and performing data fusion and texture mapping according to the first laser radar data, the first visual data and the optimized pose data to obtain a three-dimensional scene map.
In a second aspect, an embodiment of the present invention provides a map building system based on visual laser fusion, including:
the system comprises a feature extraction module, a data acquisition module and a data processing module, wherein the feature extraction module is used for acquiring first laser radar data and first visual data of a robot and determining point cloud feature points and ORB feature points according to the first laser radar data and the first visual data;
the initial pose estimation module is used for determining the depth information of the ORB characteristic points according to the point cloud characteristic points to obtain Space-ORB characteristic points, and then performing pose estimation in a loose coupling mode according to the Space-ORB characteristic points and the point cloud characteristic points to obtain initial pose data;
the pose optimization module is used for mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data;
and the scene reconstruction module is used for constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
In a third aspect, an embodiment of the present invention provides a map building apparatus based on visual laser fusion, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a visual laser fusion based mapping method as described above.
In a fourth aspect, the present invention also provides a computer-readable storage medium, in which a processor-executable program is stored, and when the processor-executable program is executed by a processor, the processor-executable program is configured to perform a visual laser fusion-based map building method as described above.
Advantages and benefits of the present invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention:
according to the method and the device, depth information of ORB feature points is determined according to point cloud feature points to obtain Space-ORB feature points, pose estimation is carried out according to the Space-ORB feature points and the point cloud feature points in a loose coupling mode to obtain initial pose data, the point cloud feature points and the Space-ORB feature points are mapped into feature points to be optimized according to the initial pose data, global optimization is carried out through a balance selection strategy and a classification optimization strategy to obtain optimized pose data, and a three-dimensional scene map is constructed according to laser radar data, visual data and the optimized pose data. According to the embodiment of the invention, the laser radar data and the visual data are fused, so that the accuracy and comprehensiveness of the three-dimensional scene map construction are improved; in addition, global optimization is carried out through a balance selection strategy and a classification optimization strategy, the calculated amount in the global optimization process is reduced, and the calculation force requirement on the robot system is lowered.
Drawings
In order to more clearly illustrate the technical solution in the embodiment of the present invention, the drawings required to be used in the embodiment of the present invention are described below, and it should be understood that the drawings in the description below are only for convenience and clarity in describing some embodiments in the technical solution of the present invention, and it is obvious for those skilled in the art that other drawings may also be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating steps of a map construction method based on visual laser fusion according to an embodiment of the present invention;
fig. 2 is a block diagram of a map building system based on visual laser fusion according to an embodiment of the present invention;
fig. 3 is a block diagram of a map building apparatus based on visual laser fusion according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more. Where "first" and "second" appear, they are used only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or the precedence of the indicated technical features. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Referring to fig. 1, an embodiment of the present invention provides a map construction method based on visual laser fusion, which specifically includes the following steps:
s101, first laser radar data and first visual data of the robot are obtained, and point cloud characteristic points and ORB characteristic points are determined according to the first laser radar data and the first visual data.
Specifically, various sensors are used for acquiring environmental data around the robot, and feature extraction is performed on the data to obtain corresponding feature points. Step S101 specifically includes the following steps:
s1011, acquiring second laser radar data of the robot through the laser radar, and acquiring second visual data of the robot through the industrial camera, wherein the laser radar and the industrial camera are both arranged on the robot;
s1012, performing space-time reference alignment on the second laser radar data and the second visual data to obtain first laser radar data and first visual data;
and S1013, performing feature extraction on the first laser radar data to obtain a plurality of point cloud feature points, and performing feature extraction on the first visual data to obtain a plurality of ORB feature points.
Specifically, to describe the SLAM problem, the motion of the robot over a continuous period of time is first decomposed into motions at discrete moments, and the position information at these moments is recorded to form the motion track of the robot; the map is then regarded as being composed of a plurality of feature points, the sensors at each moment measure a part of these feature points to obtain observation data, and finally three-dimensional map reconstruction is carried out according to the observation data.
In the embodiment of the invention, the laser radar data and the visual data are aligned by time-space reference, so that the obtained data frame of the first laser radar data and the data frame of the first visual data are consistent in time and space coordinates, and subsequent fusion and optimization processing are facilitated. The point cloud feature points and ORB feature points may be extracted by using a technique known in the art, which is not the key point of the embodiment of the present invention and will not be described herein again.
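For readability, a minimal Python sketch of the feature extraction in step S1013 follows. The patent does not name specific extractors, so OpenCV's ORB detector and a LOAM-style curvature split into edge and planar points are used here purely as illustrative assumptions; the helper names and thresholds are hypothetical.

```python
# Illustrative only: one possible implementation of step S1013 (hypothetical helpers;
# the patent does not specify the extractors). ORB comes from OpenCV; the lidar
# edge/plane split uses a LOAM-style curvature heuristic as an assumption.
import cv2
import numpy as np

def extract_orb_features(image_gray, n_features=1000):
    """Detect ORB keypoints and descriptors in one aligned camera frame."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(image_gray, None)
    return keypoints, descriptors

def extract_cloud_features(scan_xyz, k=5, edge_thresh=0.5, plane_thresh=0.05):
    """Split one lidar scan into edge and planar feature points by local curvature."""
    scan_xyz = np.asarray(scan_xyz, dtype=float)
    edges, planes = [], []
    for i in range(k, len(scan_xyz) - k):
        # curvature: how far the point deviates from its local neighbourhood
        neighborhood = scan_xyz[i - k:i + k + 1]
        diff = neighborhood.sum(axis=0) - (2 * k + 1) * scan_xyz[i]
        curvature = np.linalg.norm(diff) / (np.linalg.norm(scan_xyz[i]) + 1e-6)
        if curvature > edge_thresh:
            edges.append(scan_xyz[i])
        elif curvature < plane_thresh:
            planes.append(scan_xyz[i])
    return np.array(edges), np.array(planes)
```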
S102, determining depth information of ORB feature points according to the point cloud feature points to obtain Space-ORB feature points, and estimating the pose according to the Space-ORB feature points and the point cloud feature points in a loose coupling mode to obtain initial pose data.
Specifically, depth information of the visual features is determined by using the laser point cloud to obtain Space-ORB feature points (spatial ORB feature points); the point cloud feature points and the Space-ORB feature points are input in a loosely coupled manner to a pose estimation unit preconfigured with a SLAM pose estimation algorithm, and the estimated initial pose data are obtained. Step S102 specifically includes the following steps:
s1021, performing interframe matching on the point cloud characteristic points and the ORB characteristic points respectively to enable the fragment imaging tracks between the frames of the first laser radar data to correspond one to one, and enable the fragment imaging tracks between the frames of the first visual data to correspond one to one;
s1022, determining depth information of each ORB characteristic point according to the corresponding point cloud characteristic points of the first laser radar data frame to obtain Space-ORB characteristic points;
and S1023, performing pose estimation according to the point cloud feature points to obtain first pose data, performing pose estimation according to the Space-ORB feature points to obtain second pose data, and determining initial pose data according to the first pose data and the second pose data.
Specifically, the point cloud feature points and the ORB feature points are respectively subjected to inter-frame matching; the depth information of the ORB feature points is obtained in a three-side projection mode by using the point cloud of the corresponding laser radar data frame, yielding the Space-ORB feature points; and the Space-ORB feature points and the point cloud feature points are input to the pose estimation unit in a loosely coupled manner, which outputs the initial pose data.
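The sketch below illustrates one way such depth values can be attached to ORB keypoints from the aligned point cloud. The patent only states that depth is obtained by projecting the corresponding laser frame; the nearest-projected-point lookup used here is an assumed stand-in, and K, T_cam_lidar and max_pixel_dist are hypothetical parameter names.

```python
# Illustrative sketch only: attach a depth value to each ORB keypoint by projecting
# the aligned lidar cloud into the image and taking the nearest projected point.
# The patent's "three-side projection" is not detailed; this lookup is an assumption.
import numpy as np

def to_space_orb(keypoints_uv, cloud_xyz_lidar, K, T_cam_lidar, max_pixel_dist=3.0):
    """Return (u, v, depth) tuples for keypoints with a nearby lidar projection."""
    cloud_xyz_lidar = np.asarray(cloud_xyz_lidar, dtype=float)
    cloud_h = np.hstack([cloud_xyz_lidar, np.ones((len(cloud_xyz_lidar), 1))])
    cloud_cam = (T_cam_lidar @ cloud_h.T).T[:, :3]      # lidar frame -> camera frame
    cloud_cam = cloud_cam[cloud_cam[:, 2] > 0.1]        # keep points in front of camera
    if len(cloud_cam) == 0:
        return []

    uv = (K @ cloud_cam.T).T                            # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]

    space_orb = []
    for (u, v) in keypoints_uv:
        d2 = np.sum((uv - np.array([u, v])) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= max_pixel_dist ** 2:
            space_orb.append((u, v, float(cloud_cam[j, 2])))   # depth along camera z
    return space_orb
```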
In the embodiment of the invention, the loose coupling refers to that the pose estimation is respectively carried out on the point cloud characteristic points and the Space-ORB characteristic points, and then the estimation results are fused.
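A minimal sketch of such a loose coupling is shown below: the lidar-only and Space-ORB-only pose estimates are computed separately and then blended. The patent does not specify the fusion rule, so the fixed-weight quaternion blend here is an assumption rather than the patent's method.

```python
# A minimal sketch of the loose coupling described above: the two pose estimates
# are computed separately and blended. The weighting scheme (fixed weights and a
# simple quaternion blend) is an assumption; the patent only states that the two
# estimates are fused.
import numpy as np
from scipy.spatial.transform import Rotation

def fuse_poses(t_lidar, R_lidar, t_visual, R_visual, w_lidar=0.6):
    """Blend two pose estimates given as translation vectors and rotation matrices."""
    w_visual = 1.0 - w_lidar
    t_fused = w_lidar * np.asarray(t_lidar) + w_visual * np.asarray(t_visual)

    # quaternion blend (reasonable when the two estimates differ only slightly)
    q1 = Rotation.from_matrix(R_lidar).as_quat()
    q2 = Rotation.from_matrix(R_visual).as_quat()
    if np.dot(q1, q2) < 0:                     # keep quaternions in the same hemisphere
        q2 = -q2
    q = w_lidar * q1 + w_visual * q2
    R_fused = Rotation.from_quat(q / np.linalg.norm(q)).as_matrix()
    return t_fused, R_fused
```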
S103, mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data.
Specifically, the obtained initial pose data are used to map the feature points to the world coordinate system; the feature points are input to a global optimization unit preconfigured with a SLAM pose estimation algorithm, and the initial pose data and the feature points are then further optimized through a balance selection strategy and a classification optimization strategy based on key frames and a sliding window, which reduces the computational load of the process while improving the positioning and mapping accuracy.
In the embodiment of the invention, global optimization means that loop detection is performed from a global perspective to judge whether the sensors (the laser radar and the industrial camera) have returned to a previously visited position; this judgement is then used as a constraint to perform global adjustment and optimization on the estimated initial pose data, further improving the accuracy of the navigation pose calculation.
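As a rough illustration of the loop detection mentioned above, the sketch below checks whether the current estimated position lies near an earlier key-frame position and, if so, marks that key frame as a loop-closure candidate. The actual detection criterion and the subsequent graph adjustment are not detailed in the patent; the distance test and its parameter values are assumptions.

```python
# Illustrative only: a simple "revisit check" standing in for the loop detection
# mentioned above. Every earlier key-frame position closer than `radius` metres
# (ignoring the most recent `min_gap` frames) becomes a loop-closure candidate that
# can be added as a constraint in the global adjustment. Criterion and values assumed.
import numpy as np

def detect_loop_candidates(current_pos, keyframe_positions, radius=2.0, min_gap=30):
    """Return indices of earlier key frames whose position lies within `radius`."""
    if len(keyframe_positions) <= min_gap:
        return []
    candidates = []
    for i, p in enumerate(keyframe_positions[:-min_gap]):
        if np.linalg.norm(np.asarray(p) - np.asarray(current_pos)) < radius:
            candidates.append(i)               # candidate loop-closure constraint
    return candidates
```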
Further as an optional implementation manner, the step of performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data specifically includes:
a1, determining key frame extraction frequency and optimizing window length according to a balance selection strategy;
a2, dividing the feature points to be optimized into a plurality of categories of feature points to be optimized according to a classification optimization strategy;
a3, performing feature point matching calculation on feature points to be optimized of the same category according to the key frame extraction frequency and the optimization window length to obtain an optimized feature point data frame;
and A4, performing pose estimation according to the feature point data frame to obtain optimized pose data.
As a further optional implementation, the key frame extraction frequency is determined by the following formula:
(formula shown as an image in the original publication and not reproduced here)
wherein f_k represents the key frame extraction frequency, lb represents the base-2 logarithm operation, n represents the number of continuously trackable feature points, n_max represents the maximum value of n, n_min represents the minimum value of n, V represents the moving speed of the robot, and V_max represents the maximum value of V;
the optimization window length is determined by:
(formula shown as an image in the original publication and not reproduced here)
wherein l_w represents the optimization window length,
(intermediate quantity defined by a formula shown as an image in the original publication)
l_max represents a preset maximum value of the optimization window length, and l_min represents a preset minimum value of the optimization window length.
Specifically, all data frames are first divided into three types: frames that can be used directly for stable optimization, frames of too low quality to be used, and frames that can be used but need to be processed to reduce the computational cost. The key frame extraction frequency f_k is:
(formula shown as an image in the original publication and not reproduced here)
wherein f_k represents the key frame extraction frequency, lb represents the base-2 logarithm operation, n represents the number of continuously trackable feature points, n_max represents the maximum value of n, n_min represents the minimum value of n, V represents the moving speed of the robot, and V_max represents the maximum value of V.
Then, to keep the amount of calculation as small as possible while meeting the optimization accuracy requirement, a balance selection formula is proposed to determine the optimization window length l_w, namely:
(formula shown as an image in the original publication and not reproduced here)
wherein l_w represents the optimization window length,
(intermediate quantity defined by a formula shown as an image in the original publication)
l_max represents a preset maximum value of the optimization window length, and l_min represents a preset minimum value of the optimization window length.
Finally, feature point matching calculation is performed between the current key frame and the latest key frame of the window; when the matching degree between them is smaller than a preset matching degree threshold, the current key frame is brought into the optimization window as the latest key frame, and this continues until the global optimization of all data frames is completed.
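A minimal sketch of this key-frame and sliding-window bookkeeping follows. The extraction frequency f_k and window length l_w are treated as already computed (their exact formulas appear only as images in the original publication), the matching degree is taken here as the ratio of matched ORB descriptors, and the frame object with a descriptors attribute is hypothetical.

```python
# A sketch of the key-frame / sliding-window bookkeeping described above. f_k and
# l_w are assumed to be computed elsewhere; the matching degree is taken as the
# ratio of matched ORB descriptors; the `frame` object (with a `descriptors`
# attribute) is hypothetical.
import cv2
from collections import deque

class SlidingWindow:
    def __init__(self, l_w, match_threshold=0.4):
        self.window = deque()                   # key frames currently in the window
        self.l_w = l_w                          # optimization window length
        self.match_threshold = match_threshold  # preset matching degree threshold
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def matching_degree(self, desc_a, desc_b):
        if desc_a is None or desc_b is None or len(desc_a) == 0 or len(desc_b) == 0:
            return 0.0
        matches = self.matcher.match(desc_a, desc_b)
        return len(matches) / max(len(desc_a), len(desc_b))

    def try_insert(self, frame):
        """Take `frame` as the newest key frame when it differs enough from the last one."""
        if self.window and self.matching_degree(
                frame.descriptors, self.window[-1].descriptors) >= self.match_threshold:
            return False                        # too similar to the latest key frame
        self.window.append(frame)
        while len(self.window) > self.l_w:
            self.window.popleft()               # keep only the most recent l_w key frames
        return True
```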
As a further optional implementation, the classification optimization strategy is:
b1, dividing a region near the robot into a near field region and a far field region according to a preset first threshold;
and B2, dividing the characteristic points to be optimized according to the near field area and the far field area to obtain near field edge laser characteristic points, far field edge laser characteristic points, near field plane laser characteristic points, far field plane laser characteristic points, near field Space-ORB characteristic points and far field Space-ORB characteristic points.
Specifically, to realize more efficient global optimization, the selection of feature points is also very important. The feature points should have good observability, contain no abnormal values, and be uniformly distributed in three-dimensional space, so as to facilitate the subsequent mapping work. Because the maximum detection range of the laser radar is limited, the area near the robot is divided into a far-field area and a near-field area; the point cloud feature points are divided into edge laser feature points and plane laser feature points, and the Space-ORB feature points, edge laser feature points and plane laser feature points are each divided into near-field and far-field sets, yielding near-field edge laser feature points, far-field edge laser feature points, near-field plane laser feature points, far-field plane laser feature points, near-field Space-ORB feature points and far-field Space-ORB feature points.
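The classification described above can be sketched as follows, assuming every feature point is expressed as a 3-D coordinate relative to the sensor origin. The 15 m near/far threshold is an arbitrary illustrative value standing in for the patent's preset first threshold.

```python
# A minimal sketch of the classification optimization strategy, assuming every
# feature point is a 3-D coordinate relative to the sensor origin. The 15 m
# near/far threshold is an arbitrary illustrative stand-in for the preset first
# threshold.
import numpy as np

def classify_points(edge_pts, plane_pts, space_orb_pts, near_far_threshold=15.0):
    """Return the six categories named in the patent, split by range and type."""
    def split(points):
        points = np.asarray(points, dtype=float).reshape(-1, 3)
        dist = np.linalg.norm(points, axis=1)      # range from the robot
        return points[dist < near_far_threshold], points[dist >= near_far_threshold]

    near_edge, far_edge = split(edge_pts)
    near_plane, far_plane = split(plane_pts)
    near_orb, far_orb = split(space_orb_pts)
    return {
        "near_field_edge": near_edge,     "far_field_edge": far_edge,
        "near_field_plane": near_plane,   "far_field_plane": far_plane,
        "near_field_space_orb": near_orb, "far_field_space_orb": far_orb,
    }
```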
And S104, constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
Further as an optional implementation manner, the step of constructing the three-dimensional scene map according to the first lidar data, the first visual data and the optimized pose data specifically includes:
and performing data fusion and texture mapping according to the first laser radar data, the first visual data and the optimized pose data to obtain a three-dimensional scene map.
The method steps of the embodiments of the present invention are described above. As can be appreciated, the embodiment of the invention fuses the laser radar data and the visual data, improving the accuracy and comprehensiveness of the three-dimensional scene map construction; in addition, global optimization carried out through the balance selection strategy and the classification optimization strategy reduces the amount of calculation in the global optimization process and lowers the computing power requirement on the robot system.
Based on the respective advantages of vision and laser, and aiming at the problems that the environmental information perceived by a laser radar alone is not rich and that vision is easily affected by illumination, which degrades positioning accuracy, the embodiment of the invention provides a scene map construction method fusing laser and vision. By fusing the complementary advantages of laser and vision, the overall performance of the pose estimation algorithm is improved and a three-dimensional scene map with richer and more accurate information is constructed, providing effective and direct help for search and rescue, indoor positioning and autonomous cruising in unknown complex environments (especially indoor closed or semi-closed spaces).
Referring to fig. 2, an embodiment of the present invention provides a map building system based on visual laser fusion, including:
the system comprises a characteristic extraction module, a point cloud characteristic point acquisition module and an ORB characteristic point acquisition module, wherein the characteristic extraction module is used for acquiring first laser radar data and first visual data of a robot and determining the point cloud characteristic point and the ORB characteristic point according to the first laser radar data and the first visual data;
the initial pose estimation module is used for determining depth information of the ORB characteristic points according to the point cloud characteristic points to obtain Space-ORB characteristic points, and then performing pose estimation in a loose coupling mode according to the Space-ORB characteristic points and the point cloud characteristic points to obtain initial pose data;
the pose optimization module is used for mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data;
and the scene reconstruction module is used for constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.
Referring to fig. 3, an embodiment of the present invention provides a map building apparatus based on visual laser fusion, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the mapping method based on visual laser fusion.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium, in which a processor-executable program is stored, and when the processor-executable program is executed by a processor, the processor-executable program is configured to perform the above-mentioned map building method based on visual laser fusion.
The computer-readable storage medium of the embodiment of the invention can execute the map construction method based on visual laser fusion provided by the embodiment of the method of the invention, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise indicated to the contrary, one or more of the functions and/or features described above may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer given the nature, function, and interrelationships of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer readable medium could even be paper or another suitable medium upon which the above described program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A map construction method based on visual laser fusion is characterized by comprising the following steps:
acquiring first laser radar data and first visual data of a robot, and determining point cloud characteristic points and ORB characteristic points according to the first laser radar data and the first visual data;
determining depth information of the ORB characteristic points according to the point cloud characteristic points to obtain Space-ORB characteristic points, and further performing pose estimation in a loose coupling mode according to the Space-ORB characteristic points and the point cloud characteristic points to obtain initial pose data;
mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data;
and constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
2. The map building method based on visual laser fusion according to claim 1, wherein the step of obtaining first laser radar data and first visual data of the robot and determining point cloud feature points and ORB feature points according to the first laser radar data and the first visual data specifically comprises:
acquiring second laser radar data of the robot through a laser radar, and acquiring second visual data of the robot through an industrial camera, wherein the laser radar and the industrial camera are both arranged on the robot;
performing space-time reference alignment on the second laser radar data and the second visual data to obtain first laser radar data and first visual data;
and performing feature extraction on the first laser radar data to obtain a plurality of point cloud feature points, and performing feature extraction on the first visual data to obtain a plurality of ORB feature points.
3. The map construction method based on visual laser fusion of claim 1, wherein the step of determining the depth information of the ORB feature points according to the point cloud feature points to obtain Space-ORB feature points, and further performing pose estimation according to the Space-ORB feature points and the point cloud feature points in a loose coupling manner to obtain initial pose data specifically comprises:
performing interframe matching on the point cloud characteristic points and the ORB characteristic points respectively to enable fragment imaging tracks between frames of the first laser radar data to correspond one to one, and enable fragment imaging tracks between frames of the first visual data to correspond one to one;
determining depth information of the ORB characteristic points according to the point cloud characteristic points of the corresponding first laser radar data frame to obtain Space-ORB characteristic points;
and performing pose estimation according to the point cloud feature points to obtain first pose data, performing pose estimation according to the Space-ORB feature points to obtain second pose data, and determining initial pose data according to the first pose data and the second pose data.
4. The map construction method based on visual laser fusion according to claim 1, wherein the step of performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a classification optimization strategy to obtain optimized pose data specifically comprises:
determining key frame extraction frequency and optimization window length according to the balance selection strategy;
dividing the feature points to be optimized into a plurality of categories of feature points to be optimized according to the classification optimization strategy;
performing feature point matching calculation on feature points to be optimized of the same category according to the key frame extraction frequency and the optimization window length to obtain an optimized feature point data frame;
and performing pose estimation according to the feature point data frame to obtain optimized pose data.
5. The mapping method based on visual laser fusion of claim 4, wherein the key frame extraction frequency is determined by the following formula:
(formula shown as an image in the original publication and not reproduced here)
wherein f_k represents the key frame extraction frequency, lb represents the base-2 logarithm operation, n represents the number of continuously trackable feature points, n_max represents the maximum value of n, n_min represents the minimum value of n, V represents the moving speed of the robot, and V_max represents the maximum value of V;
the optimization window length is determined by:
(formula shown as an image in the original publication and not reproduced here)
wherein l_w represents the optimization window length,
(intermediate quantity defined by a formula shown as an image in the original publication)
l_max represents a preset maximum value of the optimization window length, and l_min represents a preset minimum value of the optimization window length.
6. The mapping method based on visual laser fusion of claim 4, wherein the classification optimization strategy is:
dividing a region near the robot into a near field region and a far field region according to a preset first threshold;
dividing the feature points to be optimized according to the near field region and the far field region to obtain near-field edge laser feature points, far-field edge laser feature points, near-field plane laser feature points, far-field plane laser feature points, near-field Space-ORB feature points and far-field Space-ORB feature points.
7. The map construction method based on visual laser fusion according to any one of claims 1 to 6, wherein the step of constructing the three-dimensional scene map according to the first lidar data, the first visual data and the optimized pose data specifically comprises:
and performing data fusion and texture mapping according to the first laser radar data, the first visual data and the optimized pose data to obtain a three-dimensional scene map.
8. A map construction system based on visual laser fusion, comprising:
the system comprises a feature extraction module, a point cloud feature point and an ORB feature point, wherein the feature extraction module is used for acquiring first laser radar data and first visual data of a robot and determining the point cloud feature point and the ORB feature point according to the first laser radar data and the first visual data;
the initial pose estimation module is used for determining the depth information of the ORB characteristic points according to the point cloud characteristic points to obtain Space-ORB characteristic points, and then performing pose estimation in a loose coupling mode according to the Space-ORB characteristic points and the point cloud characteristic points to obtain initial pose data;
the pose optimization module is used for mapping the point cloud feature points and the Space-ORB feature points to a world coordinate system according to the initial pose data to obtain a plurality of feature points to be optimized, and performing global optimization on the feature points to be optimized and the initial pose data through a preset balance selection strategy and a preset classification optimization strategy to obtain optimized pose data;
and the scene reconstruction module is used for constructing a three-dimensional scene map according to the first laser radar data, the first visual data and the optimized pose data.
9. A map building device based on visual laser fusion is characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, cause the at least one processor to implement a visual laser fusion based mapping method according to any of claims 1 to 7.
10. A computer readable storage medium in which a processor executable program is stored, wherein the processor executable program when executed by a processor is adapted to perform a method of mapping based on visual laser fusion according to any of claims 1 to 7.
CN202210867433.1A 2022-07-22 2022-07-22 Map construction method, system, device and medium based on visual laser fusion Withdrawn CN115342796A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210867433.1A CN115342796A (en) 2022-07-22 2022-07-22 Map construction method, system, device and medium based on visual laser fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210867433.1A CN115342796A (en) 2022-07-22 2022-07-22 Map construction method, system, device and medium based on visual laser fusion

Publications (1)

Publication Number Publication Date
CN115342796A (en) 2022-11-15

Family

ID=83950025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210867433.1A Withdrawn CN115342796A (en) 2022-07-22 2022-07-22 Map construction method, system, device and medium based on visual laser fusion

Country Status (1)

Country Link
CN (1) CN115342796A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115713530A (en) * 2022-12-19 2023-02-24 北京卓翼智能科技有限公司 Instant positioning and map construction system
CN116202511A (en) * 2023-05-06 2023-06-02 煤炭科学研究总院有限公司 Method and device for determining pose of mobile equipment under long roadway ultra-wideband one-dimensional constraint
CN116503566A (en) * 2023-06-25 2023-07-28 深圳市其域创新科技有限公司 Three-dimensional modeling method and device, electronic equipment and storage medium
CN116503566B (en) * 2023-06-25 2024-03-29 深圳市其域创新科技有限公司 Three-dimensional modeling method and device, electronic equipment and storage medium
CN116958842A (en) * 2023-09-19 2023-10-27 北京格镭信息科技有限公司 Underground pipeline inspection method and device based on laser-vision fusion
CN116958842B (en) * 2023-09-19 2024-01-05 北京格镭信息科技有限公司 Underground pipeline inspection method and device based on laser-vision fusion

Similar Documents

Publication Publication Date Title
CN115342796A (en) Map construction method, system, device and medium based on visual laser fusion
CN105825173B (en) General road and lane detection system and method
CN107909612B (en) Method and system for visual instant positioning and mapping based on 3D point cloud
US11482014B2 (en) 3D auto-labeling with structural and physical constraints
US20190387209A1 (en) Deep Virtual Stereo Odometry
CN101996420B (en) Information processing device, information processing method and program
Wang et al. Monocular 3d object detection with depth from motion
CN114424250A (en) Structural modeling
US11721065B2 (en) Monocular 3D vehicle modeling and auto-labeling using semantic keypoints
WO2018205803A1 (en) Pose estimation method and apparatus
KR20180056685A (en) System and method for non-obstacle area detection
US20210304496A1 (en) Method and system for automatically processing point cloud based on reinforcement learning
CN108122245B (en) Target behavior description method and device and monitoring equipment
CN114445265A (en) Equal-rectangular projection stereo matching two-stage depth estimation machine learning algorithm and spherical distortion layer
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
CN112819864A (en) Driving state detection method and device and storage medium
CN110853085A (en) Semantic SLAM-based mapping method and device and electronic equipment
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN114387576A (en) Lane line identification method, system, medium, device and information processing terminal
Tosi et al. How nerfs and 3d gaussian splatting are reshaping slam: a survey
CN116188550A (en) Self-supervision depth vision odometer based on geometric constraint
CN113409340A (en) Semantic segmentation model training method, semantic segmentation device and electronic equipment
CN116189150B (en) Monocular 3D target detection method, device, equipment and medium based on fusion output
CN112800822A (en) 3D automatic tagging with structural and physical constraints

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221115

WW01 Invention patent application withdrawn after publication