CN111337037B - Mobile lidar SLAM mapping device and data processing method - Google Patents


Info

Publication number
CN111337037B
Authority
CN
China
Prior art keywords
data
acquisition
unit
calculation unit
information
Prior art date
Legal status
Active
Application number
CN202010422532.XA
Other languages
Chinese (zh)
Other versions
CN111337037A (en)
Inventor
张衡
Other inventors have requested that their names not be disclosed
Current Assignee
Beijing Digital Green Earth Technology Co.,Ltd.
Original Assignee
Beijing Greenvalley Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Greenvalley Technology Co ltd filed Critical Beijing Greenvalley Technology Co ltd
Priority to CN202010422532.XA
Publication of CN111337037A
Application granted
Publication of CN111337037B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a mobile lidar SLAM mapping device and a data processing method. The device comprises a satellite signal receiving unit, which receives satellite positioning signals and is connected to the data acquisition sensing unit, the IMU inertial measurement unit, and the mapping acquisition and computing unit to distribute positioning and time synchronization information; a data acquisition sensing unit, which acquires image information within a given spatio-temporal region and sends it to the mapping acquisition and computing unit; an IMU inertial measurement unit, which is connected to the mapping acquisition and computing unit, sends its data to it, and obtains precise GPS time information from the satellite signal receiving unit; and a mapping acquisition and computing unit, which provides various data interfaces and acquires the raw data of the data acquisition sensing unit in a multithreaded manner, realizing multi-sensor-fusion SLAM mapping. The invention achieves a modular design and provides a highly integrated acquisition system that is convenient to carry, disassemble, assemble, maintain, and replace.

Description

Mobile lidar SLAM mapping device and data processing method
Technical Field
The invention relates to lidar navigation and positioning processing, and in particular to a mobile lidar SLAM mapping device and a data processing method.
Background
SLAM (Simultaneous Localization And Mapping) refers to real-time positioning and map construction: by processing the surrounding-environment data acquired by its sensors, a moving system feeds back its position within an unknown environment in real time while simultaneously building a map of its surroundings. With the development of lidar technology, mobile SLAM mapping systems have shown broad application prospects in forestry, traffic, mining, construction, and other fields.
Although SLAM technology is applied in many scenarios, little has been disclosed about the mapping and data processing of the multi-sensor acquisition it relies on. How to stably fuse all the sensors of a mobile lidar system together, while keeping the system portable and easy to control and its acquired data stable and reliable, so as to modularize the SLAM mapping device, is the problem to be solved at present.
Chinese patent application No. 201610799842.7 discloses a multi-robot collaborative mapping and positioning method for large-scale environments, comprising a single-robot laser SLAM algorithm based on a visual-detection closed loop, a multi-robot pose constraint estimation algorithm, and a multi-robot map fusion algorithm; the single-robot laser SLAM algorithm uses the vision sensor to assist the laser sensor, achieving a more robust SLAM algorithm.
Chinese patent application No. 201910378821.1 discloses an indoor personnel autonomous positioning method based on SLAM and gait-IMU fusion. Indoor personnel wear and start a positioning device and move freely in the indoor environment; a SLAM sensor transmits its measurements to a computer, which uses a SLAM algorithm to map the indoor environment and position the personnel. Meanwhile, the IMU transmits its measurements to the computer, which calculates the personnel's movement trajectory through a gait-based IMU positioning algorithm; the SLAM positioning result and the gait-IMU positioning result are then fused to position the personnel. The gait-IMU positioning technique, which depends little on the environment, serves as an auxiliary positioning means in challenging environments, so the combined system has good navigation and positioning performance in normal environments and strong robustness in special indoor environments such as a fire scene.
Although the above technical solutions all apply SLAM technology, they cannot solve the aforementioned technical problems.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a mobile lidar SLAM mapping device and a data processing method.
The purpose of the invention is realized by the following technical scheme. A mobile lidar SLAM mapping device comprises a satellite signal receiving unit, a data acquisition sensing unit, an IMU inertial measurement unit, and a mapping acquisition and computing unit:
satellite signal receiving unit: receives satellite positioning signals and is connected to the data acquisition sensing unit, the IMU inertial measurement unit, and the mapping acquisition and computing unit to distribute positioning and time synchronization information;
data acquisition sensing unit: acquires image information within a given spatio-temporal region and sends the acquired image information to the mapping acquisition and computing unit;
IMU inertial measurement unit: connected to the mapping acquisition and computing unit, to which it sends its data, and obtains precise GPS time information sent by the satellite signal receiving unit;
mapping acquisition and computing unit: provides various data interfaces to acquire the raw data of the data acquisition sensing unit in a multithreaded manner, realizing multi-sensor-fusion SLAM mapping.
The data acquisition sensing unit comprises a high-precision panoramic camera and a lidar scanner. The high-precision panoramic camera sends the captured high-precision image information to the mapping acquisition and computing unit while keeping time synchronization with the other sensing devices in the data acquisition sensing unit. The lidar scanner is connected to the satellite signal receiving unit to obtain precise GPS time, and sends GPS positioning information and time information to the mapping acquisition and computing unit.
The multi-sensor-fusion SLAM mapping method comprises the following steps:
solving for the initial positioning information P0 of the moving system at each moment from the data received by the IMU inertial measurement unit and the satellite signal receiving unit and the calibration information between them, using the satellite positioning information, the inertial navigation system, and the Kalman filtering principle;
generating high-precision moving-system positioning information P1 by visual SLAM from the initial positioning information P0 and the image data acquired by the high-precision panoramic camera;
generating higher-precision moving-system positioning information P2 by laser SLAM from the positioning information P1 and the point cloud data acquired by the lidar scanner;
performing joint optimization with the positioning information provided by each sensing device to obtain the positioning information of the current moving system, then re-solving the point positions at each moment from the solved high-precision trajectory and the original point cloud, finally obtaining a high-precision map.
The satellite signal receiving unit, the high-precision camera, the lidar scanner, and the IMU inertial measurement unit are arranged in the device from top to bottom in that order; the mapping acquisition and computing unit is arranged inside the carrying system of the portable backpack.
The data acquisition sensing unit further comprises a tilted laser scanner, arranged outside the backpack and connected to the mapping acquisition and computing unit and the satellite signal receiving unit; it provides auxiliary positioning and ensures point cloud completeness in scenes where the lidar scanner cannot scan side or ground point clouds.
A mobile smart device communicates with the mapping acquisition and computing unit over a WIFI or wired network, accessing and controlling it and controlling data acquisition, data copying, and real-time point cloud display.
A data processing method using the above mobile lidar SLAM mapping device comprises the following:
data compression and conversion: the point cloud, trajectory point, and image data generated each time by the mapping acquisition and computing unit are compressed in a preset compression mode and placed into a transmission queue;
data restoration and display: the mobile smart client establishes a communication connection with the mapping acquisition and computing unit, restores the received compressed two-dimensional data into three-dimensional point clouds and images, and displays them in real time via WebGL.
The preset compression mode comprises: after downsampling, the three-dimensional coordinate points of each point cloud frame are rotation-translated and converted into polar coordinate form for storage.
The mapping acquisition and computing unit monitors each mobile smart client in real time; the mobile smart clients send instruction data to make each sensing device in the data acquisition sensing unit start data acquisition, processing, and copying, and the feedback results of the instruction processing are sent back to the mobile smart clients.
A storage medium stores computer control program instructions which, when executed by a processor, implement the steps of the data processing method.
A terminal device comprises a processor and a memory storing computer control program instructions; when the processor executes the computer control program instructions, the steps of the data processing method are implemented.
The invention has the following beneficial effects. The mobile lidar mapping equipment adopts a modular design, providing a highly integrated acquisition system in which the acquisition sensing devices can be flexibly customized; carrying, disassembly, assembly, maintenance, and replacement are convenient; data acquisition and processing are stable, the failure rate is low, and parts are easy to replace.
For field acquisition, no additional handheld equipment needs to be carried: after the device is connected to a mobile phone or tablet, data acquisition starts with a click of the start button, and stop is clicked after acquisition ends.
Meanwhile, the data processing method of the mobile lidar SLAM mapping device ensures that a smart client can receive the preliminarily collected point cloud data, so the quality of the acquisition job can be judged, while the complete data can be quickly exported via a USB flash drive. The scheme is widely applicable: the method is invoked as a web page and can run on the user's own device without additional software, and with graphics-card acceleration the rendering speed meets real-time requirements.
Drawings
FIG. 1 is a schematic diagram of the apparatus of the present invention;
FIG. 2 is a flow chart of the method of the present invention;
FIG. 3 is a data processing block diagram of the apparatus of the present invention.
Detailed Description
In order to make the purpose, technical solution and advantages of the embodiments of the present application clearer, the technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, but not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the mobile lidar SLAM mapping apparatus provided by the embodiment of the present invention includes a satellite signal receiving unit, a data acquisition sensing unit, an IMU inertial measurement unit (a device measuring the three-axis attitude angles and acceleration of an object), and a mapping acquisition and computing unit; the data acquisition sensing unit comprises a high-precision camera, a lidar scanner, and a tilted laser scanner.
Satellite signal receiving unit: receives satellite positioning signals and is connected to the data acquisition sensing unit, the IMU inertial measurement unit, and the mapping acquisition and computing unit to distribute positioning and time synchronization information.
Furthermore, the satellite signal receiving unit can be a customized GNSS (Global Navigation Satellite System) board and antenna arranged on the uppermost layer of the device and mainly used for receiving the positioning signals transmitted by satellites; the GNSS board is connected to the lidar scanner, the IMU inertial measurement unit, and the mapping acquisition and computing unit to distribute positioning and time synchronization information.
Furthermore, the high-precision camera is arranged on the layer below the GNSS board and captures high-precision image information. GPS time can be obtained through the camera's GPS antenna to keep time synchronization with the other data acquisition sensing devices, or the camera's own IMU can be synchronized in time with the IMU inertial measurement unit of the invention.
Furthermore, the lidar scanner is arranged on the layer below the high-precision camera, connected to the mapping acquisition and computing unit, and connected to the GNSS board to obtain precise GPS time; it is mainly used for positioning.
Furthermore, the tilted laser scanner is arranged outside the backpack and connected to the mapping acquisition and computing unit and to the GNSS board to obtain precise GPS time; it provides auxiliary positioning and ensures point cloud completeness in scenes where the lidar scanner cannot scan side or ground point clouds.
Further, the IMU inertial measurement unit is arranged on the layer below the lidar scanner, connected to the mapping acquisition and computing unit, sends its data to the mapping acquisition and computing unit, and obtains the precise GPS time information sent by the satellite signal receiving unit.
Furthermore, the backpack carrying system is arranged below the IMU inertial measurement unit, so that the data acquisition and transmission components above it are carried stably and portability is ensured; a power supply, the mapping acquisition and computing unit, and other components are arranged inside it.
Further, the mapping acquisition and computing unit provides various data interfaces to acquire the raw data of the data acquisition sensing unit in a multithreaded manner, realizing multi-sensor-fusion SLAM mapping.
Specifically, the mapping acquisition and computing unit provides USB, Ethernet, and HDMI interfaces. The USB interface is used for copying data; the Ethernet interface connects to an external tablet by wire, or a mobile phone connects conveniently through the emitted WIFI signal; the HDMI interface is used to debug the device. The mapping acquisition and computing unit acquires the raw data of each sensor in real time in a multithreaded manner, ensuring stable data transmission and storage, and performs real-time processing or post-processing; the processed results can be exported to a USB flash drive through the USB interface.
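A minimal sketch of this multithreaded acquisition pattern follows. The sensor interface and writer callback are hypothetical; the patent does not disclose its internal implementation.

```python
import queue
import threading

def start_acquisition(sensors, writer):
    """Sketch: one reader thread per sensor pushes raw packets into a
    shared bounded queue; a single writer thread drains it, keeping
    transmission and storage stable. `sensors` is an assumed iterable of
    objects with a name attribute and a blocking read() method."""
    buf = queue.Queue(maxsize=1024)

    def reader(sensor):
        while True:
            buf.put((sensor.name, sensor.read()))   # blocks if buffer full

    def drain():
        while True:
            name, packet = buf.get()
            writer(name, packet)                    # store or forward packet
            buf.task_done()

    for s in sensors:
        threading.Thread(target=reader, args=(s,), daemon=True).start()
    threading.Thread(target=drain, daemon=True).start()
    return buf
```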
Further, the multi-sensor-fusion SLAM mapping method comprises the following steps:
S1, solving for the initial positioning information P0 of the moving system at each moment t from the data received by the IMU and the GNSS and the calibration information between them, using the GNSS satellite positioning information, the inertial navigation system, and the Kalman filtering principle; this yields an initial value of the positioning information and provides initial data for the subsequent steps.
S2, generating high-precision moving-system positioning information P1 by visual SLAM from the initial positioning information P0 and the image data collected by the high-precision panoramic camera; with the initial value given by the preceding positioning information, matching points can be extracted more effectively and RANSAC point screening is performed.
Specifically, key frames are extracted from every two adjacent image frames, and a feature point set is extracted, including ORB feature points, Harris corners, and the like. From the initial positioning information obtained in the first step, and with the initial poses of the current two key frames known, projecting the estimated positions of the feature points in three-dimensional space greatly improves RANSAC matching efficiency; effective points are extracted and abnormal points removed, yielding the point sets P^t and P^(t−1), and the spatial coordinate position of each feature point in a unified world coordinate system is calculated. The transformation relation of these feature matching point pairs can be expressed as follows:
∀i: P_i^t = R·P_i^(t−1) + T
where R is the pose rotation matrix between the two adjacent frames, T is the translation vector, and P_i^t and P_i^(t−1) are the matched feature point pair from time t−1 to time t. The poses R and T are solved by minimizing the reprojection error:
(R, T) = argmin_{R,T} Σ_{i=1}^{N} ‖ P_i^t − (R·P_i^(t−1) + T) ‖²
where P^t denotes the full feature point set of the image frame at time t and P^(t−1) that of the image frame at time t−1; R is the rotation matrix of the carrier, T is the translation vector of the carrier, and N is the number of feature point pairs.
The rotation and translation matrices of adjacent key frames are thus calculated, and the positioning information P1 of all key frames is obtained in turn.
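As an illustration of the S2 pipeline, the sketch below extracts ORB features, screens the matches with RANSAC, and recovers the relative R and T of two adjacent keyframes. It assumes OpenCV, grayscale keyframes, and a pinhole intrinsic matrix K; essential-matrix recovery stands in for the patent's projection-guided 3D matching.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_cur, K):
    """Sketch of adjacent-keyframe matching: ORB features, brute-force
    Hamming matching, RANSAC outlier removal, then relative R, T."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_cur, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC screens out the abnormal (outlier) matches
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, T, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, T
```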
S3, generating higher-precision moving-system positioning information P2 by laser SLAM from the positioning information P1 and the point cloud data acquired by the lidar scanner.
Specifically, for the current frame of laser point cloud data Pk, the point features Fk1, line features Fk2, and surface features Fk3 can be calculated according to the following formula:
f = ‖ Σ_{j∈p} (X_i − X_j) ‖ / ( |p| · ‖X_i‖ )
where i is a point in Pk, X_i is the coordinate of point i, p is the set of neighborhood points of i, j is a point in p, X_j is the coordinate of point j, and f is the feature value. Thresholds M_1, M_2, M_3, and M_4 are predefined: points whose feature value f is smaller than M_1 belong to the point feature F_k1, points with f larger than M_2 and smaller than M_3 belong to the line feature F_k2, and points with f larger than M_4 belong to the surface feature F_k3.
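A sketch of this classification follows; the exact form of f is assumed (a LOAM-style smoothness term built from the neighborhood p, matching the variable definitions above), and the threshold values are illustrative, not the patent's.

```python
import numpy as np

def classify_features(points, k=5, M1=0.01, M2=0.05, M3=0.2, M4=0.5):
    """Classify scan points into point / line / surface features by the
    per-point feature value f and thresholds M_1..M_4."""
    n = len(points)
    Fk1, Fk2, Fk3 = [], [], []
    for i in range(k, n - k):
        Xi = points[i]
        p = np.vstack([points[i - k:i], points[i + 1:i + k + 1]])  # neighborhood
        f = np.linalg.norm(np.sum(Xi - p, axis=0)) / (len(p) * np.linalg.norm(Xi))
        if f < M1:
            Fk1.append(i)          # point feature F_k1
        elif M2 < f < M3:
            Fk2.append(i)          # line feature F_k2
        elif f > M4:
            Fk3.append(i)          # surface feature F_k3
    return Fk1, Fk2, Fk3
```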
According to the initial positioning information P1, the feature data of each frame are transformed into the coordinate system corresponding to P1. Two adjacent frames of point cloud data P_t and P_(t−1) are acquired, a neighborhood search is performed in F_t for all matching pairs of F_(t−1), and all candidate feature matching pairs are determined. The rotation and translation parameters R and T of the two adjacent point cloud frames are then solved from the matching pairs by the least-squares method. Specifically, the parameters can be solved by the following formula:
(R, T) = argmin_{R,T} Σ_i ‖ Y_i − (R·X_i + T) ‖²
where Y denotes the features extracted from the adjacent subsequent data frame, X denotes the features extracted from the adjacent previous data frame, R is the rotation matrix of the carrier, and T is the translation vector of the carrier. The matching pairs are then optimized according to the obtained result, and the feature points F_t' are recalculated. Feature point pairs are searched again between F_t' and the point cloud P_(t−1), and new rotation and translation matrices R and T are recalculated and updated. Finally, the rotation and translation position information R_(t−1) and T_(t−1) of the two adjacent frames is obtained, and the current optimal feature matching pairs are added to the matching database K.
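For matched pairs with known correspondences, this least-squares problem has a standard closed form; the patent does not name its solver, so the Kabsch/Umeyama SVD sketch below is an assumed stand-in.

```python
import numpy as np

def solve_rt(X, Y):
    """Closed-form least-squares rotation R and translation T with
    Y ≈ R·X + T, via SVD of the cross-covariance (Kabsch/Umeyama).
    X, Y: (N, 3) matched feature coordinates from the previous and
    subsequent frames."""
    cx, cy = X.mean(axis=0), Y.mean(axis=0)
    H = (X - cx).T @ (Y - cy)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation, det = +1
    T = cy - R @ cx
    return R, T
```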
Finally, the positioning information P2 based on the laser point cloud information is obtained through the transformation matrices of adjacent frames.
The advantage of this step is a better feature point extraction method: line features and surface features are represented more effectively, the two point cloud frames can be matched effectively in the subsequent feature matching step, and matching precision is improved.
S4, performing joint optimization with the positioning information provided by each sensing device to obtain the positioning information of the current moving system.
specifically, the previously obtained data sets such as the laser point characteristic Z ^ laser, the characteristic point set Z ^ img of the image key frame, the GNSS information Z ^ GNSS, the IMU pre-integration constraint information Z ^ IMU and the like are jointly optimized, and the state set of each moment is obtained
S_k = { x_1, x_2, …, x_k }
and the maximum a posteriori of the joint probability distribution is solved to obtain the optimal state quantity at each time point
S_k* = argmax_{S_k} P(S_k | Z^laser, Z^img, Z^gnss, Z^imu)
where P(S_k | Z^laser, Z^img, Z^gnss, Z^imu) ∝ Π_t P(z_t^laser | x_t) · P(z_t^img | x_t) · P(z_t^gnss | x_t) · P(z_t^imu | x_t, x_(t−1)).
further, a high-precision track of the current moment is obtained; and re-resolving the point position of each moment according to the solved high-precision track and the original point cloud to finally obtain a high-precision map.
In the technical scheme, the main advantage is that the laser point cloud characteristics and the image characteristics are added into the joint optimization, and compared with other methods, only one of the laser point cloud characteristics and the image characteristics is added, the scheme provides a set of solution for the problem of poor stability caused by single characteristics. Meanwhile, the method based on the joint optimization can acquire the point cloud represented by the geodetic coordinate system by fusing the positioning information of the GNSS, so that the mapping requirement based on the geodetic coordinate can be met. By combining the pre-integral constraint information of the IMU and the feature points extracted from the image key frame, the stability of the mapping process can be improved, and the problems that the error is large and the mapping precision is low when the feature points pass through narrow corridors and the like are low are solved.
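Conceptually, the joint optimization can be sketched as a weighted nonlinear least-squares over residuals stacked from the four observation sets. The residual callables, weights, and use of scipy below are assumptions; a production system would typically use a factor-graph solver such as g2o or GTSAM.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_optimize(x0, r_laser, r_img, r_gnss, r_imu,
                   w=(1.0, 1.0, 1.0, 1.0)):
    """MAP estimation solved as weighted least-squares over residuals
    from Z^laser, Z^img, Z^gnss, and Z^imu. Each r_* callable maps the
    stacked state vector x to a residual array (hypothetical interface)."""
    def residuals(x):
        return np.concatenate([
            w[0] * r_laser(x),   # laser point feature constraints
            w[1] * r_img(x),     # image keyframe feature constraints
            w[2] * r_gnss(x),    # GNSS position constraints
            w[3] * r_imu(x),     # IMU pre-integration constraints
        ])
    sol = least_squares(residuals, x0)   # Levenberg-Marquardt style refine
    return sol.x                         # optimal state quantities S_k*
```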
Furthermore, a mobile smart device communicates with the mapping acquisition and computing unit over a WIFI or wired network, accessing and controlling it and controlling data acquisition, data copying, and real-time point cloud display.
Specifically, the mobile smart device can start/stop acquisition, restart/shut down the acquisition processing unit, copy data, check the collected sensor information, and update the system firmware and related settings. These functions essentially guarantee one-click completion of the acquisition work, so a single field operator suffices. Thanks to the convenience of the modular design, and compared with previous mapping devices, the equipment can be customized more flexibly according to customer requirements, including the GNSS equipment, the panoramic camera, the tilted laser scanner, and the like.
As shown in fig. 2 and fig. 3, a data processing method provided by an embodiment of the present invention uses the above mobile lidar SLAM mapping apparatus and comprises:
S1, multi-sensor data acquisition: point cloud and image data are collected in real time by the three-dimensional lidar scanner and/or the tilted three-dimensional laser scanner and the high-precision panoramic camera and sent to the mapping acquisition and computing unit; meanwhile, the GNSS board and the IMU inertial measurement unit also send the GNSS data and the IMU data to the mapping acquisition and computing unit.
S2, data compression and conversion: the point cloud, trajectory point, and image data generated each time by the mapping acquisition and computing unit are compressed in a preset compression mode and placed into a transmission queue.
Specifically, each time the mapping acquisition and computing unit generates a frame of point cloud, trajectory points, and image data from the received data, it compresses them. The compression method: after downsampling, the three-dimensional coordinate points of each point cloud frame are rotation-translated and converted into polar coordinate form for storage, specifically as in the following formula:
P = polar(R·D + T)
where P is the data in polar coordinate form, D is the point cloud data in the original format, and R and T are the rotation-translation transformation that moves the original point cloud into a coordinate system whose origin is the center point; polar(·) denotes the Cartesian-to-spherical-polar conversion.
Specifically, spherical polar coordinates are adopted: the coordinates of each point of the three-dimensional image are converted into spherical polar coordinates, which are then mapped onto the corresponding two-dimensional image, thereby compressing the data.
First, the three-dimensional image data are converted to spherical polar coordinates. The coordinate origin of the spherical polar coordinate system is determined first, with the center of the bounding box of the point cloud surface selected as the origin. For any point p = [x, y, z], its spherical polar coordinates [r, Φ, θ] are obtained through the conversion relation between Cartesian and spherical polar coordinates, and thus the spherical polar coordinates of all points of the three-dimensional image are obtained. Here r is the distance from the spatial point p to the origin O; Φ is the azimuth angle, measured in the xy-plane from the positive x-axis, ranging from 0 to 2π; θ is the polar angle, measured from the positive z-axis, ranging from 0 to π. Scanning all points on the point cloud surface yields the intervals of r, Φ, and θ.
Then, after the spherical polar coordinates are obtained, they are mapped onto the two-dimensional image, realizing the conversion of the three-dimensional image from spherical polar coordinates to a two-dimensional image. Specifically, the Cartesian product of the azimuth Φ interval and the polar angle θ interval is mapped onto an image of resolution M × N, and r is mapped to G, where M is the width of the two-dimensional image, N is its height, and G is the gray scale; this completes the compression conversion from the spherical polar coordinates of the three-dimensional image to the two-dimensional image.
Owing to the laser scanning structure, each point cloud frame lies within a certain angle and distance range, so storing it this way greatly improves data consistency and allows a higher compression ratio in the later step.
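A sketch of this compression is shown below: the frame is recentred on its bounding-box centre, converted to spherical polar coordinates, and written into an M × N grayscale range image. The resolution and quantization choices are illustrative assumptions.

```python
import numpy as np

def compress_frame(points, M=512, N=256):
    """Convert a (downsampled) point cloud frame, shape (n, 3), into a
    2D grayscale range image: (phi, theta) index the pixel, r maps to G."""
    center = (points.min(axis=0) + points.max(axis=0)) / 2.0
    x, y, z = (points - center).T                 # shift origin to centre
    r = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(y, x) % (2 * np.pi)          # azimuth in [0, 2*pi)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1, 1))  # polar angle
    u = np.minimum((phi / (2 * np.pi) * M).astype(int), M - 1)
    v = np.minimum((theta / np.pi * N).astype(int), N - 1)
    r_max = max(r.max(), 1e-9)
    img = np.zeros((N, M), dtype=np.uint8)
    img[v, u] = (255 * r / r_max).astype(np.uint8)  # r quantized to gray G
    return img, center, r_max
```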
S3, data restoration and display: the mobile smart client establishes a communication connection with the mapping acquisition and computing unit, restores the received compressed two-dimensional data into three-dimensional point clouds and images, and displays them in real time via WebGL.
Further, the mobile smart client (mobile phone client) requests data and establishes the connection. Each batch of received compressed two-dimensional data is restored to a three-dimensional point cloud and images, and WebGL is then used to invoke the phone's graphics card to render the three-dimensional point cloud and image data in real time, ensuring real-time display for the device. When displaying, the phone client shows each frame together with the previous data on the phone or tablet, presenting the mapping result.
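On the client side, the restoration step is the inverse mapping; a matching sketch (same assumed layout as the compression example above) is:

```python
import numpy as np

def restore_frame(img, center, r_max):
    """Turn a received 2D range image back into a 3D point cloud for
    rendering; inverse of compress_frame."""
    N, M = img.shape
    v, u = np.nonzero(img)                        # occupied pixels only
    r = img[v, u].astype(float) / 255.0 * r_max
    phi = (u + 0.5) / M * 2 * np.pi               # pixel-centre azimuth
    theta = (v + 0.5) / N * np.pi                 # pixel-centre polar angle
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.column_stack([x, y, z]) + center    # undo the recentring
```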
S4, the mapping acquisition and computing unit monitors each mobile smart client in real time; the mobile smart clients send instruction data to make each sensing device in the data acquisition sensing unit start data acquisition, processing, and copying, and the feedback results of the instruction processing are sent back to the mobile smart clients.
Another embodiment of the invention includes a storage medium having computer control program instructions stored thereon which, when executed by a processor, implement the steps of the data processing method.
Yet another embodiment of the present invention includes a terminal device comprising a processor and a memory storing computer control program instructions; when the processor executes the computer control program instructions, the steps of the data processing method are implemented.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A mobile lidar SLAM mapping device, characterized in that it comprises a satellite signal receiving unit, a data acquisition sensing unit, an IMU inertial measurement unit, and a mapping acquisition and computing unit:
satellite signal receiving unit: receives satellite positioning signals and is connected to the data acquisition sensing unit, the IMU inertial measurement unit, and the mapping acquisition and computing unit to distribute positioning and time synchronization information;
data acquisition sensing unit: acquires image information within a given spatio-temporal region and sends the acquired image information to the mapping acquisition and computing unit;
IMU inertial measurement unit: connected to the mapping acquisition and computing unit, to which it sends its data, and obtains precise GPS time information sent by the satellite signal receiving unit;
mapping acquisition and computing unit: provides various data interfaces to acquire the raw data of the data acquisition sensing unit in a multithreaded manner, realizing multi-sensor-fusion SLAM mapping;
the multi-sensor-fusion SLAM mapping method comprises the following steps:
solving for the initial positioning information P0 of the moving system at each moment from the data received by the IMU inertial measurement unit and the satellite signal receiving unit and the calibration information between them, using the satellite positioning information, the inertial navigation system, and the Kalman filtering principle; obtaining an initial value of the positioning information and providing initial data for the subsequent steps;
generating high-precision moving-system positioning information P1 by visual SLAM from the initial positioning information P0 and the image data acquired by the high-precision panoramic camera;
specifically, key frames are first extracted from every two adjacent image frames, and a feature point set is extracted, comprising ORB feature points and Harris corners; from the initial positioning information obtained in the first step, and with the initial position information of the current two key frames known, projecting the estimated positions of the feature points in three-dimensional space greatly improves RANSAC matching efficiency; effective points are extracted and abnormal points removed, yielding the point sets P^t and P^(t−1); the spatial coordinate position of each feature point in a unified world coordinate system is calculated, the rotation and translation matrices of adjacent key frames are calculated, and the positioning information P1 of all key frames is obtained in turn;
generating higher-precision moving-system positioning information P2 by laser SLAM from the positioning information P1 and the point cloud data acquired by the lidar scanner;
specifically, for the current frame of laser point cloud data Pk, according to the formula
f = ‖ Σ_{j∈p} (X_i − X_j) ‖ / ( |p| · ‖X_i‖ )
the point features Fk1, line features Fk2, and surface features Fk3 are calculated; according to the initial positioning information P1, the feature data of each frame are transformed into the coordinate system corresponding to P1; two adjacent frames of point cloud data P_t and P_(t−1) are acquired, a neighborhood search is performed in F_t for all matching pairs of F_(t−1), and all candidate feature matching pairs are determined; the rotation-translation parameters R and T of the two adjacent point cloud frames are obtained from the matching pairs by the least-squares method, the matching pairs are optimized according to the obtained results, the feature points F_t' are recalculated, feature point pairs are searched again between the feature points of F_t' and the point cloud P_(t−1), new rotation-translation matrices R and T are recalculated and updated, the rotation-translation position information R_(t−1) and T_(t−1) of the two adjacent frames is finally obtained, the current optimal feature matching pairs are added to the matching database K, and the positioning information P2 based on the laser point cloud information is obtained through the transformation matrices of the adjacent frames;
performing joint optimization with the positioning information provided by each sensing device to obtain the positioning information of the current moving system, then re-solving the point positions at each moment from the solved high-precision trajectory and the original point cloud, finally obtaining a high-precision map;
specifically, the previously obtained laser point features Z^laser, feature point sets Z^img of the image key frames, GNSS information Z^gnss, and IMU pre-integration constraint information Z^imu data sets are jointly optimized to obtain the state set S_k at each moment, and the maximum a posteriori of the joint probability distribution is solved to obtain the optimal state quantity at each time point
S_k* = argmax_{S_k} P(S_k | Z^laser, Z^img, Z^gnss, Z^imu)
thereby obtaining the high-precision trajectory at the current moment; the point positions at each moment are re-solved from the solved high-precision trajectory and the original point cloud, finally yielding a high-precision map.
2. The mobile lidar SLAM mapping device of claim 1, characterized in that: the data acquisition sensing unit comprises a high-precision panoramic camera and a lidar scanner; the high-precision panoramic camera sends the captured high-precision image information to the mapping acquisition and computing unit while keeping time synchronization with the other sensing devices in the data acquisition sensing unit; the lidar scanner is connected to the satellite signal receiving unit to obtain precise GPS time, and sends GPS positioning information and time information to the mapping acquisition and computing unit.
3. The mobile lidar SLAM mapping device of claim 2, characterized in that: the satellite signal receiving unit, the high-precision camera, the lidar scanner, and the IMU inertial measurement unit are arranged in the device from top to bottom in that order; the mapping acquisition and computing unit is arranged inside the carrying system of the portable backpack;
the data acquisition sensing unit further comprises a tilted laser scanner, arranged outside the backpack and connected to the mapping acquisition and computing unit and the satellite signal receiving unit; it provides auxiliary positioning and ensures point cloud completeness in scenes where the lidar scanner cannot scan side or ground point clouds.
4. The mobile lidar SLAM mapping device of claim 1, characterized in that: a mobile smart device communicates with the mapping acquisition and computing unit over a WIFI or wired network, accessing and controlling it and controlling data acquisition, data copying, and real-time point cloud display.
5. A data processing method using the mobile lidar SLAM mapping device of any of claims 1-4, characterized in that the data processing method comprises:
data compression and conversion: the point cloud, trajectory point, and image data generated each time by the mapping acquisition and computing unit are compressed in a preset compression mode and placed into a transmission queue;
data restoration and display: the mobile smart client establishes a communication connection with the mapping acquisition and computing unit, restores the received compressed two-dimensional data into three-dimensional point clouds and images, and displays them in real time via WebGL.
6. The data processing method of claim 5, characterized in that the preset compression mode comprises: after downsampling, the three-dimensional coordinate points of each point cloud frame are rotation-translated and converted into polar coordinate form for storage.
7. The data processing method of claim 5, characterized in that: the mapping acquisition and computing unit monitors each mobile smart client in real time; the mobile smart clients send instruction data to make each sensing device in the data acquisition sensing unit start data acquisition, processing, and copying, and the feedback results of the instruction processing are sent back to the mobile smart clients.
CN202010422532.XA 2020-05-19 2020-05-19 Mobile laser radar slam drawing device and data processing method Active CN111337037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010422532.XA CN111337037B (en) 2020-05-19 2020-05-19 Mobile laser radar slam drawing device and data processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010422532.XA CN111337037B (en) 2020-05-19 2020-05-19 Mobile laser radar slam drawing device and data processing method

Publications (2)

Publication Number Publication Date
CN111337037A CN111337037A (en) 2020-06-26
CN111337037B (en) 2020-09-29

Family

ID=71181155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010422532.XA Active CN111337037B (en) 2020-05-19 2020-05-19 Mobile laser radar slam drawing device and data processing method

Country Status (1)

Country Link
CN (1) CN111337037B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866305A (en) * 2020-08-11 2020-10-30 普达迪泰(天津)智能装备科技有限公司 Image enhancement and environment adaptability method under indoor and outdoor specific conditions
CN114088131A (en) * 2020-08-24 2022-02-25 北京市安全生产科学技术研究院 Monitoring modeling device and forest emergency fire-fighting monitoring system
CN112735135A (en) * 2020-12-31 2021-04-30 东来智慧交通科技(深圳)有限公司 High-speed moving vehicle overrun detection method
CN113140039A (en) * 2021-04-26 2021-07-20 北京天地玛珂电液控制***有限公司 Multi-sensor fusion underground coal mine digital positioning and map construction system
CN117351140B (en) * 2023-09-15 2024-04-05 中国科学院自动化研究所 Three-dimensional reconstruction method, device and equipment integrating panoramic camera and laser radar

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106052691A (en) * 2016-05-17 2016-10-26 武汉大学 Closed ring error correction method in laser ranging mobile drawing
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN106780699A (en) * 2017-01-09 2017-05-31 东南大学 A kind of vision SLAM methods aided in based on SINS/GPS and odometer
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN111025366A (en) * 2019-12-31 2020-04-17 芜湖哈特机器人产业技术研究院有限公司 Grid SLAM navigation system and method based on INS and GNSS

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106030430A (en) * 2013-11-27 2016-10-12 The Trustees of the University of Pennsylvania Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (MAV)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106052691A (en) * 2016-05-17 2016-10-26 武汉大学 Closed ring error correction method in laser ranging mobile drawing
CN106525057A (en) * 2016-10-26 2017-03-22 陈曦 Generation system for high-precision road map
CN106780699A (en) * 2017-01-09 2017-05-31 东南大学 A kind of vision SLAM methods aided in based on SINS/GPS and odometer
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN111025366A (en) * 2019-12-31 2020-04-17 芜湖哈特机器人产业技术研究院有限公司 Grid SLAM navigation system and method based on INS and GNSS

Also Published As

Publication number Publication date
CN111337037A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111337037B (en) Mobile laser radar slam drawing device and data processing method
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
US11037023B2 (en) Terminal device, information processing device, object identifying method, program, and object identifying system
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
Yang et al. A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras
CN109767470B (en) Tracking system initialization method and terminal equipment
CN112184786B (en) Target positioning method based on synthetic vision
CN103499341B (en) A kind of using method of electric pole inclination measurement instrument
WO2021093679A1 (en) Visual positioning method and device
CN112348886A (en) Visual positioning method, terminal and server
CN111192321A (en) Three-dimensional positioning method and device for target object
CN112270702A (en) Volume measurement method and device, computer readable medium and electronic equipment
CN114743021A (en) Fusion method and system of power transmission line image and point cloud data
CN113361365A (en) Positioning method and device, equipment and storage medium
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN113483771A (en) Method, device and system for generating live-action map
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN111652276B (en) All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method
CN115773759A (en) Indoor positioning method, device and equipment of autonomous mobile robot and storage medium
CN115307646B (en) Multi-sensor fusion robot positioning method, system and device
CN114429515A (en) Point cloud map construction method, device and equipment
CA3142001A1 (en) Spherical image based registration and self-localization for onsite and offsite viewing
WO2024084925A1 (en) Information processing apparatus, program, and information processing method
WO2024055846A1 (en) Base station antenna pose information exploration method, device and system, and storage medium
EP3555847A1 (en) Method and apparatus for constructing lighting environment representations of 3d scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 2301-2308, third floor, building 2, incubator, Zhongguancun Software Park, Dongbeiwang, Haidian District, Beijing 100094

Patentee after: Beijing Digital Green Earth Technology Co.,Ltd.

Address before: Room 2301-2308, floor 3, building 2, incubator, Dongbeiwang Software Park, Haidian District, Beijing 100094

Patentee before: BEIJING GREENVALLEY TECHNOLOGY Co.,Ltd.