CN116091533B - Laser radar target demonstration and extraction method in Qt development environment - Google Patents

Laser radar target demonstration and extraction method in Qt development environment

Info

Publication number
CN116091533B
CN116091533B
Authority
CN
China
Prior art keywords
target
point cloud
data
blist
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310002862.7A
Other languages
Chinese (zh)
Other versions
CN116091533A (en)
Inventor
郭凯
李文海
孙伟超
吴忠德
张家运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Naval Aeronautical University
Original Assignee
Naval Aeronautical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naval Aeronautical University filed Critical Naval Aeronautical University
Priority to CN202310002862.7A priority Critical patent/CN116091533B/en
Publication of CN116091533A publication Critical patent/CN116091533A/en
Application granted granted Critical
Publication of CN116091533B publication Critical patent/CN116091533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a laser radar target demonstration and extraction method in a Qt development environment, characterized by comprising the following steps: S1, subscribing to laser radar point cloud data in ROS using Qt; S2, dynamically demonstrating color three-dimensional point cloud data using the OPENGL module in Qt; S3, completing multi-target extraction from single-frame data through a voxel connection method; S4, completing multi-target tracking through inter-frame correlation analysis. With this method, three-dimensional point cloud data can be acquired by subscribing to messages published by a laser radar sensor in ROS, a three-dimensional color point cloud model is drawn and rendered using OPENGL, single-frame multi-target segmentation and extraction is then completed with the voxel connection method, and target tracking and real-time speed measurement are realized by comparing the correlation of target voxels between frames. The method has relatively simple steps, avoids using Qt's built-in data visualization module, streamlines the overall processing flow, and facilitates frame-level target extraction and tracking.

Description

Laser radar target demonstration and extraction method in Qt development environment
Technical Field
The invention relates to the field of computer vision, in particular to a laser radar target demonstration and extraction method in a Qt development environment.
Background
Qt is a complete cross-platform C++ graphical user interface application development framework. It has a broad development base and a good encapsulation mechanism, a highly modular design, a streamlined memory recovery mechanism and a rich API, and it provides users with a development environment that is highly portable, easy to use and fast to run.
Laser radar (lidar) technology has good directivity and high measurement accuracy; using active detection, it can generate a real-time, high-resolution 3D point cloud of the surrounding environment and is not affected by external natural light.
Therefore, how to combine the advantages of the two so that point cloud demonstration and target identification can be completed more intuitively and smoothly has become a new subject, and the current combination of Qt and lidar has the following problems:
First, the lidar can publish point cloud data through ROS nodes. Conventionally, acquiring ROS node data with Qt requires installing the ROS Qt Creator plug-in, configuring environment variables, creating a workspace (WorkSpace), modifying CMakeLists.txt files, and so on, which makes the process cumbersome.
Secondly, regarding the drawing of three-dimensional point cloud images in Qt, the most direct method is to use the built-in Qt Data Visualization module; however, this module suffers from stuttering point cloud demonstration caused by high CPU occupancy, and it cannot represent reflectivity intensity information in pseudo color.
Third, current lidar target extraction methods are mainly voxel-based or raw-point-cloud-based. Voxel-based target extraction mostly relies on feature abstraction through a 3D convolutional neural network; the computation process is complex and is not conducive to inter-frame target extraction and tracking.
Disclosure of Invention
In order to solve the defects of the technology, the invention provides a laser radar target demonstration and extraction method in a Qt development environment.
In order to solve the technical problems, the technical scheme adopted by the invention is that the laser radar target demonstration and extraction method in the Qt development environment comprises the following steps:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
s3, completing multi-target extraction of single frame data through a voxel connection method;
s4, completing multi-target tracking through inter-frame correlation analysis.
Further, the step S1 specifically includes:
s11, installing Qt and ROS melodic in a Ubuntu desktop operating system;
s12, adding a ROS-dependent dynamic link library and a ROS-dependent dynamic link path in the Qt engineering file;
s13, creating a subscription node in Qt, wherein the subscription node is used for subscribing laser radar point cloud data in the ROS;
s14, after the subscription node is created, starting the laser radar publishing node, and obtaining format data of laser radar publishing by rewriting a static callback function of the subscription node.
Further, step S2 specifically includes:
s21, converting a point cloud data format;
s22, data are transferred out;
S23, mapping single-frame point cloud reflectivity gray scale data into color data by utilizing OPENCV;
S24, rendering the point cloud data by using OPENGL;
S25, dynamically updating;
s26, graphic transformation.
Further, the single frame data in step S3 refers to data obtained by single period scanning of the laser radar, and step S3 specifically includes:
s31, establishing voxels;
s32, obtaining background data;
s33, identifying a target;
S34, confirming the target.
Further, the step S4 specifically includes:
S41, recording the position of the center point of each target according to the brightness lattice array of each target of the current frame;
S42, acquiring a brightness lattice array of each target of the next frame, and recording the position of the center point of each target; performing correlation analysis on the brightness lattice arrays of each target of the front frame and the rear frame, and obtaining a later frame array with the maximum correlation of a certain target in the previous frame through a traversal method;
s43, calculating the space distance between two frames of the same target to obtain the target speed;
And S44, setting the next frame as the current frame, and finishing iteration according to the methods of the steps S41, S42 and S43 when the next frame arrives, wherein each target speed is updated in a laser radar scanning period.
Further, the format conversion in step S21 refers to converting the point cloud data type by using the ROS library self function;
the data in step S22 refers to point cloud data in the static callback function in step S1;
The single-frame point cloud reflectivity gray data in the step S23 refers to data obtained by single-period scanning of the laser radar;
Step S24 should include, for any point p in the point cloud data, position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B); all the information of the single-frame point cloud is written into a vertex buffer object QOpenGLBuffer *VBO, and a vertex shader and a fragment shader are written in the GLSL language to realize the calculation and display of the position and color of each point;
S25, setting the display duration t_P of a single-frame point cloud in the picture; if the interface receives the point cloud at time t_1, the frame is displayed within [t_1, t_1 + t_P], and after t_1 + t_P the frame data are replaced and updated, so that dynamic display is realized and the memory is released in time;
S26, rewriting a mouse event in Qt by combining a camera, a visual angle and a rotation function in OPENGL, realizing the rotation of a mouse dragging image and the image scaling function of a mouse wheel, and smoothly displaying millions of point cloud data.
Further, the specific process of the data transfer in step S22 is as follows: and a signal slot is built in the static callback function, data is transmitted to the common slot function of the class, and in the common slot function, signals built with the external designer interface class object are transmitted, so that the transmission process of the data of the static function to the external class object through the signal slot can be completed.
Further, the mapping of the single-frame point cloud reflectivity gray scale data to color data using OPENCV in step S23 includes the steps of:
S231, installing OPENCV in the Ubuntu desktop operating system;
s232, adding OPENCV dependent dynamic link libraries in the Qt engineering file.
Further, step S31 specifically includes: setting the background sampling time t_s = 5 s, wherein only background point clouds exist in [0, t_s]; firstly, obtaining the maximum absolute values of the background point cloud coordinates in the X, Y, Z axis directions, namely x_m, y_m, z_m, in meters, and establishing a cuboid in the spatial rectangular coordinate system that completely covers the current point cloud, with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; establishing cube voxels with 0.1 m as the length unit, and dividing the point cloud space into 20x_m · 20y_m · 20z_m voxels;
The step S32 specifically includes: calculating the number N_s of scanning points falling into a certain voxel within t_s, and selecting the maximum value r_max and the minimum value r_min of the reflectivity among those N_s points, the background reflectivity interval of that voxel being [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes;
the conditions for identifying the target in step S33 are: after the background acquisition is completed, when a moving target appears, laser irradiates the target to generate an echo, and when single-frame echo data meets one of the following conditions, the single-frame echo data can be judged as the target;
(1) The position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the enclosing cuboid range should be expanded according to the target position coordinates to completely contain the target point cloud;
(2) The position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but the reflectivity information r_i of the target point is not within the background reflectivity interval corresponding to that voxel;
The step S34 specifically includes: the point cloud information identified from the background may represent multiple targets, and it is therefore necessary to segment it effectively; the segmentation is based on whether the voxels containing the targets are interconnected, and the multiple targets are extracted by the "voxel connection method".
Further, the voxel connecting method extracts multiple targets, and the specific steps are as follows:
S341, for the enclosing cuboid, marking all voxels containing the target point cloud as "bright cells", storing the center point coordinates of each bright cell in QVector3D type variables, and collecting them into a QList<QVector3D> type object blist; as the candidate pool, blist is the sequence of bright cells in which all point clouds of the targets are located;
S342, selecting any point m_0(x_0, y_0, z_0) in blist, this point being the center of a voxel M_0; since the number of voxels sharing a face with voxel M_0 is 6 and the number of voxels sharing each of its 12 edges is 1, the number of other voxels adjacent to M_0 is 18, each adjacent voxel being denoted M_0i (i = 0, 1, 2, ..., 17);
S343, calculating the center coordinates m_0i(x_0+u_i, y_0+v_i, z_0+w_i) of each adjacent voxel according to the relative position relation (u_i, v_i, w_i) between M_0i and M_0;
S344, searching for m_0i in blist; if it exists, storing m_0i in the center point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent duplicate searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, searching its 18 adjacent voxels and obtaining their center coordinates, denoted m_01i(x_01+u_i, y_01+v_i, z_01+w_i) (i = 0, 1, 2, ..., 17); any that exist in blist are stored in blist_0 and deleted from blist; the remaining elements of blist_0 are traversed in the same way; blist_0 thus keeps expanding while it is traversed, ensuring that bright cells belonging to the current target are added continuously;
S346, when the traversal is finished, namely blist_0 no longer grows, the layer-by-layer bright-cell selection process centered on voxel M_0 ends; blist_0 then contains all the bright cells of target 0;
S347, judging the number of elements remaining in blist; if it is 0, only one target exists and its bright cells are blist_0; if it is greater than 0, further targets exist; in that case, following the idea of steps S342 to S347, the layer-by-layer extraction of further targets blist_1, blist_2, …, blist_n is performed on blist until the number of elements in the candidate pool blist is 0, indicating that all targets have been extracted.
The invention discloses a laser radar target demonstration and extraction method in a Qt development environment, which can subscribe to messages published by a laser radar sensor in ROS to obtain three-dimensional point cloud data, draw and render a three-dimensional color point cloud model using OPENGL, then complete single-frame multi-target segmentation and extraction using the voxel connection method, and realize target tracking and real-time speed measurement by comparing the correlation of target voxels between frames. The method has relatively simple steps, avoids using Qt's built-in data visualization module, streamlines the overall processing flow, and facilitates frame-level target extraction and tracking.
Drawings
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a flowchart of single frame object extraction in the present invention.
FIG. 3 is a flow chart of the "voxel connection" target segmentation method in the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and the detailed description.
As shown in FIG. 1, the method for demonstrating and extracting the laser radar target in the Qt development environment comprises the following implementation processes:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
s3, completing multi-target extraction of single frame data through a voxel connection method;
s4, completing multi-target tracking through inter-frame correlation analysis.
The step S1 specifically comprises the following steps:
S11, in an Ubuntu 18.04 system, Qt 5.9.9 and ROS Melodic are installed;
S12, adding the following ROS-dependent dynamic link library and paths thereof into the Qt engineering file:
INCLUDEPATH+=/opt/ros/melodic/include
DEPENDPATH+=/opt/ros/melodic/lib
LIBS += -L$$DEPENDPATH -lrosbag \
-lroscpp\
-lroslib\
-lroslz4\
-lrostime\
-lroscpp_serialization\
-lrospack\
-lcpp_common\
-lrosbag_storage\
-lrosconsole\
-lxmlrpcpp\
-lrosconsole_backend_interface\
-lrosconsole_log4cxx
S13, creating a subscriber node class QNodeSub in Qt for subscribing to the laser radar data in ROS, wherein the class inherits from the Qt thread class QThread; the main program of the class includes the header file #include <ros/ros.h>, creates a node handle ros::NodeHandle node, and defines a subscriber variable ros::Subscriber chatter_subscriber obtained from node.subscribe(), which binds the point cloud topic to the callback function.
S14, after the subscription node is established, starting the laser radar publisher node, and rewriting the static callback function void QNodeSub::chatterCallback(const sensor_msgs::PointCloud2 &msg) of the subscription node to obtain the sensor_msgs::PointCloud2 format data published by the laser radar sensor.
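By way of illustration, the subscriber node of steps S13 and S14 could be organized as the following minimal sketch. Only the class name QNodeSub, the base class QThread and the callback signature come from the description; the topic name "/points_raw", the queue size, the signal and slot names and the static instance pointer are assumptions added for illustration, and ros::init() is assumed to have been called before the thread is started.

// Hedged header-style sketch of the QNodeSub class of steps S13-S14 (names marked as assumed are not from the patent).
#include <QThread>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>

class QNodeSub : public QThread
{
    Q_OBJECT
public:
    explicit QNodeSub(QObject *parent = nullptr);

    // Static callback rewritten as in step S14 to receive the published lidar frames.
    static void chatterCallback(const sensor_msgs::PointCloud2 &msg);

signals:
    void cloudFromCallback(sensor_msgs::PointCloud h);   // assumed name: first hop out of the static callback
    void cloudReceived(sensor_msgs::PointCloud h);       // assumed name: consumed by the designer interface class

public slots:
    void onCloud(const sensor_msgs::PointCloud &h);      // assumed name: ordinary slot of step S22

protected:
    void run() override
    {
        ros::NodeHandle node;                                                  // node handle of step S13
        ros::Subscriber chatter_subscriber =
            node.subscribe("/points_raw", 10, &QNodeSub::chatterCallback);     // topic name and queue size assumed
        ros::spin();                                                           // dispatch callbacks until shutdown
    }

private:
    static QNodeSub *s_instance;   // assumed helper so the static callback can reach the live object
};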
The step S2 specifically comprises the following steps:
s21, converting a point cloud data format;
s22, data are transferred out;
S23, mapping single-frame point cloud reflectivity gray scale data into color data by utilizing OPENCV;
S24, rendering the point cloud data by using OPENGL;
S25, dynamically updating;
s26, graphic transformation.
The format conversion in step S21 refers to converting sensor_msgs::PointCloud2 point cloud data into sensor_msgs::PointCloud data by using the ROS library function sensor_msgs::convertPointCloud2ToPointCloud.
The data in the step S22 refers to the point cloud data PointCloud class variable h in the static callback function in the step S14;
The specific process of step S22 is as follows: a signal-slot connection is established in the callback function, and h is transmitted to an ordinary slot function of the class; in that slot function, a signal connected to the external designer interface class object is emitted with h as its parameter, which completes the transfer of the data from the static function to the external class object through the signal-slot mechanism.
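Under the assumptions of the class sketch given after step S14 (the static pointer s_instance, the signals cloudFromCallback and cloudReceived, and the slot onCloud are illustrative names, not taken from the patent), the two-hop transfer of steps S21 and S22 could be implemented roughly as follows; for queued cross-thread delivery, sensor_msgs::PointCloud would additionally need to be registered with qRegisterMetaType().

// Possible implementation-side sketch of steps S21-S22, extending the header sketch above.
#include <sensor_msgs/point_cloud_conversion.h>

QNodeSub *QNodeSub::s_instance = nullptr;

QNodeSub::QNodeSub(QObject *parent) : QThread(parent)
{
    s_instance = this;
    // First hop: the internal signal emitted from the static callback reaches the ordinary slot.
    connect(this, &QNodeSub::cloudFromCallback, this, &QNodeSub::onCloud);
}

void QNodeSub::chatterCallback(const sensor_msgs::PointCloud2 &msg)
{
    sensor_msgs::PointCloud h;
    sensor_msgs::convertPointCloud2ToPointCloud(msg, h);   // format conversion of step S21
    if (s_instance)
        emit s_instance->cloudFromCallback(h);              // hand h out of the static function
}

void QNodeSub::onCloud(const sensor_msgs::PointCloud &h)
{
    emit cloudReceived(h);   // second hop: signal connected to the external designer interface class (step S22)
}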
The use of OPENCV in step S23 is prepared as follows:
S231, installing OPENCV 4.5.4 in Ubuntu 18.04;
S232, adding OPENCV dependent dynamic link libraries in the Qt engineering file:
INCLUDEPATH+=/usr/local/include\
/usr/local/include/opencv4\
/usr/local/include/opencv4/opencv2\
LIBS += /usr/local/lib/libopencv_calib3d.so.4.5.4 \
/usr/local/lib/libopencv_core.so.4.5.4\
/usr/local/lib/libopencv_highgui.so.4.5.4\
/usr/local/lib/libopencv_imgcodecs.so.4.5.4\
/usr/local/lib/libopencv_imgproc.so.4.5.4\
/usr/local/lib/libopencv_dnn.so.4.5.4
In step S23, mapping the single-frame point cloud reflectivity gray data to color data specifically includes: an image container class (cv::Mat) object mapt is created in CV_8UC1 format, with an image matrix size of 1 × N, where N is the single-frame point cloud length, namely cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1); the reflectivity gray data in the single-frame PointCloud format point cloud array h are then written into mapt.
A cv::Mat class object mapc is defined, and the gray image mapt is mapped to a JET pseudo-color image mapc by cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET); for the i-th pixel in mapc, its R, G and B values correspond to mapc.at<cv::Vec3b>(0, i)[2], mapc.at<cv::Vec3b>(0, i)[1] and mapc.at<cv::Vec3b>(0, i)[0], respectively.
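Collected into one routine, the gray-to-JET mapping of step S23 might look like the sketch below. The function name intensityToJet, the use of channel 0 as the reflectivity channel and the assumption that its values already lie in 0-255 are illustrative; only cv::Mat, cv::applyColorMap and the BGR ordering of cv::Vec3b follow the description.

// Hedged sketch of step S23: reflectivity grayscale -> JET pseudo-color, returned as per-point RGB in [0, 1].
#include <array>
#include <vector>
#include <opencv2/opencv.hpp>
#include <sensor_msgs/PointCloud.h>

std::vector<std::array<float, 3>> intensityToJet(const sensor_msgs::PointCloud &h)
{
    const int N = static_cast<int>(h.points.size());
    cv::Mat mapt = cv::Mat::zeros(1, N, CV_8UC1);                 // gray image of size 1 x N
    for (int i = 0; i < N; ++i)
        mapt.at<uchar>(0, i) = static_cast<uchar>(h.channels[0].values[i]);   // assumed 0-255 intensity channel

    cv::Mat mapc;
    cv::applyColorMap(mapt, mapc, cv::COLORMAP_JET);              // gray -> JET pseudo color

    std::vector<std::array<float, 3>> rgb(N);
    for (int i = 0; i < N; ++i) {
        const cv::Vec3b &px = mapc.at<cv::Vec3b>(0, i);           // stored as B, G, R
        rgb[i] = { px[2] / 255.0f, px[1] / 255.0f, px[0] / 255.0f };
    }
    return rgb;                                                   // p_R, p_G, p_B for each point of step S24
}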
In step S24, rendering the point cloud data specifically includes: for any point p in the point cloud, position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B) should be included; if the length of the single-frame point cloud is N, the array representing the single-frame point cloud has dimension N × 6; this array is written into a vertex buffer object QOpenGLBuffer *VBO, and a vertex shader and a fragment shader are written in the GLSL language to realize the calculation and display of the position and color of each point.
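A minimal sketch of the N × 6 vertex layout and the GLSL shaders of step S24 is given below, written against QOpenGLBuffer and QOpenGLShaderProgram. The shader sources, the attribute locations and the uniform name "mvp" are illustrative assumptions; the function is assumed to run with a current OpenGL context (for example from initializeGL()), and the draw call itself would be issued later inside paintGL().

// Hedged sketch of step S24: upload interleaved position/color data and GLSL shaders.
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>
#include <vector>

static const char *vertexSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 pos;\n"
    "layout(location = 1) in vec3 col;\n"
    "uniform mat4 mvp;\n"
    "out vec3 vColor;\n"
    "void main() { gl_Position = mvp * vec4(pos, 1.0); vColor = col; }\n";

static const char *fragmentSrc =
    "#version 330 core\n"
    "in vec3 vColor;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(vColor, 1.0); }\n";

void uploadCloud(QOpenGLBuffer *VBO, QOpenGLShaderProgram *prog,
                 const std::vector<float> &interleaved)   // p_x, p_y, p_z, p_R, p_G, p_B per point
{
    prog->addShaderFromSourceCode(QOpenGLShader::Vertex, vertexSrc);
    prog->addShaderFromSourceCode(QOpenGLShader::Fragment, fragmentSrc);
    prog->link();

    VBO->create();
    VBO->bind();
    VBO->allocate(interleaved.data(), int(interleaved.size() * sizeof(float)));   // write the N x 6 array

    prog->enableAttributeArray(0);
    prog->setAttributeBuffer(0, GL_FLOAT, 0,                 3, 6 * sizeof(float));   // position attribute
    prog->enableAttributeArray(1);
    prog->setAttributeBuffer(1, GL_FLOAT, 3 * sizeof(float), 3, 6 * sizeof(float));   // color attribute
    // glDrawArrays(GL_POINTS, 0, N) is then issued inside paintGL().
}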
The step S25 specifically includes: setting the display duration t_P of the single-frame point cloud in the picture; if the interface receives the point cloud at time t_1, the frame is displayed within [t_1, t_1 + t_P], and after t_1 + t_P the frame data are replaced and updated, thereby realizing dynamic display and releasing the memory in time.
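Step S26 (graphic transformation) is only listed above; a minimal sketch of how the Qt mouse events could be rewritten for it is shown below. The widget class CloudWidget and the members m_yaw, m_pitch, m_distance and m_lastPos are assumptions; in the actual method these values would feed the OPENGL camera and view matrix used when rendering the cloud.

// Hedged sketch of step S26: mouse dragging rotates the point cloud, the wheel zooms it.
#include <QOpenGLWidget>
#include <QMouseEvent>
#include <QWheelEvent>

class CloudWidget : public QOpenGLWidget
{
    Q_OBJECT
protected:
    void mousePressEvent(QMouseEvent *e) override { m_lastPos = e->pos(); }

    void mouseMoveEvent(QMouseEvent *e) override
    {
        const QPoint d = e->pos() - m_lastPos;          // drag delta in pixels
        m_yaw   += 0.5f * d.x();                        // rotate around the vertical axis
        m_pitch += 0.5f * d.y();                        // rotate around the horizontal axis
        m_lastPos = e->pos();
        update();                                       // schedule paintGL()
    }

    void wheelEvent(QWheelEvent *e) override
    {
        m_distance *= (e->angleDelta().y() > 0) ? 0.9f : 1.1f;   // zoom in / out
        update();
    }

private:
    QPoint m_lastPos;
    float  m_yaw = 0.0f, m_pitch = 0.0f, m_distance = 10.0f;     // assumed inputs to the view matrix
};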
The single frame data in step S3 refers to data obtained by single period scanning of the laser radar.
With reference to the single-frame target extraction flow chart of Fig. 2, the procedure is to set up a loop and traverse all points in one frame of point cloud data, judging for each point whether it belongs to the background; if it does, the next point is taken; if it does not, the voxel containing the point is placed into the target candidate pool; once all target voxels of the single frame have been acquired, the "voxel connection method" completes the segmentation. Step S3 specifically includes:
s31, establishing voxels;
s32, obtaining background data;
s33, identifying a target;
S34, confirming the target.
The step S31 specifically includes: setting the background sampling time t_s = 5 s, wherein only background point clouds exist in [0, t_s]; firstly, the maximum absolute values of the background point cloud coordinates in the X, Y, Z axis directions (rounded up if fractional) are recorded as x_m, y_m, z_m, in meters, so that the current whole point cloud can be completely wrapped in a cuboid in the spatial rectangular coordinate system with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; cube voxels are then established with 0.1 m (precision adjustable) as the length unit, dividing the point cloud space into 20x_m · 20y_m · 20z_m voxels.
The step S32 specifically includes: calculating the number N_s of scanning points falling into a certain voxel within t_s, and selecting the maximum value r_max and the minimum value r_min of the reflectivity among those N_s points, the background reflectivity interval of that voxel being [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes.
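Steps S31 and S32 can be sketched as follows. The hash-based voxel container, the index packing and the names ReflInterval, voxelKey, keyOf and addBackgroundPoint are assumptions introduced for illustration; the description only requires 0.1 m cube voxels and a stored reflectivity interval [r_min, r_max] per voxel.

// Hedged sketch of steps S31-S32: 0.1 m cube voxels, each keeping its background reflectivity interval.
#include <QHash>
#include <QVector3D>
#include <algorithm>
#include <cmath>
#include <limits>

struct ReflInterval {
    float rMin =  std::numeric_limits<float>::max();
    float rMax = -std::numeric_limits<float>::max();
};

// Pack the integer voxel indices into a single 64-bit key (unique for |index| < 2^20).
static inline qint64 voxelKey(int ix, int iy, int iz)
{
    return (qint64(ix) & 0x1FFFFF) | ((qint64(iy) & 0x1FFFFF) << 21) | ((qint64(iz) & 0x1FFFFF) << 42);
}

static inline qint64 keyOf(const QVector3D &p, float cell = 0.1f)   // 0.1 m voxel edge (step S31)
{
    return voxelKey(int(std::floor(p.x() / cell)),
                    int(std::floor(p.y() / cell)),
                    int(std::floor(p.z() / cell)));
}

// Accumulate the background interval during the first t_s = 5 s of scanning (step S32).
void addBackgroundPoint(QHash<qint64, ReflInterval> &grid, const QVector3D &p, float r)
{
    ReflInterval &iv = grid[keyOf(p)];       // creates the voxel entry on first access
    iv.rMin = std::min(iv.rMin, r);
    iv.rMax = std::max(iv.rMax, r);
}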
The conditions for identifying the target in step S33 are: after the background acquisition is completed, when a moving target appears, the laser irradiates the target and generates an echo; when single-frame echo data meet one of the following conditions, they can be judged as a target (a code sketch of this check is given after condition (2) below).
(1) The position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the enclosing cuboid range should be expanded according to the target position coordinates to completely contain the target point cloud;
(2) The position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but its reflectivity information r_i is not within the background reflectivity interval corresponding to that voxel.
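Using the voxel map from the sketch after step S32, the step S33 test for one scanned point could be written as below. Treating a voxel that never received background points the same as condition (1), and leaving the expansion of the enclosing cuboid to the caller, are simplifying assumptions.

// Hedged sketch of the step S33 decision for one scanned point (reuses ReflInterval and keyOf from the previous sketch).
bool isTargetPoint(const QHash<qint64, ReflInterval> &grid, const QVector3D &p, float r)
{
    const auto it = grid.constFind(keyOf(p));
    if (it == grid.constEnd())
        return true;                              // condition (1): no background voxel contains p
                                                  // (the enclosing cuboid would then be expanded by the caller)
    return r < it->rMin || r > it->rMax;          // condition (2): r outside [r_min, r_max]
}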
The step S34 specifically includes: the point cloud information identified from the background may represent a plurality of targets, so it must be segmented effectively; the segmentation is based on whether the voxels containing the targets are interconnected, and the multiple targets are extracted by the "voxel connection method". With reference to the target segmentation flow chart of the "voxel connection method" in Fig. 3: first, a loop is set up and one point in the single-frame candidate pool blist is selected; it is checked whether a "bright cell" adjoins it; if none does, the next point is taken, otherwise the adjoining bright cell is moved from the candidate pool blist into the target pool blist_0; blist_0 is then traversed and every adjoining bright cell that is found is stored into the sequence, until blist_0 no longer grows, which indicates that extraction of the voxels corresponding to target 0 is complete; finally, it is judged whether blist contains 0 elements; if not, the process returns to the initial step, and if so, the target segmentation ends.
The specific steps of the method are as follows (an illustrative code sketch is given after step S347):
S341, for the enclosing cuboid, marking all voxels containing the target point cloud as "bright cells", storing the center point coordinates of each bright cell in QVector3D type variables, and collecting them into a QList<QVector3D> type object blist; as the candidate pool, blist is the sequence of bright cells in which all point clouds of the targets are located;
S342, selecting any point m_0(x_0, y_0, z_0) in blist, this point being the center of a voxel M_0; since the number of voxels sharing a face with voxel M_0 is 6 and the number of voxels sharing each of its 12 edges is 1, the number of other voxels adjacent to M_0 is 18, each adjacent voxel being denoted M_0i (i = 0, 1, 2, ..., 17);
S343, calculating the center coordinates m_0i(x_0+u_i, y_0+v_i, z_0+w_i) of each adjacent voxel according to the relative position relation (u_i, v_i, w_i) between M_0i and M_0;
S344, searching for m_0i in blist; if it exists, storing m_0i in the center point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent duplicate searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, searching its 18 adjacent voxels and obtaining their center coordinates, denoted m_01i(x_01+u_i, y_01+v_i, z_01+w_i) (i = 0, 1, 2, ..., 17); any that exist in blist are stored in blist_0 and deleted from blist; the remaining elements of blist_0 are traversed in the same way; blist_0 thus keeps expanding while it is traversed, ensuring that bright cells belonging to the current target are added continuously;
S346, when the traversal is finished, namely blist_0 no longer grows, the layer-by-layer bright-cell selection process centered on voxel M_0 ends; blist_0 then contains all the bright cells of target 0;
S347, judging the number of elements remaining in blist; if it is 0, only one target exists and its bright cells are blist_0; if it is greater than 0, further targets exist; in that case, following the idea of steps S342 to S347, the layer-by-layer extraction of further targets blist_1, blist_2, …, blist_n is performed on blist until the number of elements in the candidate pool blist is 0, indicating that all targets have been extracted.
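The whole segmentation of steps S341 to S347 amounts to a region growing over the bright cells through their 18 shared-face and shared-edge neighbours; the sketch announced after step S347 follows. The helper names, the return type and the small tolerance used instead of exact floating-point comparison when matching cell centres are assumptions.

// Hedged sketch of the "voxel connection" segmentation (steps S341-S347).
#include <QList>
#include <QVector3D>
#include <QtGlobal>

// The 18 neighbours sharing a face (6) or an edge (12) with a voxel, as offsets of its centre.
static QList<QVector3D> neighbourOffsets(float cell = 0.1f)
{
    QList<QVector3D> offs;
    for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dz = -1; dz <= 1; ++dz) {
                const int m = qAbs(dx) + qAbs(dy) + qAbs(dz);
                if (m == 1 || m == 2)                       // faces and edges only, no corners
                    offs.append(QVector3D(dx * cell, dy * cell, dz * cell));
            }
    return offs;                                            // 6 + 12 = 18 offsets
}

// Find a centre in the pool within a small tolerance (avoids exact float comparison).
static int findNear(const QList<QVector3D> &pool, const QVector3D &c, float eps = 0.01f)
{
    for (int i = 0; i < pool.size(); ++i)
        if ((pool[i] - c).lengthSquared() < eps * eps)
            return i;
    return -1;
}

// Candidate pool blist in, one bright-cell list per target (blist_0, blist_1, ...) out.
QList<QList<QVector3D>> segmentTargets(QList<QVector3D> blist)
{
    const QList<QVector3D> offs = neighbourOffsets();
    QList<QList<QVector3D>> targets;

    while (!blist.isEmpty()) {                              // step S347: repeat until the pool is empty
        QList<QVector3D> cur;                               // blist_k for the current target
        cur.append(blist.takeFirst());                      // seed voxel m_0 (step S342)

        for (int i = 0; i < cur.size(); ++i)                // cur grows while it is traversed (step S345)
            for (const QVector3D &off : offs) {             // the 18 neighbour centres (step S343)
                const int idx = findNear(blist, cur[i] + off);
                if (idx >= 0)
                    cur.append(blist.takeAt(idx));          // move the bright cell into blist_k (step S344)
            }

        targets.append(cur);                                // step S346: the target is complete
    }
    return targets;
}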
The step S4 specifically comprises the following steps:
S41, recording the center point position Target_i of each target according to the bright-cell array of each target in the current frame;
S42, acquiring the bright-cell array of each target in the next frame and recording the center point position Target_j of each target; performing correlation analysis on the bright-cell arrays of the targets in the two consecutive frames, and obtaining by traversal the next-frame array with the maximum correlation to a given target of the previous frame, the two arrays then being considered to correspond to the same target, thereby realizing target tracking; specifically, taking the sequence blist_i of a target in the previous frame image as a reference and comparing it with every target sequence of the next frame, since the frame interval is extremely short (0.1 s), the next-frame sequence that shares the most repeated elements with blist_i is identified as the same target; similarly, inter-frame correlation analysis can be performed for every target in the previous frame image;
S43, calculating the spatial distance between the center points Target_i and Target_j of the same target in the two frames to obtain the target speed;
S44, setting the next frame as the current frame, and when a new frame arrives, completing the iteration according to the methods of steps S41, S42 and S43, so that each target speed is updated once per laser radar scanning period.
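Steps S41 to S44 reduce to matching each previous-frame target with the next-frame bright-cell sequence that shares the most cells with it, and dividing the centre-point displacement by the scan period. A sketch under these assumptions follows; the TrackedTarget structure, the helper names and the matching tolerance are illustrative, while the 0.1 s period is taken from step S42.

// Hedged sketch of steps S41-S44: inter-frame correlation matching and speed update.
#include <QList>
#include <QVector3D>

struct TrackedTarget {
    QList<QVector3D> cells;    // bright-cell centres of the target
    QVector3D centre;          // mean of the centres (steps S41/S42)
    float speed = 0.0f;        // metres per second, updated every scan period (step S44)
};

// Count how many cells the two sequences share (within a small tolerance).
static int sharedCells(const QList<QVector3D> &a, const QList<QVector3D> &b, float eps = 0.01f)
{
    int n = 0;
    for (const QVector3D &p : a)
        for (const QVector3D &q : b)
            if ((p - q).lengthSquared() < eps * eps) { ++n; break; }
    return n;
}

void trackTargets(QList<TrackedTarget> &prev, const QList<QList<QVector3D>> &nextFrame,
                  float framePeriod = 0.1f)                 // scan period of 0.1 s from step S42
{
    for (TrackedTarget &t : prev) {
        int best = -1, bestShared = 0;
        for (int j = 0; j < nextFrame.size(); ++j) {        // traversal of step S42
            const int s = sharedCells(t.cells, nextFrame[j]);
            if (s > bestShared) { bestShared = s; best = j; }
        }
        if (best < 0)
            continue;                                       // no correlated target found in the next frame

        QVector3D c(0, 0, 0);
        for (const QVector3D &p : nextFrame[best]) c += p;
        c /= float(nextFrame[best].size());                 // new centre point (step S42)

        t.speed  = (c - t.centre).length() / framePeriod;   // step S43: displacement / period
        t.centre = c;                                       // step S44: next frame becomes current
        t.cells  = nextFrame[best];
    }
}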
In summary, the laser radar target demonstration and extraction method in the Qt development environment comprises: establishing a ROS subscription node in Qt to obtain point cloud data; dynamically displaying the color point cloud using the OPENGL module of Qt; establishing a voxel model and acquiring the background reflectivity intervals; confirming the voxels of the single-frame targets; segmenting the targets by the voxel connection method; and realizing target tracking using inter-frame correlation. With this method, three-dimensional point cloud data can be acquired by subscribing to the messages published by the laser radar sensor in ROS, a three-dimensional color point cloud model is drawn and rendered using OPENGL, single-frame multi-target segmentation and extraction is then completed with the voxel connection method, and target tracking and real-time speed measurement are realized by comparing the correlation of target voxels between frames.
The above embodiments are not intended to limit the present invention; the present invention is not limited to the above examples, and its scope of protection is defined by the following claims.

Claims (6)

1. The laser radar target demonstration and extraction method under the Qt development environment is characterized by comprising the following steps:
S1, subscribing laser radar point cloud data in ROS by utilizing Qt;
S2, dynamically demonstrating color three-dimensional point cloud data by utilizing an OPENGL module in the Qt;
S3, completing multi-target extraction of single frame data through a voxel connection method; the single frame data in step S3 refers to data obtained by single period scanning of the laser radar, and step S3 specifically includes:
S31, establishing voxels; setting the background sampling time t_s = 5 s, wherein only background point clouds exist in [0, t_s]; firstly, obtaining the maximum absolute values of the background point cloud coordinates in the X, Y, Z axis directions, namely x_m, y_m, z_m, in meters, and establishing a cuboid in the spatial rectangular coordinate system that completely covers the current point cloud, with range [-x_m, x_m], [-y_m, y_m], [-z_m, z_m]; establishing cube voxels with 0.1 m as the length unit, and dividing the point cloud space into 20x_m · 20y_m · 20z_m voxels;
S32, obtaining background data; calculating the number N_s of scanning points falling into a certain voxel within t_s, and selecting the maximum value r_max and the minimum value r_min of the reflectivity among those N_s points, the background reflectivity interval of that voxel being [r_min, r_max]; similarly, the reflectivity intervals of all voxels in the enclosing cuboid are recorded and can be stored in computer memory as voxel attributes;
s33, identifying a target; after the background acquisition is completed, when a moving target appears, laser irradiates the target to generate an echo, and when single-frame echo data meets one of the following conditions, the single-frame echo data can be judged as the target;
(1) The position p_i(x_i, y_i, z_i) does not belong to any voxel unit; in this case, the enclosing cuboid range should be expanded according to the target position coordinates to completely contain the target point cloud;
(2) The position p_i(x_i, y_i, z_i) of the target point belongs to a certain voxel, but the reflectivity information r_i of the target point is not within the background reflectivity interval corresponding to that voxel;
S34, confirming a target; the point cloud information identified from the background may represent a plurality of targets and therefore needs to be segmented effectively; the segmentation is based on whether the voxels containing the targets are interconnected, and the multiple targets are extracted by the voxel connection method;
The step S34 specifically includes: the voxel connecting method extracts multiple targets, and comprises the following specific steps:
S341, for the enclosing cuboid, marking all voxels containing the target point cloud as "bright cells", storing the center point coordinates of each bright cell in QVector3D type variables, and collecting them into a QList<QVector3D> type object blist; as the candidate pool, blist is the sequence of bright cells in which all point clouds of the targets are located;
S342, selecting any point m_0(x_0, y_0, z_0) in blist, this point being the center of a voxel M_0; since the number of voxels sharing a face with voxel M_0 is 6 and the number of voxels sharing each of its 12 edges is 1, the number of other voxels adjacent to M_0 is 18, each adjacent voxel being denoted M_0i (i = 0, 1, 2, ..., 17);
S343, calculating the center coordinates m_0i(x_0+u_i, y_0+v_i, z_0+w_i) of each adjacent voxel according to the relative position relation (u_i, v_i, w_i) between M_0i and M_0;
S344, searching for m_0i in blist; if it exists, storing m_0i in the center point array blist_0 of target 0, whose data type is QList<QVector3D>; to prevent duplicate searches, m_0i must be deleted from blist; in other words, m_0i is moved from the candidate pool blist into the target pool blist_0;
S345, for the first element m_01 in blist_0, searching its 18 adjacent voxels and obtaining their center coordinates, denoted m_01i(x_01+u_i, y_01+v_i, z_01+w_i) (i = 0, 1, 2, ..., 17); any that exist in blist are stored in blist_0 and deleted from blist; the remaining elements of blist_0 are traversed in the same way; blist_0 thus keeps expanding while it is traversed, ensuring that bright cells belonging to the current target are added continuously;
S346, when the traversal is finished, namely blist_0 no longer grows, the layer-by-layer bright-cell selection process centered on voxel M_0 ends; blist_0 then contains all the bright cells of target 0;
S347, judging the number of elements remaining in blist; if it is 0, only one target exists and its bright cells are blist_0; if it is greater than 0, further targets exist; in that case, following the idea of steps S342 to S347, the layer-by-layer extraction of further targets blist_1, blist_2, …, blist_n is performed on blist until the number of elements in the candidate pool blist is 0, indicating that all targets have been extracted;
s4, completing multi-target tracking through inter-frame correlation analysis;
the step S4 specifically comprises the following steps:
S41, recording the position of the center point of each target according to the brightness lattice array of each target of the current frame;
S42, acquiring a brightness lattice array of each target of the next frame, and recording the position of the center point of each target; performing correlation analysis on the brightness lattice arrays of each target of the front frame and the rear frame, and obtaining a later frame array with the maximum correlation of a certain target in the previous frame through a traversal method;
s43, calculating the space distance between two frames of the same target to obtain the target speed;
And S44, setting the next frame as the current frame, and finishing iteration according to the methods of the steps S41, S42 and S43 when the next frame arrives, wherein each target speed is updated in a laser radar scanning period.
2. The method for demonstrating and extracting the lidar target in the Qt development environment of claim 1, wherein the step S1 specifically includes:
s11, installing Qt and ROS melodic in a Ubuntu desktop operating system;
s12, adding a ROS-dependent dynamic link library and a ROS-dependent dynamic link path in the Qt engineering file;
s13, creating a subscription node in Qt, wherein the subscription node is used for subscribing laser radar point cloud data in the ROS;
s14, after the subscription node is created, starting the laser radar publishing node, and obtaining format data of laser radar publishing by rewriting a static callback function of the subscription node.
3. The method for demonstrating and extracting the lidar target in the Qt development environment of claim 1, wherein the step S2 specifically includes:
s21, converting a point cloud data format;
s22, data are transferred out;
S23, mapping single-frame point cloud reflectivity gray scale data into color data by utilizing OPENCV;
S24, rendering the point cloud data by using OPENGL;
S25, dynamically updating;
s26, graphic transformation.
4. The method for demonstrating and extracting a laser radar target in a Qt development environment of claim 3, wherein the method comprises the following steps:
The format conversion in the step S21 refers to converting the point cloud data type by using the ROS library self-contained function;
The data in the step S22 refers to point cloud data in the static callback function in the step S1;
The single-frame point cloud reflectivity gray data in the step S23 refers to data obtained by single-period scanning of the laser radar;
The step S24 should include, for any point p in the point cloud data, position information (p_x, p_y, p_z) and color information (p_R, p_G, p_B); all the information of the single-frame point cloud is written into a vertex buffer object QOpenGLBuffer *VBO, and a vertex shader and a fragment shader are written in the GLSL language to realize the calculation and display of the position and color of each point;
S25, setting the display duration t_P of a single-frame point cloud in the picture; if the interface receives the point cloud at time t_1, the frame is displayed within [t_1, t_1 + t_P], and after t_1 + t_P the frame data are replaced and updated, so that dynamic display is realized and the memory is released in time;
S26, rewriting a mouse event in Qt by combining a camera, a visual angle and a rotation function in OPENGL, realizing the rotation of a mouse dragging image and the image scaling function of a mouse wheel, and smoothly displaying millions of point cloud data.
5. The method for demonstrating and extracting a lidar target in a Qt development environment of claim 3, wherein the specific process of transferring the data in step S22 is as follows: and a signal slot is built in the static callback function, data is transmitted to the common slot function of the class, and in the common slot function, signals built with the external designer interface class object are transmitted, so that the transmission process of the data of the static function to the external class object through the signal slot can be completed.
6. The method for demonstrating and extracting lidar targets in the Qt development environment of claim 5, wherein the mapping of single-frame point cloud reflectivity gray-scale data to color data using OPENCV in step S23 includes the steps of:
S231, installing OPENCV in the Ubuntu desktop operating system;
s232, adding OPENCV dependent dynamic link libraries in the Qt engineering file.
CN202310002862.7A 2023-01-03 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment Active CN116091533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310002862.7A CN116091533B (en) 2023-01-03 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310002862.7A CN116091533B (en) 2023-01-03 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment

Publications (2)

Publication Number Publication Date
CN116091533A CN116091533A (en) 2023-05-09
CN116091533B (en) 2024-05-31

Family

ID=86205760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310002862.7A Active CN116091533B (en) 2023-01-03 2023-01-03 Laser radar target demonstration and extraction method in Qt development environment

Country Status (1)

Country Link
CN (1) CN116091533B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102038856B1 (en) * 2012-02-23 2019-10-31 찰스 디. 휴스턴 System and method for creating an environment and for sharing a location based experience in an environment
US11049266B2 (en) * 2018-07-31 2021-06-29 Intel Corporation Point cloud viewpoint and scalable compression/decompression
US20200074233A1 (en) * 2018-09-04 2020-03-05 Luminar Technologies, Inc. Automatically generating training data for a lidar using simulated vehicles in virtual space

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019023892A1 (en) * 2017-07-31 2019-02-07 SZ DJI Technology Co., Ltd. Correction of motion-based inaccuracy in point clouds
CN110210389A (en) * 2019-05-31 2019-09-06 东南大学 A kind of multi-targets recognition tracking towards road traffic scene
CN110264468A (en) * 2019-08-14 2019-09-20 长沙智能驾驶研究院有限公司 Point cloud data mark, parted pattern determination, object detection method and relevant device
CN110853037A (en) * 2019-09-26 2020-02-28 西安交通大学 Lightweight color point cloud segmentation method based on spherical projection
CN111476822A (en) * 2020-04-08 2020-07-31 浙江大学 Laser radar target detection and motion tracking method based on scene flow
CN114746872A (en) * 2020-04-28 2022-07-12 辉达公司 Model predictive control techniques for autonomous systems
CN111781608A (en) * 2020-07-03 2020-10-16 浙江光珀智能科技有限公司 Moving target detection method and system based on FMCW laser radar
CN113075683A (en) * 2021-03-05 2021-07-06 上海交通大学 Environment three-dimensional reconstruction method, device and system
CN114419152A (en) * 2022-01-14 2022-04-29 中国农业大学 Target detection and tracking method and system based on multi-dimensional point cloud characteristics
CN114862901A (en) * 2022-04-26 2022-08-05 青岛慧拓智能机器有限公司 Road-end multi-source sensor fusion target sensing method and system for surface mine
CN115032614A (en) * 2022-05-19 2022-09-09 北京航空航天大学 Bayesian optimization-based solid-state laser radar and camera self-calibration method
CN115330923A (en) * 2022-08-10 2022-11-11 小米汽车科技有限公司 Point cloud data rendering method and device, vehicle, readable storage medium and chip

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Arash Kiani. Point Cloud Registration of Tracked Objects and Real-time Visualization of LiDAR Data on Web and Web VR. Master's Thesis in Informatics. 2020, 1-56. *
Radar signal verification software designed with Qt and MATLAB mixed programming; 吴阳勇 et al.; Electronic Measurement Technology; 2020-11-23; Vol. 43, No. 22; 13-18 *
Three-dimensional scene reconstruction and monitoring based on laser-vision data fusion; 赵次郎; China Master's Theses Full-text Database, Information Science and Technology; 2015-07-15; No. 07 (2015); I138-1060 *
Research on three-dimensional multi-target detection and tracking technology based on lidar sensors; 吴开阳; China Master's Theses Full-text Database, Information Science and Technology; 2022-06-15; No. 06 (2022); I136-366 *
Research on the control method of a visual-servo manipulator arm for mobile robots; 石泽亮; China Master's Theses Full-text Database, Information Science and Technology; 2022-11-15; No. 11 (2022); I140-111 *

Also Published As

Publication number Publication date
CN116091533A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN108986195B (en) Single-lens mixed reality implementation method combining environment mapping and global illumination rendering
CN111932671A (en) Three-dimensional solid model reconstruction method based on dense point cloud data
US11024077B2 (en) Global illumination calculation method and apparatus
CN108520557B (en) Massive building drawing method with graphic and image fusion
CN112927359B (en) Three-dimensional point cloud completion method based on deep learning and voxels
Feng et al. A parallel algorithm for viewshed analysis in three-dimensional Digital Earth
CN107220372B (en) A kind of automatic laying method of three-dimensional map line feature annotation
CN111784840B (en) LOD (line-of-sight) level three-dimensional data singulation method and system based on vector data automatic segmentation
CN110070488B (en) Multi-angle remote sensing image forest height extraction method based on convolutional neural network
CN113593027B (en) Three-dimensional avionics display control interface device
Liang et al. Visualizing 3D atmospheric data with spherical volume texture on virtual globes
CN112528508B (en) Electromagnetic visualization method and device
US11544898B2 (en) Method, computer device and storage medium for real-time urban scene reconstruction
CN102831634B (en) Efficient accurate general soft shadow generation method
CN115861527A (en) Method and device for constructing live-action three-dimensional model, electronic equipment and storage medium
CN114820975A (en) Three-dimensional scene simulation reconstruction system and method based on all-element parameter symbolization
CN116091533B (en) Laser radar target demonstration and extraction method in Qt development environment
CN110634184B (en) Loading method of mass oblique photography data
CN117422841A (en) Three-dimensional reconstruction method and system based on remote detection data
CN116958367A (en) Method for quickly combining and rendering complex nerve scene
Crues et al. Digital Lunar Exploration Sites (DLES)
CN116958457A (en) OSGEarth-based war misting effect drawing method
Liang et al. Solar3D: A 3D Extension of GRASS GIS r. sun for Estimating Solar Radiation in Urban Environments
Masood et al. A novel method for adaptive terrain rendering using memory-efficient tessellation codes for virtual globes
CN116993894B (en) Virtual picture generation method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant