CN117607894A - Cooperative detection and target identification method based on inherent sensor of robot

Cooperative detection and target identification method based on inherent sensor of robot

Info

Publication number
CN117607894A
CN117607894A (application number CN202311644535.8A)
Authority
CN
China
Prior art keywords
point cloud
target
laser radar
robot
information
Prior art date
Legal status
Pending
Application number
CN202311644535.8A
Other languages
Chinese (zh)
Inventor
卢彩霞
赵熙俊
于华超
崔星
刘雪妍
王旭
光星星
刘萌
程文
梁震烁
陈佳琪
李兆冬
杨雨
王一全
Current Assignee
Zhongbing Intelligent Innovation Research Institute Co ltd
Original Assignee
Zhongbing Intelligent Innovation Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongbing Intelligent Innovation Research Institute Co ltd
Priority to CN202311644535.8A
Publication of CN117607894A
Legal status: Pending

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation by using measurements of speed or acceleration
    • G01C 21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1652 - Inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C 21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 - Creation or updating of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a cooperative detection and target identification method based on a robot's inherent sensors. The method comprises the following steps: preprocessing the laser radar point cloud data to obtain preprocessed point cloud information; obtaining real-time positioning information of the robot and constructing a map with laser SLAM based on the preprocessed point cloud information and the IMU measurement data; identifying the point cloud information with the load detection equipment, based on the preprocessed point cloud information and the robot's real-time positioning information, to obtain a target type and a target position; and fusing the map, the target type and the target position to obtain a semantic map. The method exploits the high accuracy and long detection range of the robot's existing perception load, can assist the robot in real-time positioning and in identifying and marking targets during map construction without adding vision equipment, and can be used for mapping by special robots in underground spaces or hazardous operation areas.

Description

Cooperative detection and target identification method based on inherent sensor of robot
Technical Field
The invention relates to the technical field of cooperative detection and identification of targets, in particular to a cooperative detection and target identification method based on a robot's inherent sensors.
Background
In recent years, with the rise of artificial intelligence, autonomous navigation of robots has become a research hotspot. Simultaneous localization and mapping (SLAM) is one of the core technologies for autonomous navigation, and a common implementation uses a laser radar mounted on the robot chassis. The laser radar offers high precision, a large ranging range and immunity to lighting conditions, which has driven rapid development of laser SLAM. However, the map constructed by laser SLAM lacks semantic information; semantic information would allow the robot to better understand the environment and would provide information for robot navigation. Semantic information is usually obtained from images acquired by a camera and cannot be obtained from the laser radar point cloud alone, so combining laser radar and vision camera information to obtain a semantic map has become a key research topic.
The current common way to address this problem is to add a vision sensor to the robot chassis so that visual target-recognition results can be added to the laser-radar-based real-time positioning and semantic map construction: the rich visual images are fused with the laser SLAM result on the laser point cloud map through the transformation between the vision sensor and the laser radar. For special robots, however, this common approach still has the following disadvantages:
1. the sensing range of a general vision sensor is at most 100 meters with a field of view of 110 degrees, so distant targets cannot be identified, which does not meet the operational requirements of special robots;
2. special robots that perform target recognition, tracking and striking are usually already equipped with a high-performance visual perception load, and the common approach to visual target recognition is learning-based. For a special robot that already carries a visual perception load, adding another vision sensor such as a camera to the vehicle chassis for target recognition fails to make full use of the visual information in the existing perception load, and also increases the computing requirements and therefore the cost.
Disclosure of Invention
In view of the above analysis, embodiments of the invention aim to provide a cooperative detection and target identification method based on the robot's inherent sensors, to solve the problems that existing vision sensors have a limited sensing range and that adding an extra vision sensor to a special robot already carrying a visual perception load increases the computing burden.
The aim of the invention is mainly realized by the following technical scheme:
the invention provides a cooperative detection and target identification method based on a robot inherent sensor, which is characterized in that the sensor comprises a laser radar, an IMU measurement module and load detection equipment which are arranged on a robot, and the method comprises the following steps:
performing point cloud preprocessing based on the laser radar point cloud data and the IMU measurement data to obtain preprocessed point cloud information;
acquiring real-time positioning information of the robot and constructing a map by using a laser SLAM based on the preprocessed point cloud information;
identifying the point cloud information by using the load detection equipment based on the preprocessed point cloud information and the real-time positioning information of the robot to obtain a target type and a target position;
and fusing the map, the target type and the target position to obtain a semantic map.
Further, the load detection equipment comprises a laser range finder, a sensing camera, an indexing mechanism and a processor; a target in a designated area is clearly photographed through posture adjustment of the indexing mechanism and focal-length adjustment of the sensing camera, and the type of the target is identified by the processor; the target position in the designated area is precisely measured with the laser range finder.
Further, the identifying, by using the load detection device, the point cloud information based on the preprocessed point cloud information and the positioning information of the robot to obtain a target type and a target position includes:
based on the preprocessed point cloud information, the load detection equipment automatically adjusts the posture of the indexing mechanism and the focal length of the sensing camera to obtain a clear image of the area indicated by the point cloud information, and the laser range finder measures the distance between the robot and the position of the point cloud, so as to obtain the rotation matrix of the target and the initial pose of the sensing load detection equipment;
and carrying out target recognition by using a target recognition model in the processor based on the clear image to obtain the target type of the image.
Further, the rotation matrix of the target and the initial pose of the sensing load detection equipment is determined from the posture rotation matrix of the indexing mechanism and the distance d_0 between the robot and the position of the point cloud.
Further, performing target recognition with a target recognition model based on the clear image to obtain the target type of the image includes:
preprocessing the clear image to adapt to the input format of a target recognition model;
inputting the preprocessed image into a trained target recognition model to obtain a target detection result in the image; wherein the target recognition model is a model using YOLOv5 neural network.
Further, the fusing the map information, the target type and the target position to obtain a semantic map includes:
when the load detection equipment identifies target information, the load detection equipment and the laser radar are calibrated in a combined mode;
projecting a laser radar point cloud into an image of the load detection equipment based on the calibration result, and selecting the laser radar point cloud projected to be positioned at the target position from the laser radar point cloud;
and updating the target type to the map of the laser radar point cloud to obtain a semantic map.
Further, the performing joint calibration on the load detection device and the laser radar includes:
calibrating the internal parameters of the sensing camera of the load detection equipment to obtain the sensing camera intrinsic matrix K = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]], which maps load camera coordinates to image coordinates via z · [u, v, 1]^T = K · [x, y, z]^T;
wherein (u, v) are the image coordinates; (x, y, z) are the coordinates in the load camera coordinate system; f/dx and f/dy are the focal length expressed in pixels along the x-axis and y-axis; f is the focal length of the camera; and (u_0, v_0) is the pixel offset of the principal point;
establishing the external parameters of the sensing camera and the laser radar, namely the rotation R and translation t that transform laser radar coordinates into the load camera coordinate system;
wherein R is the orientation of the laser radar coordinate axes relative to the load camera coordinate axes; and t is the position of the laser radar coordinate origin in the load camera coordinate system.
Further, based on the calibration result, the laser radar point cloud is projected into the image of the load detection equipment: a point (X_L, Y_L, Z_L) in the laser radar coordinate system is transformed with the rotation matrix of the target and the initial pose of the sensing load detection equipment together with the camera-radar extrinsic parameters, and is then projected with the camera intrinsic matrix to pixel coordinates (u, v) on the sensing camera imaging plane.
Further, the selecting the laser radar point cloud with the projection located at the target position from the laser radar point clouds includes:
filtering and downsampling the projected laser radar point cloud;
and carrying out point cloud clustering on the downsampled laser radar point clouds, and selecting the class with the largest number of point clouds as the target point cloud.
Further, the point cloud preprocessing includes:
removing discrete radar points from the laser radar point cloud data, and performing filtering downsampling;
and performing point cloud clustering on the filtered point cloud, and then using IMU pre-integration estimation to complete point cloud de-distortion to obtain processed point cloud information.
Compared with the prior art, the invention has at least one of the following beneficial effects:
1. The invention uses the wide field of view and high-frequency scanning of the laser radar to provide an initial detection bearing for the robot's visual perception load, and uses the high accuracy and long detection range of the carried visual perception load to identify and label distant targets during map construction.
2. The invention uses the high-performance visual perception load that the special robot already carries, so high-precision target recognition can be performed without adding further vision sensors, which lightens the robot's load and saves cost.
In the invention, the technical schemes can be mutually combined to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, like reference numerals being used to refer to like parts throughout the several views.
FIG. 1 is a schematic flow chart of a cooperative detection and target identification method based on a robot intrinsic sensor in an embodiment of the invention;
fig. 2 is a block diagram of a target collaborative reconnaissance and map target identification system combining laser SLAM and perceived load in an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and together with the description serve to explain the principles of the invention, and are not intended to limit the scope of the invention.
The invention discloses a cooperative detection and target identification method based on a robot inherent sensor, wherein the sensor comprises a laser radar, an IMU measurement module and load detection equipment which are arranged on a robot; as shown in fig. 1, the method comprises the following steps:
step S1, performing point cloud preprocessing based on laser radar point cloud data and IMU measurement data to obtain preprocessed point cloud information;
specifically, the laser radar point cloud is a data set formed by a series of three-dimensional coordinate points acquired by a laser radar sensor, each point represents a space position, and the position can be located in a three-dimensional space through coordinate values; the lidar sensor periodically emits a laser beam and measures the time at which the laser beam returns to calculate the distance between the target object and the lidar. By means of the rotation or scanning mechanism, point cloud data of the target object under different angles can be obtained.
Further, the main purpose of the point cloud preprocessing is to facilitate the subsequent point cloud processing and algorithm.
Specifically, the point cloud preprocessing includes the following steps:
s101, removing discrete radar points from the laser radar point cloud data, and performing filtering downsampling;
specifically, the laser radar point cloud is subjected to noise reduction treatment by using a filtering method, so that the influence of noise is reduced; and downsampling the laser radar point cloud, so that the number of data points is reduced, and the subsequent processing speed is increased.
Preferably, the present embodiment uses a radius filtering method to remove discrete radar points and uses voxel filtering to downsample the radar point cloud.
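As an illustration of this preprocessing step, the following Python sketch uses the Open3D library for radius outlier removal and voxel downsampling; the neighbour count, radius and voxel size are assumed values, not parameters specified by the patent.

```python
import numpy as np
import open3d as o3d

def preprocess_scan(points_xyz: np.ndarray) -> o3d.geometry.PointCloud:
    """Remove discrete (isolated) points with a radius filter, then voxel-downsample."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Radius outlier removal: drop points with fewer than 5 neighbours within 0.5 m
    # (thresholds are illustrative, not taken from the patent).
    pcd, _kept_idx = pcd.remove_radius_outlier(nb_points=5, radius=0.5)

    # Voxel-grid downsampling to thin the cloud while keeping its outline.
    pcd = pcd.voxel_down_sample(voxel_size=0.2)
    return pcd
```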
And step S102, performing point cloud clustering on the filtered point cloud data, and then using IMU pre-integral estimation to complete point cloud de-distortion to obtain processed point cloud information.
Specifically, point cloud clustering groups the points belonging to the same object, so that objects in the environment are aggregated. In a multi-sensor fusion system the IMU measures at a much higher rate and its data contain errors; at each IMU sampling instant the data must be integrated to obtain posture and velocity information, which is then jointly optimized with the laser radar data to obtain a more accurate pose estimate. However, integrating the IMU data accumulates integration error, and repeated integration consumes considerable computing resources. To address these issues, IMU pre-integration stores the IMU velocity and displacement increments of each time interval as constants instead of re-integrating every time. In the laser SLAM optimization, the pre-integration result is added to the optimization problem as a constant constraint, so that only the state quantities need to be updated at each optimization iteration while the pre-integration result remains unchanged. This reduces the re-integration workload, speeds up the optimization, and realizes motion compensation of the point cloud.
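The pre-integration idea can be sketched in Python as follows: gyroscope and accelerometer samples between two keyframes are integrated once into relative rotation, velocity and position increments that can later be reused as constant constraints. Gravity compensation, bias estimation and noise propagation are omitted, so this is only a simplified illustration of the concept.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate_imu(gyro, accel, dt):
    """Accumulate rotation / velocity / position deltas between two keyframes.

    gyro, accel: (N, 3) arrays of angular rate [rad/s] and specific force [m/s^2]
    dt:          (N,) array of sample periods [s]
    Returns (delta_R, delta_v, delta_p) expressed in the first keyframe's body frame.
    """
    delta_R = np.eye(3)
    delta_v = np.zeros(3)
    delta_p = np.zeros(3)
    for w, a, h in zip(gyro, accel, dt):
        acc_kf = delta_R @ a                  # rotate specific force into the keyframe frame
        delta_p += delta_v * h + 0.5 * acc_kf * h * h
        delta_v += acc_kf * h
        delta_R = delta_R @ Rotation.from_rotvec(w * h).as_matrix()
    return delta_R, delta_v, delta_p
```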
Preferably, in this embodiment a KD-Tree is used to perform neighbour search on the filtered point cloud to find the nearest neighbouring points, and point cloud clustering is performed by computing the Euclidean distances between points.
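A compact sketch of KD-Tree-based Euclidean clustering is shown below using scipy's cKDTree; the 0.5 m distance tolerance and the minimum cluster size are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points: np.ndarray, tol: float = 0.5, min_size: int = 10):
    """Group points whose Euclidean distance is below `tol` (region growing on a KD-tree)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    cluster.append(nb)
                    frontier.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters
```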
S2, acquiring real-time positioning information of the robot and constructing a map by using laser SLAM based on the preprocessed point cloud information;
specifically, the curvature of the preprocessed point cloud information is utilized to extract feature points, the preprocessed point cloud is divided into edge points and plane points, and the feature points are used for judging whether objects in the front frame point cloud and the rear frame point cloud are the same object.
Further, the curvature is calculated from the current point and the range values (the distances from the points to the laser radar) of the 5 points before and after it, using

c = (1 / (|S| · |X_(k,i)|)) · | Σ_{j∈S, j≠i} (X_(k,i) - X_(k,j)) |

wherein c is the curvature of the current point; S is the point set consisting of the 5 points before and after the k-th frame laser point i; {L} is the laser radar coordinate frame; X_(k,i) are the coordinates of the k-th frame laser point i in {L}; and X_(k,j) are the coordinates of a neighbouring laser point j of the k-th frame laser point i in {L}.
Further, after the curvatures of all points in a frame of point cloud are calculated, the points are sorted in ascending order of curvature and each curvature is compared with a curvature threshold: when the curvature c is smaller than the threshold, the laser radar point is taken as a plane feature point; when the curvature c is larger than the threshold, it is taken as an edge feature point. Preferably, the curvature threshold is set to 0.1.
When a plane feature point already exists among the 5 points before and after a candidate laser radar point, that point is skipped and the plane feature point is selected from points with smaller curvature; when an edge feature point already exists among the 5 points before and after a candidate point, that point is skipped and the edge feature point is selected from points with larger curvature.
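The curvature computation and edge/plane classification described above can be sketched as follows for a single scan line; the neighbour-suppression rule and per-sector feature quotas used in practical LOAM-style implementations are simplified away, and only the 0.1 threshold from the text is kept.

```python
import numpy as np

def classify_features(scan: np.ndarray, curv_thresh: float = 0.1):
    """Split one scan line into edge and plane feature points by local curvature.

    scan: (N, 3) points of a single lidar scan line, in acquisition order.
    """
    n = len(scan)
    edges, planes = [], []
    for i in range(5, n - 5):
        neighbours = np.vstack([scan[i - 5:i], scan[i + 1:i + 6]])   # 5 points each side
        diff = np.sum(scan[i] - neighbours, axis=0)
        c = np.linalg.norm(diff) / (len(neighbours) * np.linalg.norm(scan[i]))
        (planes if c < curv_thresh else edges).append(i)
    return np.array(edges), np.array(planes)
```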
Further, based on the feature points extracted by the laser radar point cloud, a scan-to-scan method is used for achieving feature matching between frames.
Specifically, the point cloud from the k-th laser radar scan is denoted p_k, its extracted edge feature point set E_k and its plane feature point set H_k; the point cloud from the (k+1)-th scan is denoted p_(k+1), with edge feature point set E_(k+1) and plane feature point set H_(k+1).
Further, the edge feature point sets E_k and E_(k+1) are associated using the following steps:
a feature point i is selected in E_(k+1); using a kd-tree, the nearest point j to i is found in E_k, and the nearest point l to j is found on an adjacent scan line. The points j and l define a straight line, and the data of the edge feature points are associated by constructing the distance residual from point i of frame k+1 to this line.
Specifically, the residual function of the edge-point line feature is the point-to-line distance

d_E = |(X_(k+1,i) - X_(k,j)) × (X_(k+1,i) - X_(k,l))| / |X_(k,j) - X_(k,l)|

wherein X_(k+1,i) are the coordinates of feature point i; X_(k,j) are the coordinates of feature point j; and X_(k,l) are the coordinates of feature point l.
Further, the plane feature point sets H_k and H_(k+1) are associated using the following steps:
a feature point i is selected in H_(k+1); using a kd-tree, the nearest point j to i is found in H_k, the next-nearest point l is found on the same scan line, and the nearest point m to j is found on an adjacent scan line. The points j, l and m form a plane, and the data of the plane feature points are associated by constructing the distance residual from point i of frame k+1 to this plane.
Specifically, the residual function of the planar point plane feature is the point-to-plane distance

d_H = |(X_(k+1,i) - X_(k,j)) · ((X_(k,j) - X_(k,l)) × (X_(k,j) - X_(k,m)))| / |(X_(k,j) - X_(k,l)) × (X_(k,j) - X_(k,m))|

wherein X_(k+1,i) are the coordinates of feature point i; and X_(k,j), X_(k,l) and X_(k,m) are the coordinates of feature points j, l and m.
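Written out as code, the two residuals above are simply a point-to-line and a point-to-plane distance; the function arguments mirror the points i, j, l and m in the text.

```python
import numpy as np

def edge_residual(p_i, p_j, p_l):
    """Distance from point i (frame k+1) to the line through points j and l (frame k)."""
    num = np.linalg.norm(np.cross(p_i - p_j, p_i - p_l))
    return num / np.linalg.norm(p_j - p_l)

def plane_residual(p_i, p_j, p_l, p_m):
    """Distance from point i (frame k+1) to the plane through points j, l, m (frame k)."""
    normal = np.cross(p_j - p_l, p_j - p_m)
    normal /= np.linalg.norm(normal)
    return abs(np.dot(p_i - p_j, normal))
```

In practice such residuals would be stacked over all matched feature points and minimized with a nonlinear least-squares solver, which is the Jacobian-based optimization described next.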
Further, for the line-feature residuals of the edge feature points and the plane-feature residuals of the plane feature points, the Jacobian matrix with respect to the transformation matrix is used to optimize the transformation, yielding an accurate robot positioning result.
Further, a three-dimensional point cloud map is constructed based on the laser point cloud data by using a mapping algorithm in the laser SLAM.
S3, identifying the point cloud information by using the load detection equipment based on the preprocessed point cloud information and the real-time positioning information of the robot to obtain a target type and a target position;
specifically, as shown in fig. 2, the load detection device includes a laser range finder, a sensing camera, an indexing mechanism and a processor, and the target in the designated area is clearly photographed through posture adjustment of the indexing mechanism and focal length adjustment of the sensing camera, and the type of the target is identified by using the processor; and precisely measuring the target position of the designated area by using a laser range finder.
Further, the preprocessed point cloud information is sent to the load detection equipment as rough target position area information. After receiving the rough target position area information, the load detection equipment automatically adjusts the posture of the indexing mechanism and the focal length of the sensing camera to obtain a clear image of the area indicated by the point cloud information, records the posture rotation matrix of the indexing mechanism and the sensing camera focal length f_0 at that moment, and measures the distance d_0 between the robot and the position of the point cloud with the laser range finder, thereby obtaining the rotation matrix of the target and the initial pose of the sensing load detection equipment.
Further, performing object recognition by using an object recognition model in the processor based on the clear image to obtain an object type of the image, including the following steps:
preprocessing the clear image to adapt to the input format of a target recognition model; wherein the preprocessing includes scaling and normalizing the input picture.
Inputting the preprocessed image into a trained target recognition model to obtain a target detection result in the image; wherein the target recognition model is a model using YOLOv5 neural network.
Further, performing post-processing on the target detection result in the image obtained by the target recognition model to obtain a final target detection result; wherein the post-processing includes: non-maxima suppression and confidence screening.
Non-maximum suppression removes redundant detection boxes from the output of the YOLOv5 neural network model.
Confidence refers to the degree of certainty that a target is present in a detection box. A confidence score is computed for each detection box to estimate the probability that the target exists there; by comparing confidence scores, the accuracy of the detections in different boxes can be judged, so that detection results with high confidence are screened out.
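A minimal inference sketch with the publicly available Ultralytics YOLOv5 hub model is given below; the patent uses its own trained recognition model, so the yolov5s checkpoint and the confidence/IoU thresholds here are placeholders. Input scaling, normalization, non-maximum suppression and confidence screening are handled inside the hub model.

```python
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub
# (placeholder for the patent's own trained weights).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.5   # confidence screening threshold (illustrative)
model.iou = 0.45   # IoU threshold used by non-maximum suppression (illustrative)

def detect_targets(image_rgb):
    """Run detection on an HxWx3 RGB numpy array (convert from OpenCV BGR first)."""
    results = model(image_rgb)       # forward pass + NMS + confidence filtering
    return results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
```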
And S4, fusing the map, the target type and the target position to obtain a semantic map.
Specifically, while constructing the point cloud map the laser SLAM checks whether the load detection equipment has identified target information; when target information is identified, the load detection equipment and the laser radar are jointly calibrated, which comprises the following steps:
calibrating the internal parameters of the sensing camera of the load detection equipment to obtain the sensing camera intrinsic matrix K = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]], which maps load camera coordinates to image coordinates via z · [u, v, 1]^T = K · [x, y, z]^T;
wherein (u, v) are the image coordinates; (x, y, z) are the coordinates in the load camera coordinate system; f/dx and f/dy are the focal length expressed in pixels along the x-axis and y-axis; f is the focal length of the camera; and (u_0, v_0) is the pixel offset of the principal point.
After the load camera internal parameters are determined, the perception load and the laser radar are calibrated with calibration software using a calibration board, i.e. the external parameters between the camera and the radar are established:
wherein R is the orientation of the laser radar coordinate axes relative to the load camera coordinate axes, a 3x3 rotation matrix; and t is the position of the laser radar coordinate origin in the load camera coordinate system, a 3x1 translation vector.
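The intrinsic calibration step can be sketched with OpenCV's standard checkerboard routine, as below; the board geometry, square size and image paths are assumptions, and the joint lidar-camera extrinsic calibration performed with calibration-board software in the text is not reproduced here.

```python
import glob
import cv2
import numpy as np

def calibrate_intrinsics(image_glob: str, board=(9, 6), square=0.025):
    """Estimate the camera matrix K = [[fx,0,u0],[0,fy,v0],[0,0,1]] from checkerboard shots."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    if not obj_pts:
        raise ValueError("no checkerboard detected in any image")
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```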
Further, based on the calibration result, projecting a laser radar point cloud into an image of the load detection device, and selecting the laser radar point cloud projected to be positioned at the target position from the laser radar point clouds;
specifically, the projection process is to convert the laser radar point cloud into a coordinate system of the load detection equipment according to the load detection equipment and a calibration parameter matrix of the laser radar, so as to obtain point cloud coordinates under the coordinate system of the load detection equipment; and projecting the laser radar point cloud onto a perception camera imaging plane based on the internal parameters of the load detection device.
Further, in the projection, a point (X_L, Y_L, Z_L) in the laser radar coordinate system is transformed into the load camera coordinate system using the extrinsic parameters and the rotation matrix of the target and the initial pose of the sensing load detection equipment, and is then projected with the camera intrinsic matrix to pixel coordinates (u, v) on the sensing camera imaging plane.
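A numpy sketch of this projection is given below: lidar points are first moved into the camera frame with the extrinsic rotation R and translation t (into which the indexing-mechanism rotation described above would be folded), and then projected with the intrinsic matrix K.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project (N, 3) lidar points into pixel coordinates (u, v).

    R, t: lidar-to-camera extrinsics; K: 3x3 camera intrinsic matrix.
    Returns pixel coordinates and a mask of points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0.1            # keep points with positive depth
    uvw = pts_cam[in_front] @ K.T             # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, in_front
```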
Further, the selecting the laser radar point cloud with the projection located at the target position from the laser radar point clouds includes:
and carrying out voxel filtering downsampling on the projected laser radar point clouds, and controlling the size of voxels can effectively reduce the number of the point clouds and ensure the outline characteristics of the point clouds.
Euclidean clustering is then performed on the downsampled laser radar point cloud; a KD-Tree-based neighbour query algorithm is used to organize the data and accelerate the Euclidean clustering.
The cluster containing the largest number of points is selected as the target point cloud.
Further, the relative position between the target and the robot is obtained by combining the installation relation between the laser radar and the robot platform with the target point cloud information, and the target type is updated into the map of the laser radar point cloud to obtain a semantic map.
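The final fusion step can be sketched as follows: among the candidate points whose projections fall inside the detection box, the largest cluster is taken as the target and its points are tagged with the recognized class in the map. The dictionary layout of the semantic map is an assumption made for illustration.

```python
def label_target_in_map(clusters, points, target_class, semantic_map):
    """Pick the largest cluster as the target and tag its points with the recognized class.

    clusters:     list of index arrays (e.g. from the Euclidean-clustering sketch above).
    points:       (N, 3) lidar points, expressed in the map frame.
    semantic_map: dict mapping class name -> list of point arrays (assumed layout).
    """
    if not clusters:
        return semantic_map
    largest = max(clusters, key=len)          # the cluster with the most points is the target
    semantic_map.setdefault(target_class, []).append(points[largest])
    return semantic_map
```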
In summary, the cooperative detection and target identification method based on the robot's inherent sensors provided by the embodiment of the invention supports the robot's real-time positioning and mapping process. Because of the discreteness and sparsity of the laser point cloud and the limited sensing distance of a remotely operated camera, targets cannot be well identified on the map with the laser radar alone. The wide field of view and high-frequency scanning of the laser radar provide rough target position information to the robot's visual perception load, and the high accuracy and long detection range of the visual perception load in turn assist the identification and labeling of targets during real-time positioning and mapping.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.

Claims (10)

1. The cooperative detection and target identification method based on the inherent sensor of the robot is characterized in that the sensor comprises a laser radar, an IMU measurement module and load detection equipment which are arranged on the robot, and the method comprises the following steps:
performing point cloud preprocessing based on the laser radar point cloud data and the IMU measurement data to obtain preprocessed point cloud information;
acquiring real-time positioning information of the robot and constructing a map by using a laser SLAM based on the preprocessed point cloud information;
identifying the point cloud information by using the load detection equipment based on the preprocessed point cloud information and the real-time positioning information of the robot to obtain a target type and a target position;
and fusing the map, the target type and the target position to obtain a semantic map.
2. The method according to claim 1, wherein the load detection device comprises a laser range finder, a sensing camera, an indexing mechanism and a processor, wherein the target in a designated area is clearly photographed through posture adjustment of the indexing mechanism and focal length adjustment of the sensing camera, and the type of the target is identified by the processor; and precisely measuring the target position of the designated area by using a laser range finder.
3. The method of claim 2, wherein the identifying the point cloud information using the load detection device based on the preprocessed point cloud information and the positioning information of the robot to obtain the target type and the target position comprises:
based on the preprocessed point cloud information, the load detection equipment automatically adjusts the posture of the indexing mechanism and the focal length of the sensing camera to obtain a clear image at the point cloud information, and the laser range finder is used for measuring the distance between the robot and the position of the point cloud to obtain a rotation matrix of the target and the initial posture of the sensing load detection equipment;
and carrying out target recognition by using a target recognition model in the processor based on the clear image to obtain the target type of the image.
4. The method according to claim 3, wherein the rotation matrix of the target and the initial pose of the sensing load detection equipment is determined from the posture rotation matrix of the indexing mechanism and the distance d_0 between the robot and the position of the point cloud.
5. The method according to claim 3, wherein performing target recognition with a target recognition model based on the clear image to obtain the target type of the image comprises:
preprocessing the clear image to adapt to the input format of a target recognition model;
inputting the preprocessed image into a trained target recognition model to obtain a target detection result in the image; wherein the target recognition model is a model using YOLOv5 neural network.
6. The method of claim 2, wherein fusing the map information, the target type and the target position to obtain a semantic map comprises:
when the load detection equipment identifies target information, the load detection equipment and the laser radar are calibrated in a combined mode;
projecting a laser radar point cloud into an image of the load detection equipment based on the calibration result, and selecting the laser radar point cloud projected to be positioned at the target position from the laser radar point cloud;
and updating the target type to the map of the laser radar point cloud to obtain a semantic map.
7. The method of claim 6, wherein said jointly calibrating said load detection apparatus and said lidar comprises:
calibrating the internal parameters of the sensing camera of the load detection equipment to obtain the sensing camera intrinsic matrix K = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]], which maps load camera coordinates to image coordinates via z · [u, v, 1]^T = K · [x, y, z]^T;
wherein (u, v) are the image coordinates; (x, y, z) are the coordinates in the load camera coordinate system; f/dx and f/dy are the focal length expressed in pixels along the x-axis and y-axis; f is the focal length of the camera; and (u_0, v_0) is the pixel offset of the principal point;
establishing the external parameters of the sensing camera and the laser radar, namely the rotation R and translation t that transform laser radar coordinates into the load camera coordinate system;
wherein R is the orientation of the laser radar coordinate axes relative to the load camera coordinate axes; and t is the position of the laser radar coordinate origin in the load camera coordinate system.
8. The method of claim 7, wherein, based on the calibration result, a point (X_L, Y_L, Z_L) in the laser radar coordinate system is projected into the image of the load detection equipment by transforming it with the rotation matrix of the target and the initial pose of the sensing load detection equipment together with the camera-radar extrinsic parameters, and then projecting it with the camera intrinsic matrix to pixel coordinates (u, v) on the sensing camera imaging plane.
9. The method of claim 8, wherein selecting a lidar point cloud from the lidar point clouds that projects a location at the target location comprises:
filtering and downsampling the projected laser radar point cloud;
and carrying out point cloud clustering on the downsampled laser radar point clouds, and selecting the class with the largest number of point clouds as the target point cloud.
10. The method of claim 1, wherein the point cloud preprocessing comprises:
removing discrete radar points from the laser radar point cloud data, and performing filtering downsampling;
and performing point cloud clustering on the filtered point cloud, and then using IMU pre-integration estimation to complete point cloud de-distortion to obtain processed point cloud information.
CN202311644535.8A 2023-12-04 2023-12-04 Cooperative detection and target identification method based on inherent sensor of robot Pending CN117607894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311644535.8A CN117607894A (en) 2023-12-04 2023-12-04 Cooperative detection and target identification method based on inherent sensor of robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311644535.8A CN117607894A (en) 2023-12-04 2023-12-04 Cooperative detection and target identification method based on inherent sensor of robot

Publications (1)

Publication Number Publication Date
CN117607894A 2024-02-27

Family

ID=89955939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311644535.8A Pending CN117607894A (en) 2023-12-04 2023-12-04 Cooperative detection and target identification method based on inherent sensor of robot

Country Status (1)

Country Link
CN (1) CN117607894A (en)

Similar Documents

Publication Publication Date Title
CN110097553B (en) Semantic mapping system based on instant positioning mapping and three-dimensional semantic segmentation
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN111201451B (en) Method and device for detecting object in scene based on laser data and radar data of scene
CN108932736B (en) Two-dimensional laser radar point cloud data processing method and dynamic robot pose calibration method
CN110032949B (en) Target detection and positioning method based on lightweight convolutional neural network
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN102298070B (en) Method for assessing the horizontal speed of a drone, particularly of a drone capable of hovering on automatic pilot
JP2018124787A (en) Information processing device, data managing device, data managing system, method, and program
CN108332752B (en) Indoor robot positioning method and device
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
US11663808B2 (en) Distance estimating device and storage medium storing computer program for distance estimation
CN114399675A (en) Target detection method and device based on machine vision and laser radar fusion
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
CN117115784A (en) Vehicle detection method and device for target data fusion
CN114905512A (en) Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN115187941A (en) Target detection positioning method, system, equipment and storage medium
CN114370871A (en) Close coupling optimization method for visible light positioning and laser radar inertial odometer
CN115792912A (en) Method and system for sensing environment of unmanned surface vehicle based on fusion of vision and millimeter wave radar under weak observation condition
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
CN116403186A (en) Automatic driving three-dimensional target detection method based on FPN Swin Transformer and Pointernet++
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
CN117607894A (en) Cooperative detection and target identification method based on inherent sensor of robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination