CN114397638A - Method and system for filtering dynamic data in laser radar data - Google Patents

Method and system for filtering dynamic data in laser radar data

Info

Publication number
CN114397638A
Authority
CN
China
Prior art keywords
data
coordinate system
laser radar
semantic
dynamic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210075526.0A
Other languages
Chinese (zh)
Inventor
栾春雨
陈放
丁磊
柏晓乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Dynamics Co ltd
Original Assignee
Smart Dynamics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Dynamics Co ltd filed Critical Smart Dynamics Co ltd
Priority to CN202210075526.0A priority Critical patent/CN114397638A/en
Publication of CN114397638A publication Critical patent/CN114397638A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a method and a system for filtering dynamic data in laser radar data, which are implemented by acquiring a camera data stream and laser radar data; inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data; performing data classification based on the semantic identification data and determining dynamic semantic data; converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data; and inputting the chassis coordinate system data into a laser radar data filtering model for filtering to obtain filtered radar data, wherein the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out. The method solves the technical problem in the prior art that dynamic obstacles appearing in the laser radar detection field of view cause anomalies during mapping, path planning and obstacle avoidance.

Description

Method and system for filtering dynamic data in laser radar data
Technical Field
The application relates to the technical field of laser radar data, in particular to a method and a system for filtering dynamic data in laser radar data.
Background
When mapping, path planning and machine obstacle avoidance rely on laser radar data alone, dynamic obstacles within the detection range cannot be identified. The laser data generated by a dynamic obstacle affects the entire collected data set, and if such data are used for mapping, path planning or obstacle avoidance, the following problems arise: abnormal noise points appear during mapping, which degrades positioning accuracy; residual noise during costmap refreshing causes abnormal path planning; and the machine obstacle avoidance process becomes unsmooth and frequently stalls.
The above-mentioned technique has been found to have at least the following technical problem:
in the prior art, dynamic obstacles appearing in the laser radar detection field of view cause anomalies during mapping, path planning and obstacle avoidance.
Disclosure of Invention
The application aims to provide a method and a system for filtering dynamic data in laser radar data, which solve the technical problem in the prior art that dynamic obstacles appearing in the laser radar detection field of view cause anomalies during mapping, path planning and obstacle avoidance. By filtering the laser points generated by dynamic obstacles in the laser radar detection field of view, reliable detection of dynamic obstacles within the detection range is achieved; the degradation of positioning accuracy caused by abnormal noise points in the mapping process, the abnormal path planning caused by residual noise points during costmap refreshing, and the unsmooth, frequently stalling obstacle avoidance process are all avoided.
In view of the foregoing problems, embodiments of the present application provide a method and a system for filtering dynamic data in laser radar data.
In a first aspect, the present application provides a method for filtering dynamic data in laser radar data, where the method includes: acquiring a camera data stream and laser radar data; inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data; performing data classification based on the semantic identification data and determining dynamic semantic data; converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data; and inputting the chassis coordinate system data into a laser radar data filtering model for filtering to obtain filtered radar data, wherein the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out.
In another aspect, the present application further provides a system for filtering dynamic data in lidar data, configured to perform the method for filtering dynamic data in lidar data according to the first aspect, where the system includes:
the first obtaining unit is used for obtaining camera data stream and laser radar data;
the first execution unit is used for inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, and the first output result is semantic recognition data;
a first determination unit, configured to perform data classification based on the semantic identification data, and determine dynamic semantic data;
the second obtaining unit is used for converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data;
and the first filtering unit is used for inputting the chassis coordinate system data into a laser radar data filtering model for filtering processing to obtain filtered radar data, wherein the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out.
In a third aspect, the present application further provides a system for filtering dynamic data in lidar data, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method according to the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
1. Through the laser radar data filtering model, the laser points generated by dynamic obstacles in the laser radar data are filtered out, achieving the technical effects of removing the laser points generated by dynamic obstacles and reliably detecting dynamic obstacles within the detection range.
2. A first operation is performed based on the filtered radar data, where the first operation includes mapping, positioning and costmap generation. Because the interference of the laser points generated by dynamic obstacles has been filtered out, the reliability of the mapping, positioning and costmap generation operations is ensured, effectively avoiding the degradation of positioning accuracy caused by abnormal noise points in the mapping process and the abnormal path planning caused by residual noise points during costmap refreshing.
3. By adding a neural network model, the accuracy and speed of the operation are effectively improved, semantic recognition is performed quickly and reliably, and a strong guarantee is provided for accurate filtering.
The foregoing description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood so that they can be implemented according to the content of the description, and to make the above and other objects, features and advantages of the present application more comprehensible, a detailed description of the present application is given below.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the following drawings are only exemplary, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a method for filtering dynamic data in lidar data according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a lidar data filtering model in an embodiment of the application;
FIG. 3 is a schematic diagram of a laser spot in a lidar coordinate system in an embodiment of the application;
FIG. 4 is a schematic diagram of the conversion of all lidar data to a lidar coordinate system in an embodiment of the application;
FIG. 5 is a schematic structural diagram of a system for filtering dynamic data in lidar data according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a first executing unit 12, a first determining unit 13, a second obtaining unit 14, a first filtering unit 15, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiment of the application provides a method and a system for filtering dynamic data in laser radar data, which solve the technical problem in the prior art that dynamic obstacles appearing in the laser radar detection field of view cause anomalies during mapping, path planning and obstacle avoidance.
In the following, the technical solutions in the embodiments of the present application are described clearly and completely with reference to the accompanying drawings. It is to be understood that the described embodiments are only a part of the embodiments of the present application, not all of them, and that the present application is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application. It should be further noted that, for convenience of description, only the parts relevant to the present application are shown in the drawings.
The technical scheme provided by the application has the following general idea:
acquiring a camera data stream and laser radar data; inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data; performing data classification based on the semantic identification data and determining dynamic semantic data; converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data; and inputting the chassis coordinate system data into a laser radar data filtering model for filtering to obtain filtered radar data, the filtered radar data being radar data from which the data of dynamic obstacles have been filtered out. Filtering of the laser points generated by dynamic obstacles is thereby achieved, dynamic obstacles within the detection range are reliably detected, and the technical effects of avoiding the degradation of positioning accuracy caused by abnormal noise points in the mapping process, the abnormal path planning caused by residual noise points during costmap refreshing, and the unsmooth, frequently stalling obstacle avoidance process are achieved.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
Referring to fig. 1, an embodiment of the present application provides a method for filtering dynamic data in laser radar data, where the method includes:
step S100: acquiring a camera data stream and laser radar data;
specifically, the input data are lidar data, RGB data of the depth camera, and depth data thereof. RGB is the color representing the three channels of red, green and blue.
Step S200: inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data;
further, the step of inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data comprises the step of obtaining a training data set; training a neural network model through the training data set, wherein the training data set consists of a plurality of groups of data, each group of data comprises a camera data stream, laser radar data and identification information for identifying semantic identification data, and the semantic identification model is obtained; and inputting the camera data stream and the laser radar data as input data into the semantic recognition model to obtain the first output result.
Specifically, the camera data stream and the laser radar data are input into the semantic recognition model as input data for semantic recognition; objects in the environment are analyzed and recognized according to the data characteristics and output. The semantic recognition model is an AI model: a training set is constructed from a large amount of data and a neural network model is trained with it in a supervised learning process. Each group of data in the training data set comprises a camera data stream, laser radar data and the labelled recognition result. By learning the functional relationship between the camera data stream, the laser radar data and the recognition result over a large number of samples, the neural network model is continuously optimized until it converges, yielding the semantic recognition model; the trained model can then output a semantic recognition result given a camera data stream and laser radar data. During supervision, the output result is checked against the identification information of the training data: when the camera data stream and laser radar data of a training sample are passed through the semantic recognition model, if the output is the same as, or differs only slightly from, the labelled semantic recognition result, the model training is complete; if the results differ significantly, the loss function is computed and training continues until the output matches the labelled result. The semantic recognition model can therefore process the input data accurately and produce accurate semantic recognition results.
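By way of illustration, a minimal supervised training loop of the kind described above might be sketched as follows; the segmentation network, the dataset yielding fused camera/lidar tensors with semantic labels, and the cross-entropy loss are assumptions of the sketch, not specified by the embodiment.

```python
# Sketch of the supervised training described above (assumed: a generic
# segmentation network, a dataset of fused camera/lidar tensors with
# per-pixel/point semantic labels, cross-entropy loss).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_semantic_model(model: nn.Module, dataset, epochs: int = 10,
                         lr: float = 1e-3, device: str = "cuda"):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for epoch in range(epochs):
        for fused_input, labels in loader:     # fused camera + lidar features
            fused_input, labels = fused_input.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(fused_input)        # predicted semantic class scores
            loss = criterion(logits, labels)   # compare with the labelled result
            loss.backward()                    # optimize until convergence
            optimizer.step()
    return model
```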
Step S300: performing data classification based on the semantic identification data, and determining dynamic semantic data;
specifically, the data output by the semantic recognition model is classified dynamically and statically, wherein the dynamic state comprises objects which can move together, and the rest are defined as static objects.
Step S400: converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data;
further, the converting the coordinates of the dynamic semantic data in the camera coordinate system to the chassis coordinate system to obtain the chassis coordinate system data includes: according to the formula:
Figure BDA0003483798300000081
converting the coordinates of the dynamic semantic data under the camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data, wherein,
Figure BDA0003483798300000082
coordinate values, T, representing semantic data in the chassis coordinate systemB->CRepresenting the coordinate transformation of the camera coordinate system to the chassis coordinate system, PCAnd the coordinate values of the semantic data in the camera coordinate system are represented.
Specifically, the semantic data of the dynamic object are converted by the above formula from coordinates in the camera coordinate system into the base coordinate system, where the camera coordinate system is the coordinate system of the camera and the base coordinate system is the chassis coordinate system. Because a dynamic object has a certain volume, the converted data are a data box rather than point data, as shown in fig. 4. The range of the dynamic obstacle, i.e., the dynamic obstacle range described below, is determined from this data box.
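By way of illustration, the conversion of such a data box from the camera coordinate system into the chassis (base) coordinate system might be sketched as follows, assuming the 4x4 homogeneous transform $T_{B \to C}$ is available from extrinsic calibration:

```python
# Sketch: transforming the corners of a dynamic object's data box from the
# camera frame into the chassis (base) frame, P_B = T_{B->C} * P_C.
import numpy as np

def transform_points(T_base_camera: np.ndarray, points_cam: np.ndarray) -> np.ndarray:
    """T_base_camera: 4x4 homogeneous transform; points_cam: (N, 3) camera-frame points."""
    homo = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])  # (N, 4)
    return (T_base_camera @ homo.T).T[:, :3]

def box_to_base(T_base_camera: np.ndarray, box_corners_cam) -> np.ndarray:
    """Convert the eight corners of the object's data box into the chassis frame."""
    return transform_points(T_base_camera, np.asarray(box_corners_cam, dtype=float))
```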
Step S500: and inputting the chassis coordinate coefficient data into a laser radar data filtering model for filtering to obtain filtered radar data, wherein the filtered radar data is obtained by filtering data of the dynamic obstacle.
Further, inputting the chassis coordinate system data into a laser radar data filtering model for filtering processing to obtain filtered radar data includes: converting the chassis coordinate system data from the chassis coordinate system to the laser radar coordinate system and determining the dynamic obstacle range; converting the laser radar raw data from angle and amplitude data into coordinate values in the laser radar coordinate system; judging whether the coordinate value of each laser radar raw data point in the laser radar coordinate system lies within the dynamic obstacle range; if it lies within the dynamic obstacle range, performing a first distance assignment on the corresponding laser point, and traversing all laser radar raw data of a single frame to judge whether each point lies within the dynamic obstacle range; performing the first distance assignment on all laser points within the dynamic obstacle range; and obtaining a second output result of the laser radar data filtering model based on all the processed laser radar data, wherein the second output result comprises the filtered radar data.
Further, the first distance is a maximum distance value.
Further, the converting of the chassis coordinate system data from the chassis coordinate system to the lidar coordinate system includes: according to the formula

$$P_L = T_{L \to B} \cdot T_{B \to C} \cdot P_C$$

converting the chassis coordinate system data from the chassis coordinate system to the lidar coordinate system, wherein $P_L$ is the coordinate value of the semantic data in the laser radar coordinate system, $T_{L \to B}$ represents the coordinate transformation from the chassis coordinate system to the lidar coordinate system, $P_C$ represents the coordinate value of the semantic data in the camera coordinate system, and $T_{B \to C}$ represents the coordinate transformation from the camera coordinate system to the chassis coordinate system.
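Continuing the illustration, composing this transform with the camera-to-chassis transform expresses the dynamic obstacle box directly in the laser radar frame; the sketch below assumes both 4x4 transforms are known from calibration.

```python
# Sketch: chaining the transforms, P_L = T_{L->B} * T_{B->C} * P_C, and
# deriving an axis-aligned dynamic obstacle range in the lidar frame.
import numpy as np

def box_to_lidar(T_lidar_base: np.ndarray,
                 T_base_camera: np.ndarray,
                 box_corners_cam: np.ndarray) -> np.ndarray:
    """Return the data-box corners in the lidar frame as an (N, 3) array."""
    T_lidar_camera = T_lidar_base @ T_base_camera            # composed transform
    homo = np.hstack([box_corners_cam, np.ones((box_corners_cam.shape[0], 1))])
    return (T_lidar_camera @ homo.T).T[:, :3]

def obstacle_range(box_corners_lidar: np.ndarray):
    """Axis-aligned dynamic obstacle range (per-axis min and max) in the lidar frame."""
    return box_corners_lidar.min(axis=0), box_corners_lidar.max(axis=0)
```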
Further, the converting of the laser radar raw data from angle and amplitude data into coordinate values in the laser radar coordinate system includes: obtaining the laser radar raw data; determining, from the laser radar raw data, the amplitude data and angle data of the laser radar raw data in the laser radar coordinate system; and, based on the amplitude data and angle data, according to the formulas

$$x = \mathrm{Range} \cdot \cos\theta, \qquad y = \mathrm{Range} \cdot \sin\theta$$

converting the angle and amplitude data of the laser radar raw data into coordinate values in the laser radar coordinate system, wherein $x$ is the x-axis coordinate value in the converted coordinate system, $y$ is the y-axis coordinate value, $\theta$ is the angle between the x-axis and the line connecting the laser point with the origin (with the x-axis taken as 0 degrees), and $\mathrm{Range}$ is the distance from the origin to the laser point.
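By way of illustration, the polar-to-Cartesian conversion of a single-frame 2D scan might be sketched as follows:

```python
# Sketch: converting a 2D lidar scan from (angle, range) polar form into
# Cartesian coordinates in the lidar frame: x = Range*cos(theta), y = Range*sin(theta).
import numpy as np

def scan_to_xy(angles_rad: np.ndarray, ranges_m: np.ndarray) -> np.ndarray:
    """angles_rad, ranges_m: (N,) arrays; returns an (N, 2) array of (x, y) points."""
    x = ranges_m * np.cos(angles_rad)
    y = ranges_m * np.sin(angles_rad)
    return np.stack([x, y], axis=1)
```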
Specifically, the converted coordinate data are taken as input and filtered by the laser radar data filtering module; for the specific flow refer to the laser radar data filtering flow chart in fig. 2. First consider how a laser point behaves in the laser radar coordinate system (see fig. 3): the amplitude and angle data of the laser radar data are converted by the formulas $x = \mathrm{Range} \cdot \cos\theta$ and $y = \mathrm{Range} \cdot \sin\theta$ into coordinate values in the laser radar coordinate system; a schematic diagram after all laser radar data have been converted into the laser radar coordinate system is shown in fig. 4. It is then judged whether each laser point lies within the dynamic obstacle range, i.e., within the data box. If a laser point lies within the data box, it is regarded as a laser point generated by the dynamic obstacle, and its amplitude is assigned the maximum distance value, so that its interference is eliminated and the reliability of mapping and path planning is not affected by a dynamic obstacle appearing in the detection field of view of the 2D laser radar. The same judgment is made cyclically for every point in the laser radar raw data, so that all laser radar data of a single frame are processed and the laser points generated by dynamic obstacles are filtered out. This ensures the reliability of the data and solves the technical problem in the prior art that dynamic obstacles appearing in the laser radar detection field of view cause anomalies during mapping, path planning and obstacle avoidance. By filtering the laser points generated by dynamic obstacles in the laser radar detection field of view, reliable detection of dynamic obstacles within the detection range is achieved, the degradation of positioning accuracy caused by abnormal noise points in the mapping process is avoided, the abnormal path planning caused by residual noise points during costmap refreshing is avoided, and the unsmooth, frequently stalling obstacle avoidance process is avoided.
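Putting the steps together, the filtering described above might be sketched as follows; the maximum-range constant and the axis-aligned box test are assumptions of the sketch, not values fixed by the embodiment.

```python
# Sketch of the dynamic-data filter: every laser point that falls inside a
# dynamic obstacle box (in the lidar frame) has its range replaced by a
# maximum distance value, so it no longer disturbs mapping or planning.
import numpy as np

MAX_RANGE = 100.0  # assumed "first distance" (maximum distance value), in metres

def filter_scan(angles_rad, ranges_m, obstacle_boxes):
    """obstacle_boxes: list of ((xmin, ymin), (xmax, ymax)) rectangles in the lidar frame."""
    angles = np.asarray(angles_rad, dtype=float)
    ranges_out = np.asarray(ranges_m, dtype=float).copy()
    x = ranges_out * np.cos(angles)
    y = ranges_out * np.sin(angles)
    for (xmin, ymin), (xmax, ymax) in obstacle_boxes:
        inside = (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)
        ranges_out[inside] = MAX_RANGE   # first distance assignment
    return ranges_out                    # filtered radar data for this frame
```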
Further, the method further comprises: performing a first operation based on the filtered radar data, wherein the first operation comprises mapping, positioning and costmap generation.
Specifically, the filtered laser radar data are used as input only for the mapping, positioning and costmap generation processes; the original laser radar data are still used in the obstacle avoidance strategy. It should be understood that a costmap is a two- or three-dimensional map that the robot builds and updates from collected sensor information.
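As an illustration of this routing, the filtered scan can be handed to the mapping, positioning and costmap consumers while the raw scan is kept for obstacle avoidance; the callback names below are placeholders, not interfaces defined by the embodiment.

```python
# Sketch: routing the two scan streams. filter_scan is the function from the
# previous sketch; the four consumer callbacks are hypothetical placeholders.
def process_frame(angles_rad, ranges_raw, obstacle_boxes,
                  mapping_cb, localization_cb, costmap_cb, avoidance_cb):
    ranges_filtered = filter_scan(angles_rad, ranges_raw, obstacle_boxes)
    mapping_cb(angles_rad, ranges_filtered)       # mapping uses filtered data
    localization_cb(angles_rad, ranges_filtered)  # positioning uses filtered data
    costmap_cb(angles_rad, ranges_filtered)       # costmap refresh uses filtered data
    avoidance_cb(angles_rad, ranges_raw)          # obstacle avoidance keeps raw data
```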
In summary, the embodiment of the present application has the following technical effects:
1. Through the laser radar data filtering model, the laser points generated by dynamic obstacles in the laser radar data are filtered out, achieving the technical effects of removing the laser points generated by dynamic obstacles and reliably detecting dynamic obstacles within the detection range.
2. A first operation is performed based on the filtered radar data, where the first operation includes mapping, positioning and costmap generation. Because the interference of the laser points generated by dynamic obstacles has been filtered out, the reliability of the mapping, positioning and costmap generation operations is ensured, effectively avoiding the degradation of positioning accuracy caused by abnormal noise points in the mapping process and the abnormal path planning caused by residual noise points during costmap refreshing.
3. By adding a neural network model, the accuracy and speed of the operation are effectively improved, semantic recognition is performed quickly and reliably, and a strong guarantee is provided for accurate filtering.
Example two
Based on the method for filtering dynamic data in lidar data in the foregoing embodiment, the present invention also provides a system for filtering dynamic data in lidar data, referring to fig. 5, where the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain a camera data stream and laser radar data;
a first execution unit 12, where the first execution unit 12 is configured to input the camera data stream and the lidar data into a semantic recognition model, and obtain a first output result of the semantic recognition model, where the first output result is semantic recognition data;
a first determining unit 13, where the first determining unit 13 is configured to perform data classification based on the semantic identification data, and determine dynamic semantic data;
a second obtaining unit 14, where the second obtaining unit 14 is configured to convert the coordinates of the dynamic semantic data in the camera coordinate system to the chassis coordinate system, and obtain chassis coordinate system data;
and the first filtering unit 15 is configured to input the chassis coordinate system data into a laser radar data filtering model for filtering, so as to obtain filtered radar data, where the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out.
Further, the system further comprises:
the second determining unit is used for converting the chassis coordinate system data from a chassis coordinate system to a laser radar coordinate system and determining the range of the dynamic obstacle;
the second execution unit is used for converting the angle and amplitude data of the laser radar raw data into coordinate values in the laser radar coordinate system;
the first judgment unit is used for judging whether the coordinate value of each laser radar raw data point in the laser radar coordinate system is within the dynamic obstacle range;
the third execution unit is used for performing the first distance assignment on the corresponding laser point if it is within the dynamic obstacle range, and traversing all laser radar raw data of a single frame to judge whether each point is within the dynamic obstacle range;
a fourth execution unit, configured to perform the first distance assignment on all laser points in the dynamic obstacle range;
a third obtaining unit, configured to obtain a second output result of the laser radar data filtering model based on all processed laser radar data, where the second output result includes the filtered radar data.
Further, the first distance is a maximum distance value.
Further, the system further comprises:
a fifth execution unit, configured to perform a first operation based on the filtered radar data, where the first operation includes mapping, positioning, and generating a costmap.
Further, the system further comprises:
a sixth execution unit, configured to, according to the formula

$$P_B = T_{B \to C} \cdot P_C$$

convert the coordinates of the dynamic semantic data under the camera coordinate system into the chassis coordinate system to obtain the chassis coordinate system data, wherein $P_B$ represents the coordinate values of the semantic data in the chassis coordinate system, $T_{B \to C}$ represents the coordinate transformation from the camera coordinate system to the chassis coordinate system, and $P_C$ represents the coordinate values of the semantic data in the camera coordinate system.
Further, the system further comprises:
a seventh execution unit, configured to, according to the formula

$$P_L = T_{L \to B} \cdot T_{B \to C} \cdot P_C$$

convert the chassis coordinate system data from the chassis coordinate system to the lidar coordinate system, wherein $P_L$ is the coordinate value of the semantic data in the laser radar coordinate system, $T_{L \to B}$ represents the coordinate transformation from the chassis coordinate system to the lidar coordinate system, $P_C$ represents the coordinate value of the semantic data in the camera coordinate system, and $T_{B \to C}$ represents the coordinate transformation from the camera coordinate system to the chassis coordinate system.
Further, the system further comprises:
a fourth obtaining unit, configured to obtain raw data of the laser radar;
the third determining unit is used for determining amplitude data and angle data of the laser radar original data in a laser radar coordinate system according to the laser radar original data;
an eighth execution unit, configured to, based on the amplitude data and the angle data, according to the formulas

$$x = \mathrm{Range} \cdot \cos\theta, \qquad y = \mathrm{Range} \cdot \sin\theta$$

convert the angle and amplitude data of the laser radar raw data into coordinate values in the laser radar coordinate system, wherein $x$ is the x-axis coordinate value in the converted coordinate system, $y$ is the y-axis coordinate value, $\theta$ is the angle between the x-axis and the line connecting the laser point with the origin (with the x-axis taken as 0 degrees), and $\mathrm{Range}$ is the distance from the origin to the laser point.
Further, the system further comprises:
a fifth obtaining unit for obtaining a training data set;
a sixth obtaining unit, configured to train a neural network model with the training data set to obtain the semantic recognition model, where the training data set is composed of multiple groups of data, each group comprising a camera data stream, laser radar data, and identification information labelling the semantic identification data;
and the seventh obtaining unit is used for inputting the camera data stream and the laser radar data as input data into the semantic recognition model to obtain the first output result.
In the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on its differences from the other embodiments. The method for filtering dynamic data in lidar data of the first embodiment in fig. 1 and its specific examples are also applicable to the system for filtering dynamic data in lidar data of the present embodiment; through the foregoing detailed description of the filtering method, those skilled in the art can clearly understand the filtering system of the present embodiment, so for brevity it is not described in detail here. Since the device disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively simple, and reference may be made to the description of the method for relevant points.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Exemplary electronic device
The electronic apparatus of the embodiment of the present application is described below with reference to fig. 6.
Fig. 6 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of the method for filtering dynamic data in lidar data in the foregoing embodiments, the present invention further provides a system for filtering dynamic data in lidar data, in which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any one of the methods for filtering dynamic data in lidar data described above.
In fig. 6, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a unit for communicating with various other apparatus over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
In summary, one or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the application provides a method and a system for filtering dynamic data in laser radar data, which are implemented by acquiring camera data stream and laser radar data; inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data; performing data classification based on the semantic identification data, and determining dynamic semantic data; converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data; and inputting the chassis coordinate coefficient data into a laser radar data filtering model for filtering to obtain filtered radar data, wherein the filtered radar data is obtained by filtering data of the dynamic obstacle. The dynamic obstacle detection method has the advantages that the filtering processing is carried out on the laser points generated by the dynamic obstacle in the laser radar detection visual field, the reliable detection of the dynamic obstacle in the detection range is realized, the influence on the positioning precision caused by abnormal noise points in the image building process due to the laser points generated by the dynamic obstacle is avoided, the abnormal path planning caused by the residual noise points in the costmap refreshing process is avoided, and the unsmooth and frequent jamming occurs in the machine obstacle avoiding process. Therefore, the technical problems that in the prior art, dynamic obstacles appear in a laser radar detection visual field, and abnormity occurs in the process of building a map path and avoiding obstacles are solved.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, the present application may take the form of an entirely software embodiment, an entirely hardware embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Such computer-usable storage media include, but are not limited to: media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk memory, a Compact Disc Read-Only Memory (CD-ROM), and an optical memory.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts.
It will be apparent to those skilled in the art that various changes and modifications may be made to the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A method for filtering dynamic data in laser radar data, the method comprising:
acquiring a camera data stream and laser radar data;
inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, wherein the first output result is semantic recognition data;
performing data classification based on the semantic identification data, and determining dynamic semantic data;
converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data;
and inputting the chassis coordinate system data into a laser radar data filtering model for filtering to obtain filtered radar data, wherein the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out.
2. The method of claim 1, wherein inputting the chassis coordinate system data into a lidar data filtering model for filtering to obtain filtered radar data comprises:
converting the chassis coordinate system data from a chassis coordinate system to a laser radar coordinate system, and determining the range of the dynamic obstacle;
converting the angle and amplitude data of the laser radar raw data into coordinate values in the laser radar coordinate system;
judging whether the coordinate value of the laser radar original data in a laser radar coordinate system is in the range of the dynamic obstacle or not;
if it is within the range of the dynamic obstacle, performing a first distance assignment on the corresponding laser point, and traversing all laser radar raw data of a single frame to judge whether each point is within the range of the dynamic obstacle;
performing the first distance assignment on all laser points in the range of the dynamic obstacle;
and obtaining a second output result of the laser radar data filtering model based on all the processed laser radar data, wherein the second output result comprises the filtered radar data.
3. The method of claim 2, wherein the first distance is a maximum distance value.
4. The method of claim 2, wherein the method further comprises:
and performing a first operation based on the filtered radar data, wherein the first operation comprises mapping, positioning and costmap generation.
5. The method of claim 1, wherein the converting of the coordinates of the dynamic semantic data under the camera coordinate system into the chassis coordinate system to obtain chassis coordinate system data comprises:
according to the formula:

$$P_B = T_{B \to C} \cdot P_C$$

converting the coordinates of the dynamic semantic data under the camera coordinate system into the chassis coordinate system to obtain chassis coordinate system data, wherein $P_B$ represents the coordinate values of the semantic data in the chassis coordinate system, $T_{B \to C}$ represents the coordinate transformation from the camera coordinate system to the chassis coordinate system, and $P_C$ represents the coordinate values of the semantic data in the camera coordinate system.
6. The method of claim 2, wherein said converting the chassis coordinate system data from a chassis coordinate system to a lidar coordinate system comprises:
according to the formula:

$$P_L = T_{L \to B} \cdot T_{B \to C} \cdot P_C$$

converting the chassis coordinate system data from the chassis coordinate system to the lidar coordinate system, wherein $P_L$ is the coordinate value of the semantic data in the laser radar coordinate system, $T_{L \to B}$ represents the coordinate transformation from the chassis coordinate system to the lidar coordinate system, $P_C$ represents the coordinate value of the semantic data in the camera coordinate system, and $T_{B \to C}$ represents the coordinate transformation from the camera coordinate system to the chassis coordinate system.
7. The method of claim 2, wherein converting the angle and amplitude data of the lidar raw data into coordinate values in the lidar coordinate system comprises:
obtaining laser radar original data;
according to the laser radar original data, determining amplitude data and angle data of the laser radar original data in a laser radar coordinate system;
based on the amplitude data and angle data, according to the formulas:

$$x = \mathrm{Range} \cdot \cos\theta, \qquad y = \mathrm{Range} \cdot \sin\theta$$

converting the angle and amplitude data of the laser radar raw data into coordinate values in the laser radar coordinate system, wherein $x$ is the x-axis coordinate value in the converted coordinate system, $y$ is the y-axis coordinate value, $\theta$ is the angle between the x-axis and the line connecting the laser point with the origin, and $\mathrm{Range}$ is the distance from the origin to the laser point.
8. The method of claim 1, wherein the inputting the camera data stream and the lidar data into a semantic recognition model to obtain a first output result of the semantic recognition model, the first output result being semantic recognition data, comprises:
obtaining a training data set;
training a neural network model through the training data set to obtain the semantic recognition model, wherein the training data set consists of a plurality of groups of data, each group of data comprising a camera data stream, laser radar data and identification information labelling the semantic identification data;
and inputting the camera data stream and the laser radar data as input data into the semantic recognition model to obtain the first output result.
9. A system for filtering dynamic data in lidar data, the system comprising:
the first obtaining unit is used for obtaining camera data stream and laser radar data;
the first execution unit is used for inputting the camera data stream and the laser radar data into a semantic recognition model to obtain a first output result of the semantic recognition model, and the first output result is semantic recognition data;
a first determination unit, configured to perform data classification based on the semantic identification data, and determine dynamic semantic data;
the second obtaining unit is used for converting the coordinates of the dynamic semantic data in a camera coordinate system into a chassis coordinate system to obtain chassis coordinate system data;
and the first filtering unit is used for inputting the chassis coordinate system data into a laser radar data filtering model for filtering processing to obtain filtered radar data, wherein the filtered radar data is radar data from which the data of dynamic obstacles have been filtered out.
10. A system for filtering dynamic data in lidar data, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1-8 when executing the program.
CN202210075526.0A 2022-01-22 2022-01-22 Method and system for filtering dynamic data in laser radar data Pending CN114397638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210075526.0A CN114397638A (en) 2022-01-22 2022-01-22 Method and system for filtering dynamic data in laser radar data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210075526.0A CN114397638A (en) 2022-01-22 2022-01-22 Method and system for filtering dynamic data in laser radar data

Publications (1)

Publication Number Publication Date
CN114397638A true CN114397638A (en) 2022-04-26

Family

ID=81232402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210075526.0A Pending CN114397638A (en) 2022-01-22 2022-01-22 Method and system for filtering dynamic data in laser radar data

Country Status (1)

Country Link
CN (1) CN114397638A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458112A (en) * 2019-08-14 2019-11-15 上海眼控科技股份有限公司 Vehicle checking method, device, computer equipment and readable storage medium storing program for executing
CN111105495A (en) * 2019-11-26 2020-05-05 四川阿泰因机器人智能装备有限公司 Laser radar mapping method and system fusing visual semantic information
CN111652179A (en) * 2020-06-15 2020-09-11 东风汽车股份有限公司 Semantic high-precision map construction and positioning method based on dotted line feature fusion laser
CN111860425A (en) * 2020-07-30 2020-10-30 清华大学 Deep multi-mode cross-layer cross fusion method, terminal device and storage medium
CN112859873A (en) * 2021-01-25 2021-05-28 山东亚历山大智能科技有限公司 Semantic laser-based mobile robot multi-stage obstacle avoidance system and method
CN113312983A (en) * 2021-05-08 2021-08-27 华南理工大学 Semantic segmentation method, system, device and medium based on multi-modal data fusion
CN113804182A (en) * 2021-09-16 2021-12-17 重庆邮电大学 Grid map creating method based on information fusion

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935132A (en) * 2024-03-22 2024-04-26 深圳市天兴诚科技有限公司 Data processing system for identifying stacked goods by utilizing laser radar
CN117935132B (en) * 2024-03-22 2024-06-04 深圳市天兴诚科技有限公司 Data processing system for identifying stacked goods by utilizing laser radar

Similar Documents

Publication Publication Date Title
WO2021208143A1 (en) Method and system for planning and sampling mobile robot path in human-machine integration environment
CN112837371A (en) Object grabbing method and device based on 3D matching and computing equipment
EP3620964A1 (en) Lane line processing method and device
US8385649B2 (en) Information processing apparatus and method for detecting object in image data
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
CN112200884B (en) Lane line generation method and device
EP3620945A1 (en) Obstacle distribution simulation method, device and terminal based on multiple models
CN111259710B (en) Parking space structure detection model training method adopting parking space frame lines and end points
CN114397638A (en) Method and system for filtering dynamic data in laser radar data
US20200088536A1 (en) Method for trajectory planning of a movable object
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN111968102B (en) Target equipment detection method, system, medium and electronic terminal
CN115236696B (en) Method and device for determining obstacle, electronic equipment and storage medium
CN112381034A (en) Lane line detection method, device, equipment and storage medium
CN112308917A (en) Vision-based mobile robot positioning method
CN116664829A (en) RGB-T semantic segmentation method, system, device and storage medium
CN104408720A (en) Image processing method and device
CN113065537B (en) OCR file format conversion method and system based on model optimization
CN114299533A (en) Power grid wiring diagram element and line identification system and method based on artificial intelligence
Sun et al. The study on intelligent vehicle collision-avoidance system with vision perception and fuzzy decision making
CN112966565A (en) Object detection method and device, terminal equipment and storage medium
CN113191279A (en) Data annotation method, device, equipment, storage medium and computer program product
CN111598078A (en) Object detection method and system based on sequence optimization
EP4134870A1 (en) A method for assessment of a neural network suitable for perception of an environment of a vehicle with regard to the safe use, an assistance system, a vehicle and a computer program product
CN114295640B (en) Container weld joint identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination