CN111982124B - Deep learning-based three-dimensional laser radar navigation method and device in glass scene - Google Patents

Deep learning-based three-dimensional laser radar navigation method and device in glass scene

Info

Publication number
CN111982124B
CN111982124B (application CN202010881244.0A)
Authority
CN
China
Prior art keywords
glass
neural network
deep neural
radar
laser beam
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010881244.0A
Other languages
Chinese (zh)
Other versions
CN111982124A (en)
Inventor
孟杰
王书亭
谢远龙
蒋立泉
吴天豪
孙浩东
李鹏程
林鑫
刘伦洪
吴昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202010881244.0A priority Critical patent/CN111982124B/en
Publication of CN111982124A publication Critical patent/CN111982124A/en
Application granted granted Critical
Publication of CN111982124B publication Critical patent/CN111982124B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/46 Indirect determination of position data
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a deep learning-based three-dimensional laser radar navigation method and device for glass scenes. The navigation method first performs offline training to obtain a glass recognition deep neural network and an optical characteristic deep neural network. During navigation, the glass probability identified by the glass recognition deep neural network and the computed glass direction are inserted into a spatial voxel map in real time. In this spatial voxel map containing glass probabilities and glass normal vectors, the laser beams emitted by the laser radar are simulated with a ray casting method, and radar points are screened according to the probability, given by the optical characteristic deep neural network, that a laser beam transmits through the glass, so as to generate a simulated point cloud set. Finally, the absolute pose of the robot in the spatial voxel map is calculated with the normal distribution transformation from the real three-dimensional radar data and the simulated point cloud set. The navigation method avoids the perception confusion of laser-radar-based positioning and improves the three-dimensional navigation accuracy of a mobile robot in glass scenes.

Description

Deep learning-based three-dimensional laser radar navigation method and device in glass scene
Technical Field
The invention belongs to the field of mobile robot navigation and relates to a deep learning-based three-dimensional laser radar navigation method and device for glass scenes, and in particular to a deep-neural-network-based three-dimensional laser radar navigation method for mobile robots in glass scenes.
Background
Navigation based on the laser radar is one of the most reliable and accurate ways to acquire the pose of a robot. In recent years, glass walls have become a major element of the buildings in which mobile robots navigate, such as museums and shopping centers. However, unlike opaque objects, glass has reflective properties that can make lidar sensing ambiguous or erroneous, so lidar-based positioning is difficult to apply in scenes with glass walls. To improve the positioning accuracy of robots under various environmental conditions, a reliable lidar-based positioning method must be developed for glass scenes to achieve high-precision positioning.
For robot positioning, mainstream navigation methods usually rely on an existing map and obtain the robot pose by matching sensor measurements against that map. Mapping and localization, however, are closely tied to the external environment, and their performance may degrade severely in a glass-wall scene. In practice, achieving high-precision navigation in a glass environment requires considering both the accuracy of the map construction and the reliability of the positioning algorithm.
Consider first the accuracy of the map construction. Because of the complex optical characteristics of glass, current mainstream lidar-based mapping methods cannot accurately and reliably incorporate glass information into the map, which seriously degrades map-based positioning in the presence of glass. In these methods, glass walls are treated as ordinary obstacles in ordinary maps, but the difference between the reflective characteristics of glass and those of ordinary obstacles can cause mismatches between the lidar data and the constructed map. A deep neural network, by learning from large amounts of lidar data, can identify glass accurately and thus help the navigation algorithm exploit glass position information to improve positioning accuracy and robustness.
Another key issue is the design of the positioning algorithm. For improving positioning accuracy, the normal distribution transformation is regarded as an effective solution because of its high accuracy and reliability. However, owing to the perception confusion caused by glass, the normal distribution transformation has difficulty matching lidar data to the map in a glass environment, and existing localization systems do not effectively obtain glass information from voxel maps, which degrades localization performance.
In summary, achieving precise laser-radar-based robot navigation in glass scenes is a difficult problem that urgently needs to be solved.
Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the invention provides a three-dimensional laser radar navigation method and device for a mobile robot in a glass scene. Its aim is to provide accurate matching point cloud data for the normal distribution transformation by constructing an optical characteristic deep neural network and to effectively integrate glass information into the positioning optimization, thereby solving the technical problem that existing laser-radar-based robot navigation is inaccurate in glass scenes and meeting the practical need for accurate navigation in glass environments.
In order to achieve the above object, according to an aspect of the present invention, there is provided a method for three-dimensional lidar navigation in a glass scene based on deep learning, comprising the steps of:
an off-line training stage:
s1, building a first deep neural network, and taking the rotation angle, the pitching angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by scanning a three-dimensional laser radar as input data of the first deep neural network; the probability of whether each radar point is glass or not is used as output data of the first deep neural network, and the glass recognition deep neural network is obtained through training;
building a second deep neural network, and taking the rotation angle, the pitch angle and the length of the laser beam as input data of the second deep neural network; training to obtain an optical characteristic deep neural network by taking the probability that each laser beam meets glass and is directly reflected to generate a radar point and the probability of transmitting the glass under the conditions of different lengths, rotation angles and pitching angles as output data of a second deep neural network;
an online navigation stage:
s2, establishing a spatial voxel map under a glass environment, inputting the rotation angle, the pitch angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by real-time scanning of a three-dimensional laser radar into a trained glass recognition deep neural network, recognizing the probability of whether each radar point is glass, then calculating the direction of the glass where each radar point is located according to the coordinates of the radar points, and inserting the probability of each radar point being glass and the normal vector of the glass into a corresponding voxel on the spatial voxel map in real time;
s3, on the space voxel map with the glass probability and the glass direction information obtained in the step S2, simulating the rotation angle, the pitching angle and the length of a laser beam emitted by a laser radar when the robot is in an initial pose by adopting a ray projection method, inputting the laser beam into an optical characteristic deep neural network to obtain the probability that the laser beam transmits through the glass, and adding a radar point generated by the reflected laser beam into the point cloud so as to generate a simulated point cloud set;
and S4, calculating the absolute pose of the robot in the space voxel map with the glass probability and the glass direction information obtained in the step S2 by adopting normal distribution transformation based on the real three-dimensional radar data and the simulated point cloud set generated in the step S3.
Further, in the step S1, the input data processing and network structure of the first deep neural network and the second deep neural network are set up as follows:
S1.1, in multiple glass scenes, collecting multiple groups of laser radar point cloud data with the three-dimensional laser radar at different yaw angles and different distances from the glass as training data for the first and second deep neural networks, and adding artificial glass labels identifying which parts of the three-dimensional laser radar point cloud hit the glass; for the second deep neural network, radar points generated by direct reflection at the glass and radar points generated by reflection from obstacles behind the glass after the beam transmits through it must also be labelled separately;
s1.2, constructing a first deep neural network and a second deep neural network, wherein the first deep neural network and the second deep neural network comprise an input layer, a hidden layer and an output layer which are sequentially connected, and the hidden layer is a full-connection layer, wherein:
the input layer of the first deep neural network is used for inputting the rotation angle, the pitching angle, the length and the intensity of the laser beam corresponding to each radar point; the input layer of the second deep neural network is used for inputting the rotating angle, the pitching angle and the length of the laser beam corresponding to each radar point;
the forward propagation of the hidden layers of both the first and the second deep neural network is a^l = σ(z^l) = σ(w^l a^(l-1) + b^l), where w^l and b^l are the weight coefficients and bias of the neurons, the superscript l denotes the l-th hidden layer, a^l is the output of the l-th hidden layer, and σ(·) is the activation function;
the output layers of the first and the second deep neural network both use the softmax activation function; the activation of the n-th neuron of the output layer is defined as follows:

$$BP_n = \frac{\exp(z_n^L)}{\sum_{j=1}^{2} \exp(z_j^L)}$$

where n = 1, 2 and BP_1 + BP_2 = 1, exp(·) is the exponential function, $\sum_{j=1}^{2} \exp(z_j^L)$ is a normalization factor, and $z_n^L$ is the n-th output of the last hidden layer. In the output layer of the first deep neural network, BP_1 represents the probability that the radar point is glass and BP_2 the probability that it is not glass; in the output layer of the second deep neural network, BP_1 represents the probability that the laser beam meets glass and is directly reflected to generate a radar point, and BP_2 the probability that the laser beam transmits through the glass.
Further, the first deep neural network and the second deep neural network are updated iteratively according to the following steps:
S1.2.1, through dropout regularization, dividing the training data into several batches for each gradient descent iteration and then iterating batch by batch; for each batch, neurons of part of the hidden layers are randomly removed from the original deep neural network model, and the resulting incomplete model is used to iteratively update w^l and b^l; after each batch has been used for an update, the incomplete model is restored to the complete deep neural network model before the next batch is processed;
S1.2.2, the back propagation algorithm uses mini-batch gradient descent, in which the update formulas for w^l and b^l of the l-th layer are, respectively,

$$w^l \leftarrow w^l - \frac{\alpha}{M} \sum_{m=1}^{M} \delta^{m,l} \, (a^{m,l-1})^{T}$$

$$b^l \leftarrow b^l - \frac{\alpha}{M} \sum_{m=1}^{M} \delta^{m,l}$$

where M is the number of training samples in the batch, α is the iteration step size, a^(m,l-1) is the output of layer l-1 with the m-th training sample as input, and δ^(m,l) is the gradient at layer l with the m-th training sample as input;
S1.2.3, in forward propagation, the rotation angle, pitch angle, length and intensity of each radar point pass through the input layer, are processed layer by layer by the hidden layers, and are propagated to the output layer;
S1.2.4, recording the number of backward iterations W; when the output-layer result does not match the expected output, taking the cross-entropy loss function as the objective function and setting an error threshold W_C/W, where W_C is the average error of the first backward iteration; as the number of iterations increases the error threshold W_C/W decreases, and any sample whose error is greater than the error threshold W_C/W no longer participates in the optimization of w^l and b^l, thereby filtering out abnormal labels;
S1.2.5, iteratively optimizing the loss function with the Gauss-Newton method to find the weight matrices w^l and biases b^l that minimize the objective function; when the error reaches the expected value, network learning ends.
Further, the step S2 includes the following sub-steps:
S2.1, adopting a graph optimization method and the three-dimensional laser radar to construct a spatial voxel map of the glass environment, $M_B = \{m_{B,1}, m_{B,2}, \ldots, m_{B,Q}\}$, where the i-th voxel is m_{B,i} = (Z_i, B_i), Q is the total number of voxels m_{B,i} in the spatial voxel map M_B, Z_i represents the probability that voxel m_{B,i} is occupied by an obstacle, and B_i represents the probability that voxel m_{B,i} is glass; B_i is unknown at this step and is obtained by the glass recognition deep neural network in step S2.2;
S2.2, inputting the rotation angle, pitch angle, length and intensity of the laser beam corresponding to each radar point into the glass recognition deep neural network to obtain the probability that each radar point is glass, and assigning the identified glass probability in real time to B_i of the corresponding voxel in the spatial voxel map M_B of step S2.1;
S2.3, calculating a normal vector of the glass wall in the space voxel map obtained in the step S2.2;
S2.3.1, among the voxels of the glass-labelled spatial voxel map whose glass probability BP_1 exceeds a preset confidence level, sampling three voxels with a random sample consensus algorithm to construct a hypothetical glass plane equation Bx + Cy + Dz + F = 0, where B, C, D and F are the fitting parameters of the hypothetical glass plane equation;
S2.3.2, calculating the distance J_i between every voxel whose glass probability exceeds the preset confidence level and the hypothetical glass plane,

$$J_i = \frac{|B x_i + C y_i + D z_i + F|}{\sqrt{B^2 + C^2 + D^2}}$$

if J_i < J_T, where J_T is a preset threshold, voxel m_{B,i} is a valid voxel of the hypothetical glass plane; x_i, y_i and z_i are the coordinates of voxel m_{B,i} in the spatial voxel map coordinate system;
S2.3.3, looping over S2.3.2 until all voxels whose glass probability exceeds the preset confidence level have been traversed; if the number of valid voxels of the hypothetical glass plane is greater than a preset value, the hypothetical glass plane is considered a real glass plane, and its normal vector is computed as $\vec{n} = (B, C, D)/\sqrt{B^2 + C^2 + D^2}$; each glass voxel of the real glass plane is then represented together with this normal vector, i.e. $m_{BF,i} = (Z_i, B_i, \vec{n})$;
S2.3.4, looping over S2.3.2 and S2.3.3 to detect all glass planes in the glass-labelled spatial voxel map, stopping once every voxel with glass probability has been assigned to a real glass plane or the maximum number of iterations is reached, thereby obtaining the spatial voxel map with glass normal vectors and glass positions $M_{BF} = \{m_{BF,1}, m_{BF,2}, \ldots, m_{BF,Q}\}$.
Further, step S3 simulates, on the basis of the spatial voxel map M_BF, the three-dimensional point cloud that the laser radar would obtain with the robot at its initial pose, using a ray casting method that considers optical characteristics, and specifically comprises the following sub-steps:
S3.1, initializing the parameters of the ray casting method considering optical characteristics, including:
the spatial pose X = (x, y, z, b, c, d) of the three-dimensional laser radar in the spatial voxel coordinate system, whose value is given by the global positioning of the three-dimensional laser radar in that coordinate system, where b, c and d are the roll, pitch and yaw angles of the laser radar and (x, y, z) are its spatial coordinates;
the threshold T_o on the probability Z_i that a voxel is occupied by an obstacle;
the confidence T_B on the glass probability B_i;
the number of rotational scans N_s;
the number of laser beam lines S of the three-dimensional laser radar;
the measuring range R_max;
the spatial voxel map M_BF with glass positions and glass normal vectors;
the rotation angle increment A_Z = 360/(N_s - 1);
the pitch angle increment Y_Z = C_z;
the cumulative rotation angle A = 0;
the cumulative pitch angle Y = 0;
the end coordinates (rx_k, ry_k, rz_k) of the laser beam generated by the k-th scan;
the simulated point cloud set M_N;
the maximum step size E of the ray length increment;
the spatial transformation relation T between the three-dimensional laser radar and the robot;
S3.2, according to X, Y and A, computing by spatial coordinate transformation the rotation angle γ and pitch angle κ of the k-th scan of the n_s-th laser beam, and computing, according to the set maximum step size E, the length increment (Δx_k, Δy_k, Δz_k) of the laser beam along the emission direction of the k-th scan;
S3.3, (rx_k, ry_k, rz_k) = (rx_k, ry_k, rz_k) + (Δx_k, Δy_k, Δz_k);
S3.4, calculating the laser beam length $rl = \sqrt{(rx_k - x)^2 + (ry_k - y)^2 + (rz_k - z)^2}$;
S3.5, if the current laser beam length rl > R_max, the laser beam generates no radar point; go to step S3.6. If rl ≤ R_max, the voxel corresponding to the beam end coordinates (rx_k, ry_k, rz_k) is processed as follows:
Case 1: if the voxel at the current beam end has B_i ≤ T_B and Z_i ≤ T_o, the voxel is not occupied by any obstacle; the laser continues forward, return to step S3.3;
Case 2: if the voxel at the current beam end has B_i ≤ T_B and Z_i > T_o, the voxel is an ordinary obstacle; the laser beam generates a radar point there, which is inserted into M_N, and the process goes to step S3.6;
Case 3: if the voxel at the current beam end has B_i > T_B, the voxel is glass; input the rotation angle γ, pitch angle κ and beam length rl of the current laser beam into the optical characteristic deep neural network to obtain the probability that the beam transmits through the glass, and decide transmission or reflection according to this probability: if transmitted, the laser continues forward and the process returns to step S3.3; if reflected, a radar point is generated and inserted into M_N, and the process goes to step S3.6;
S3.6, let k = k + 1, reset the laser beam length rl ← 0, accumulate the rotation angle A = A + A_Z, and return to step S3.2; loop until k = N_s, then go to step S3.7;
S3.7, let n_s = n_s + 1, reset the rotation angle A ← 0, accumulate the pitch angle Y = Y + Y_Z, and repeat steps S3.2 to S3.6; loop until n_s = S, then end the loop and obtain the accurate simulated point cloud set M_N;
S3.8, calculate the simulated point cloud set in the robot coordinate system, M_T = M_N · T.
Further, the step S4 specifically includes:
simulation point cloud set M based on real three-dimensional radar data and step S3.8TCalculating the space voxel map M with glass probability and glass direction information obtained by the robot in step S2.3.4 by adopting normal distribution transformationBFAbsolute pose in (1).
According to another aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method for three-dimensional lidar navigation in a glass scene based on deep learning according to any of the previous items.
According to another aspect of the present invention, there is provided a three-dimensional lidar navigation device for a mobile robot in a glass scene, comprising the computer-readable storage medium as described above and a processor for invoking and processing the computer program stored in the computer-readable storage medium.
In general, compared with the prior art, the above technical solution conceived by the present invention can achieve the following beneficial effects:
1. The invention provides a novel spatial voxel map with glass positions and normal vectors, which reflects not only the position information of the glass in the environment but also the glass direction information extracted through plane fitting. The probability that a radar point is glass is identified by the glass recognition deep neural network, and this probability and the identified glass normal vector are inserted into the spatial voxel map, which solves the problem that the reflective characteristics of glass prevent glass information from being represented accurately in a conventional map. On the basis of this spatial voxel map with glass probability and glass direction information, a simulated point cloud set is generated by using the optical characteristic deep neural network to identify the laser transmission probability, which provides accurate matching point cloud data for the normal distribution transformation and effectively integrates the glass information into the positioning optimization, thereby solving the technical problem that existing laser-radar-based robot navigation is inaccurate in glass scenes.
2. Through the design of the specific deep neural network structure, transfer function and offline training method, the resulting glass recognition deep neural network is robust to erroneous glass labels, and the optical characteristic deep neural network is robust in identifying the transmission and reflection probabilities of the laser beams.
3. The spatial voxel map is created by identifying obstacles and glass step by step, and the insertion of glass information into the spatial voxel map is completed by combining the identification of valid voxels and real glass planes, so that the position and direction of the glass are identified accurately.
4. On the basis of the spatial voxel map containing glass information, the projection of the laser beams and the acquisition of radar points are simulated with a ray casting method whose parameters take the optical characteristics into account, and the radar points are screened with the transmission recognition results of the optical characteristic deep neural network for the laser beams, so that a more accurate simulated point cloud set is obtained and the positioning accuracy of the robot is greatly improved.
5. The navigation method provided by the invention makes full use of the glass position and direction information to improve positioning accuracy and robustness: it generates an accurate point cloud from the novel occupancy grid map with glass information using improved ray casting, and then matches the actual laser radar data against this accurate point cloud with the normal distribution transformation, thereby resolving the confused or erroneous perception of the laser radar and greatly improving the positioning accuracy of the robot in glass scenes.
Drawings
Fig. 1 is a flowchart of a three-dimensional lidar navigation method for a mobile robot in a glass scene based on a deep neural network according to a preferred embodiment of the invention.
Fig. 2 is a glass identification deep neural network structure constructed according to the preferred embodiment of the present invention.
FIG. 3 is an optical characteristic deep neural network structure constructed by the preferred embodiment of the present invention.
FIG. 4 is the distribution of the radar point cloud when the three-dimensional lidar illuminates glass and obstacles in the preferred embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the respective embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 shows the implementation flow of the whole navigation algorithm of the invention: in the offline training stage, the glass recognition deep neural network and the optical characteristic deep neural network are built and trained; in the online navigation stage, a spatial voxel map of the glass environment is built, and the identified glass probability and the computed glass direction are inserted in real time into the corresponding voxels of the spatial voxel map; then a ray casting method, combined with the optical characteristics of the three-dimensional laser beams and the glass, is used to simulate the three-dimensional point cloud that the laser radar would obtain with the robot at its initial pose; finally, the absolute pose of the robot in the glass-labelled spatial voxel map is calculated with the normal distribution transformation.
The following describes the three-dimensional lidar navigation method based on deep learning in a glass scene in more detail by using a more specific practical operation case with reference to fig. 1 to 4.
As shown in fig. 1, a preferred three-dimensional lidar navigation method based on a deep neural network under a glass scene in the invention includes the following steps:
an off-line training stage:
S1, building a first and a second deep neural network and obtaining them by training. Both networks comprise an input layer, hidden layers and an output layer; the hidden part has three layers with 10 neurons each. The number of hidden layers and the number of neurons can be adjusted empirically; this embodiment provides a network structure that balances computational efficiency and accuracy. Specifically, the method comprises the following steps:
building a first deep neural network, and taking the rotation angle, the pitching angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by scanning a three-dimensional laser radar as input data of the first deep neural network; and training to obtain the glass recognition deep neural network by taking the probability of whether each radar point is glass as output data of the first deep neural network.
Building a second deep neural network, and taking the rotation angle, the pitch angle and the length of the laser beam as input data of the second deep neural network; and (3) taking the probability that each laser beam meets the glass under the conditions of different lengths, rotation angles and pitching angles to generate a radar point and the probability of transmitting the glass as output data of the second deep neural network, and training to obtain the optical characteristic deep neural network.
An online navigation stage:
s2, establishing a spatial voxel map under a glass environment, inputting the rotation angle, the pitch angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by real-time scanning of the three-dimensional laser radar into a trained glass recognition deep neural network, recognizing whether each radar point is glass, then calculating the direction of the glass where each radar point is located according to the coordinates of the radar points, and inserting the probability that each radar point is glass and the normal vector of the glass into a corresponding voxel on the spatial voxel map in real time.
S3, as illustrated in Fig. 4, whether a three-dimensional laser radar beam is directly reflected when it meets glass is related to the magnitudes of its rotation angle and pitch angle; when these angles are small, the beam is directly reflected. On this basis, an improved ray casting method is used to simulate the three-dimensional point cloud that the laser radar would obtain with the robot at its initial pose. Specifically, on the spatial voxel map with glass probability and glass direction information obtained in step S2, the rotation angle, pitch angle and length of the laser beams emitted by the laser radar with the robot at its initial pose are simulated by ray casting and input into the optical characteristic deep neural network to obtain the probability that each laser beam transmits through the glass; the radar points generated by reflected laser beams are added to the point cloud, thereby generating a simulated point cloud set.
And S4, calculating the absolute pose of the robot in the space voxel map with the glass probability and the glass direction information obtained in the step S2 by adopting normal distribution transformation based on the real three-dimensional radar data and the simulated point cloud set generated in the step S3.
Preferably, in the embodiment, the input data processing and network structure building of the first deep neural network and the second deep neural network in step S1 is as follows:
S1.1, in multiple glass scenes, collecting multiple groups of laser radar point cloud data with the three-dimensional laser radar at different yaw angles and different distances from the glass as training data for the first and second deep neural networks, and adding artificial glass labels, i.e. marking which parts of the three-dimensional laser radar point cloud hit the glass; for the second deep neural network, radar points generated by direct reflection at the glass and radar points generated by reflection from an obstacle behind the glass after the beam transmits through it must be labelled separately.
S1.2, constructing the first and the second deep neural network, each comprising an input layer, a hidden layer and an output layer connected in sequence, the hidden layer being fully connected, wherein: both network structures are 4 × 10 × 2 and fully connected between layers, i.e. every neuron of layer l is connected to every neuron of layer l + 1.
The input layer of the first deep neural network is used for inputting the rotation angle, the pitching angle, the length and the intensity of the laser beam corresponding to each radar point; the input layer of the second deep neural network is used for inputting the rotating angle, the pitching angle and the length of the laser beam corresponding to each radar point;
the forward propagation of the hidden layers of both the first and the second deep neural network is a^l = σ(z^l) = σ(w^l a^(l-1) + b^l), where w^l and b^l are the weight coefficients and bias of the neurons, the superscript l denotes the l-th hidden layer, a^l is the output of the l-th hidden layer, and σ(·) is the activation function;
the output layers of the first and the second deep neural network both use the softmax activation function; the activation of the n-th neuron of the output layer is defined as:

$$BP_n = \frac{\exp(z_n^L)}{\sum_{j=1}^{2} \exp(z_j^L)}$$

where n = 1, 2 and BP_1 + BP_2 = 1, exp(·) is the exponential function, $\sum_{j=1}^{2} \exp(z_j^L)$ is a normalization factor, and $z_n^L$ is the n-th output of the last hidden layer. In the output layer of the first deep neural network, BP_1 represents the probability that the radar point is glass and BP_2 the probability that it is not glass; in the output layer of the second deep neural network, BP_1 represents the probability that the laser beam meets glass and is directly reflected to generate a radar point, and BP_2 the probability that the laser beam transmits through the glass.
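To make the structure above concrete, the following Python sketch (not part of the patent) builds a small fully connected network with sigmoid hidden layers and a two-way softmax output. The three hidden layers of 10 neurons follow this embodiment; the random weights, the choice of the sigmoid for σ(·) and the sample inputs are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    # hidden-layer activation sigma(.)
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # output-layer activation: BP_n = exp(z_n^L) / sum_j exp(z_j^L)
    e = np.exp(z - np.max(z))          # shifted for numerical stability
    return e / np.sum(e)

def init_network(n_inputs, hidden=(10, 10, 10), n_outputs=2, seed=0):
    # Random placeholder weights w^l and biases b^l (not trained values).
    rng = np.random.default_rng(seed)
    sizes = [n_inputs, *hidden, n_outputs]
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Forward propagation a^l = sigma(w^l a^(l-1) + b^l), softmax at the output.
    a = np.asarray(x, dtype=float)
    for w, b in layers[:-1]:
        a = sigmoid(w @ a + b)
    w, b = layers[-1]
    return softmax(w @ a + b)          # [BP_1, BP_2], summing to 1

# Glass recognition network: input = (rotation, pitch, length, intensity)
glass_net = init_network(n_inputs=4)
print("P(glass), P(not glass):", forward(glass_net, [0.3, -0.1, 7.5, 120.0]))

# Optical characteristic network: input = (rotation, pitch, length)
optic_net = init_network(n_inputs=3)
print("P(reflect), P(transmit):", forward(optic_net, [0.3, -0.1, 7.5]))
```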
After the deep neural network is built, the first deep neural network and the second deep neural network are updated iteratively according to the following steps:
S1.2.1, through dropout regularization, dividing the training data into several batches for each gradient descent iteration and then iterating batch by batch; for each batch, neurons of part of the hidden layers are randomly removed from the original deep neural network model, and the resulting incomplete model is used to iteratively update w^l and b^l. After each batch has been used for an update, the incomplete model is restored to the complete deep neural network model before the next batch is processed.
S1.2.2, the back propagation algorithm uses mini-batch gradient descent, in which the update formulas for w^l and b^l of the l-th layer are, respectively,

$$w^l \leftarrow w^l - \frac{\alpha}{M} \sum_{m=1}^{M} \delta^{m,l} \, (a^{m,l-1})^{T}$$

$$b^l \leftarrow b^l - \frac{\alpha}{M} \sum_{m=1}^{M} \delta^{m,l}$$

where M is the number of training samples in the batch, α is the iteration step size, a^(m,l-1) is the output of layer l-1 with the m-th training sample as input, and δ^(m,l) is the gradient at layer l with the m-th training sample as input.
S1.2.3, in forward propagation, the rotation angle, pitch angle, length and intensity of each radar point pass through the input layer, are processed layer by layer by the hidden layers, and are propagated to the output layer.
S1.2.4, recording the number of backward iterations W; when the output-layer result does not match the expected output, taking the cross-entropy loss function as the objective function and setting an error threshold W_C/W, where W_C is the average error of the first backward iteration; as the number of iterations increases the error threshold W_C/W decreases, and any sample whose error is greater than the error threshold W_C/W no longer participates in the optimization of w^l and b^l, thereby filtering out abnormal labels. An abnormal label is the result of a labelling error, for example a radar point that is not glass being labelled as glass, or a radar point that is glass being labelled as not glass. Filtering out the abnormal labels ensures that the w^l and b^l obtained at each training iteration are valid, so that the training converges quickly and stably, and it also guarantees the robustness of the trained network in its subsequent application.
S1.2.5, iteratively optimizing the loss function with the Gauss-Newton method to find the weight matrices w^l and biases b^l that minimize the objective function; when the error reaches the expected value, network learning ends.
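The training procedure of S1.2.1 to S1.2.4 can be sketched as follows. This is a minimal illustration on synthetic data, with an assumed dropout rate, batch size and learning rate, a single hidden layer for brevity, and plain mini-batch gradient descent standing in for the Gauss-Newton refinement of S1.2.5.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic labelled beams: (rotation, pitch, length, intensity) -> one-hot (not glass, glass)
X = rng.normal(size=(512, 4))
Y = np.eye(2)[(X[:, 3] + 0.5 * X[:, 2] > 0).astype(int)]   # toy labels, not real data

# One hidden layer of 10 neurons for brevity (the embodiment uses three)
w1, b1 = rng.normal(0, 0.1, (4, 10)), np.zeros(10)
w2, b2 = rng.normal(0, 0.1, (10, 2)), np.zeros(2)
alpha, batch, drop_p = 0.5, 32, 0.2
wc = None                                   # W_C: average error of the first backward pass

for W in range(1, 201):                     # W counts backward iterations (here, epochs)
    idx = rng.permutation(len(X))
    for s in range(0, len(X), batch):
        xb, yb = X[idx[s:s + batch]], Y[idx[s:s + batch]]
        mask = (rng.random(10) > drop_p) / (1 - drop_p)   # dropout: randomly remove hidden neurons
        z1 = xb @ w1 + b1
        s1 = sigmoid(z1)
        a1 = s1 * mask
        out = softmax(a1 @ w2 + b2)
        loss = -np.sum(yb * np.log(out + 1e-12), axis=1)  # per-sample cross-entropy
        if wc is None:
            wc = loss.mean()
        keep = loss <= wc / W                             # samples above W_C/W no longer participate
        if not keep.any():
            continue
        d2 = (out - yb) * keep[:, None]                   # output-layer gradient (softmax + CE)
        d1 = (d2 @ w2.T) * mask * s1 * (1 - s1)           # hidden-layer gradient through dropout mask
        M = keep.sum()
        w2 -= alpha / M * a1.T @ d2
        b2 -= alpha / M * d2.sum(axis=0)
        w1 -= alpha / M * xb.T @ d1
        b1 -= alpha / M * d1.sum(axis=0)
```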
Preferably, in this embodiment, the step S2 specifically includes:
S2.1, adopting a graph optimization method and the three-dimensional laser radar to construct a spatial voxel map of the glass environment, $M_B = \{m_{B,1}, m_{B,2}, \ldots, m_{B,Q}\}$, where the i-th voxel is m_{B,i} = (Z_i, B_i), Q is the total number of voxels m_{B,i} in the spatial voxel map M_B, Z_i represents the probability that voxel m_{B,i} is occupied by an obstacle, and B_i represents the probability that voxel m_{B,i} is glass; B_i is unknown at this step and is obtained by the glass recognition deep neural network in step S2.2.
S2.2, starting a glass monitor based on the glass recognition deep neural network, inputting the rotation angle, pitch angle, length and intensity of the laser beam corresponding to each radar point into the glass recognition deep neural network to obtain the probability that each radar point is glass, and assigning the identified glass probability in real time to B_i of the corresponding voxel in the spatial voxel map M_B of step S2.1.
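As an illustration of how the identified glass probability can be written into the map in real time, the sketch below keeps a dictionary-keyed voxel grid and assigns B_i for the voxel containing each radar point. The voxel size, the beam-to-point conversion and the glass_probability stub (standing in for the trained glass recognition network) are assumptions made for this example only.

```python
import numpy as np

VOXEL_SIZE = 0.1  # metres per voxel edge (assumed resolution)

def voxel_key(point, size=VOXEL_SIZE):
    # Map a 3-D point (in the spatial voxel map frame) to its voxel index.
    return tuple(np.floor(np.asarray(point) / size).astype(int))

def beam_to_point(rotation, pitch, length, origin=(0.0, 0.0, 0.0)):
    # Convert a beam (rotation, pitch, length) from the lidar origin to a 3-D radar point.
    x = origin[0] + length * np.cos(pitch) * np.cos(rotation)
    y = origin[1] + length * np.cos(pitch) * np.sin(rotation)
    z = origin[2] + length * np.sin(pitch)
    return np.array([x, y, z])

def glass_probability(rotation, pitch, length, intensity):
    # Placeholder heuristic standing in for the trained glass recognition network.
    return 1.0 / (1.0 + np.exp(-(0.02 * intensity - 0.1 * length)))

# Voxel map M_B: key -> {"Z": occupancy probability Z_i, "B": glass probability B_i}
voxel_map = {}

def insert_scan(beams):
    for rot, pitch, length, intensity in beams:
        key = voxel_key(beam_to_point(rot, pitch, length))
        voxel = voxel_map.setdefault(key, {"Z": 0.5, "B": 0.0})
        voxel["B"] = glass_probability(rot, pitch, length, intensity)  # assign B_i in real time

insert_scan([(0.10, 0.00, 4.2, 35.0), (0.15, 0.02, 4.3, 40.0)])
print(voxel_map)
```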
S2.3, calculating the normal vector of the glass wall in the space voxel map obtained in the step S2.2, and comprising the following substeps:
S2.3.1,glass probability BP in spatial voxel map with glass labels1Three voxels are extracted by using a random consensus algorithm (RANSAC) from the voxels exceeding 0.6 (i.e. the preferred confidence of the embodiment) to construct a hypothetical glass plane equation Bx + Cy + Dz + F =0, and B, C, D and F are the fitting parameters of the hypothetical glass plane equation respectively;
S2.3.2, calculating the distance J_i between every voxel whose glass probability exceeds 0.6 (the confidence level preferred in this embodiment) and the hypothetical glass plane,

$$J_i = \frac{|B x_i + C y_i + D z_i + F|}{\sqrt{B^2 + C^2 + D^2}}$$

if J_i < J_T, where J_T is a preset threshold, voxel m_{B,i} is a valid voxel of the hypothetical glass plane; x_i, y_i and z_i are the coordinates of voxel m_{B,i} in the spatial voxel map coordinate system.
S2.3.3, looping through S2.3.2 until all voxels whose glass probability exceeds 0.6 (the confidence level preferred in this embodiment) have been traversed; if the number of valid voxels of the glass plane is greater than 100 (the preset value preferred in this embodiment), the hypothetical glass plane is considered a real glass plane, and its normal vector is computed as $\vec{n} = (B, C, D)/\sqrt{B^2 + C^2 + D^2}$; each glass voxel of the real glass plane is then represented together with this normal vector, i.e. $m_{BF,i} = (Z_i, B_i, \vec{n})$.
S2.3.4, looping over S2.3.2 and S2.3.3 to detect all glass planes in the glass-labelled spatial voxel map, stopping once every voxel with glass probability has been attributed to a glass plane or the maximum number of iterations is reached. A spatial voxel map with glass normal vectors and glass positions is thereby obtained, $M_{BF} = \{m_{BF,1}, m_{BF,2}, \ldots, m_{BF,Q}\}$.
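Steps S2.3.1 to S2.3.4 amount to RANSAC plane fitting over the high-confidence glass voxels. A minimal sketch on synthetic voxel centres is given below; it uses the 0.6 confidence and 100-valid-voxel values of this embodiment, while the distance threshold J_T, the iteration count and the synthetic data are assumptions for illustration.

```python
import numpy as np

def fit_plane(p1, p2, p3):
    # Plane through three points: unit normal (B, C, D) and offset F.
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None
    n = n / norm
    return n, -np.dot(n, p1)

def ransac_glass_plane(centers, j_t=0.05, min_valid=100, iters=200, seed=0):
    # centers: (N, 3) coordinates of voxels whose glass probability BP_1 > 0.6
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        p1, p2, p3 = centers[rng.choice(len(centers), 3, replace=False)]
        plane = fit_plane(p1, p2, p3)
        if plane is None:
            continue
        n, f = plane
        dist = np.abs(centers @ n + f)          # J_i, since |(B, C, D)| = 1
        valid = dist < j_t                      # valid voxels of the hypothetical plane
        if valid.sum() >= min_valid and (best is None or valid.sum() > best[2]):
            best = (n, f, valid.sum(), valid)
    return best                                 # normal vector, offset, count, validity mask

# Synthetic glass voxel centres near the plane x = 2, with small noise
rng = np.random.default_rng(1)
glass = np.column_stack([np.full(300, 2.0) + rng.normal(0, 0.01, 300),
                         rng.uniform(0, 5, 300), rng.uniform(0, 3, 300)])
result = ransac_glass_plane(glass)
if result is not None:
    print("glass plane normal:", np.round(result[0], 3), "valid voxels:", result[2])
```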
Preferably, in this embodiment, step S3 simulates, on the basis of the spatial voxel map M_BF, the three-dimensional point cloud that the laser radar would obtain with the robot at its initial pose, using a ray casting method that considers optical characteristics, and specifically comprises the following sub-steps:
S3.1, initializing the parameters of the ray casting method considering optical characteristics, including:
the spatial pose X = (x, y, z, b, c, d) of the three-dimensional laser radar in the spatial voxel coordinate system, whose value is given by the global positioning of the three-dimensional laser radar in that coordinate system, where b, c and d are the roll, pitch and yaw angles of the laser radar and (x, y, z) are its spatial coordinates;
the threshold T_o on the probability Z_i that a voxel is occupied by an obstacle;
the confidence T_B on the glass probability B_i;
the number of rotational scans N_s;
the number of laser beam lines S of the three-dimensional laser radar;
the measuring range R_max;
the spatial voxel map M_BF with glass positions and glass normal vectors;
the rotation angle increment A_Z = 360/(N_s - 1);
the pitch angle increment Y_Z = C_z;
the cumulative rotation angle A = 0;
the cumulative pitch angle Y = 0;
the end coordinates (rx_k, ry_k, rz_k) of the laser beam generated by the k-th scan;
the simulated point cloud set M_N;
the maximum step size E of the ray length increment;
the spatial transformation relation T between the three-dimensional laser radar and the robot.
S3.2, according to X, Y and A, computing by spatial coordinate transformation the rotation angle γ and pitch angle κ of the k-th scan of the n_s-th laser beam, and computing, according to the set maximum step size E, the length increment (Δx_k, Δy_k, Δz_k) of the laser beam along the emission direction of the k-th scan;
S3.3, (rx_k, ry_k, rz_k) = (rx_k, ry_k, rz_k) + (Δx_k, Δy_k, Δz_k).
S3.4, calculating the laser beam length $rl = \sqrt{(rx_k - x)^2 + (ry_k - y)^2 + (rz_k - z)^2}$.
S3.5, if the current laser beam length rl > R_max, the laser beam generates no radar point; go to step S3.6. If rl ≤ R_max, the voxel corresponding to the beam end coordinates (rx_k, ry_k, rz_k) is processed according to the following cases:
Case 1: if the voxel at the current beam end has B_i ≤ T_B and Z_i ≤ T_o, the voxel is not occupied by any obstacle; the laser continues forward, return to step S3.3.
Case 2: if the voxel at the current beam end has B_i ≤ T_B and Z_i > T_o, the voxel is an ordinary obstacle; the laser beam generates a radar point there, which is inserted into M_N, and the process goes to step S3.6.
Case 3: if the voxel at the current beam end has B_i > T_B, the voxel is glass; input the rotation angle γ, pitch angle κ and beam length rl of the current laser beam into the optical characteristic deep neural network to obtain the probability that the beam transmits through the glass, and decide transmission or reflection according to this probability: if transmitted, the laser continues forward and the process returns to step S3.3; if reflected, a radar point is generated and inserted into M_N, and the process goes to step S3.6.
S3.6, let k = k + 1, reset the laser beam length rl ← 0, accumulate the rotation angle A = A + A_Z, and return to step S3.2; loop until k = N_s, then go to step S3.7.
S3.7, let n_s = n_s + 1, reset the rotation angle A ← 0, accumulate the pitch angle Y = Y + Y_Z, and repeat steps S3.2 to S3.6; loop until n_s = S, then end the loop and obtain the accurate simulated point cloud set M_N.
S3.8, calculate the simulated point cloud set in the robot coordinate system, M_T = M_N · T.
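The ray casting loop of S3.2 to S3.8 can be illustrated by the following simplified sketch on a toy voxel map: each beam is advanced in fixed steps, free voxels are passed through, ordinary obstacles generate radar points, and glass voxels are resolved by a transmission probability. The grid layout, the thresholds, the fixed step size and the transmit_probability stub (standing in for the optical characteristic network) are assumptions for illustration, and the full scan pattern over N_s and S beams is reduced to a small fan of rays.

```python
import numpy as np

T_O, T_B, R_MAX, E, VOX = 0.5, 0.5, 20.0, 0.1, 0.5
rng = np.random.default_rng(0)

# Toy map: voxel key -> (Z_i occupancy probability, B_i glass probability)
voxel_map = {(int(4 / VOX), y, 0): (0.9, 0.8) for y in range(-10, 11)}        # glass wall at x = 4 m
voxel_map.update({(int(8 / VOX), y, 0): (0.9, 0.0) for y in range(-10, 11)})  # opaque wall at x = 8 m

def transmit_probability(rotation, pitch, length):
    # Stand-in for the optical characteristic network: larger angles transmit more often.
    return min(0.95, 0.2 + 0.5 * abs(np.sin(rotation)) + 0.02 * length)

def cast(origin, rotation, pitch):
    d = np.array([np.cos(pitch) * np.cos(rotation),
                  np.cos(pitch) * np.sin(rotation), np.sin(pitch)])
    end, rl = np.array(origin, float), 0.0
    decided_glass = set()                                   # glass voxels already judged as transmitted
    while rl <= R_MAX:
        end = end + E * d                                   # S3.3: advance by the step increment
        rl = np.linalg.norm(end - origin)                   # S3.4: beam length
        key = tuple(np.floor(end / VOX).astype(int))
        z_i, b_i = voxel_map.get(key, (0.0, 0.0))
        if b_i > T_B:                                       # case 3: glass voxel
            if key in decided_glass:
                continue
            if rng.random() < transmit_probability(rotation, pitch, rl):
                decided_glass.add(key)                      # transmitted, keep propagating
                continue
            return end                                      # reflected, radar point generated
        if z_i > T_O:                                       # case 2: ordinary obstacle
            return end
        # case 1: free voxel, keep propagating
    return None                                             # beam exceeds R_max, no radar point

simulated = [p for rot in np.linspace(-0.3, 0.3, 20)
             if (p := cast((0, 0, 0), rot, 0.0)) is not None]
print(len(simulated), "simulated radar points")
```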
Preferably, in this embodiment, step S4 specifically includes: based on the real three-dimensional radar data and the simulated point cloud set M_T of step S3.8, calculating with the normal distribution transformation the absolute pose of the robot in the spatial voxel map M_BF with glass probability and glass direction information obtained in step S2.3.4.
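For step S4, the following sketch shows the essence of matching the real scan against the simulated point cloud M_T with a normal distribution transformation (NDT): per-cell Gaussians are built from M_T and a pose is sought that maximises the match score. Restricting the pose to (x, y, yaw), the cell size, the assumed initial guess and the use of a general-purpose optimiser are simplifying assumptions and do not reproduce the patent's full three-dimensional formulation.

```python
import numpy as np
from scipy.optimize import minimize

CELL = 1.0  # NDT cell size in metres (assumed)

def build_ndt(cloud):
    # Estimate a mean and covariance per occupied cell of the simulated cloud M_T.
    cells = {}
    for p in cloud:
        cells.setdefault(tuple(np.floor(p / CELL).astype(int)), []).append(p)
    ndt = {}
    for k, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= 3:
            ndt[k] = (pts.mean(axis=0), np.cov(pts.T) + 1e-2 * np.eye(2))
    return ndt

def ndt_score(pose, scan, ndt):
    # Negative NDT score of the real scan transformed by pose = (x, y, yaw).
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    score = 0.0
    for p in scan:
        q = np.array([c * p[0] - s * p[1] + x, s * p[0] + c * p[1] + y])
        cell = ndt.get(tuple(np.floor(q / CELL).astype(int)))
        if cell is None:
            continue
        mu, cov = cell
        d = q - mu
        score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return -score

# Synthetic example: the "simulated" cloud is a wall; the real scan is the same wall, shifted.
rng = np.random.default_rng(2)
simulated = np.column_stack([rng.uniform(0, 10, 400),
                             np.full(400, 5.0) + rng.normal(0, 0.05, 400)])
real_scan = simulated[:200] - np.array([0.4, -0.3])          # true offset to recover: (0.4, -0.3)

ndt = build_ndt(simulated)
result = minimize(ndt_score, x0=[0.2, -0.2, 0.0],             # assumed initial guess (e.g. odometry)
                  args=(real_scan, ndt), method="Nelder-Mead")
print("estimated pose (x, y, yaw):", np.round(result.x, 2))
```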
In general, the navigation method accurately represents the glass environment with a spatial voxel map carrying glass probability and glass position information, effectively uses the useful glass information to screen radar points and obtain an accurate simulated point cloud set, avoids the perception confusion of laser-radar-based positioning, and improves the three-dimensional navigation accuracy of the mobile robot in scenes with glass walls.
It will be understood by those skilled in the art that the foregoing is only an exemplary embodiment of the present invention, and is not intended to limit the invention to the particular forms disclosed, since various modifications, substitutions and improvements within the spirit and scope of the invention are possible and within the scope of the appended claims.

Claims (6)

1. A three-dimensional laser radar navigation method based on deep learning in a glass scene is characterized by comprising the following steps:
an off-line training stage:
s1, building a first deep neural network, and taking the rotation angle, the pitching angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by scanning a three-dimensional laser radar as input data of the first deep neural network; the probability of whether each radar point is glass or not is used as output data of the first deep neural network, and the glass recognition deep neural network is obtained through training;
building a second deep neural network, and taking the rotation angle, the pitch angle and the length of the laser beam as input data of the second deep neural network; training to obtain an optical characteristic deep neural network by taking the probability that each laser beam meets glass and is directly reflected to generate a radar point and the probability of transmitting the glass under the conditions of different lengths, rotation angles and pitching angles as output data of a second deep neural network;
an online navigation stage:
s2, establishing a spatial voxel map under a glass environment, inputting the rotation angle, the pitch angle, the length and the intensity of a laser beam corresponding to each radar point in laser radar point cloud data obtained by real-time scanning of a three-dimensional laser radar into a trained glass recognition deep neural network, recognizing whether each radar point is glass, then calculating the direction of the glass where each radar point is located according to the coordinates of the radar points, and inserting the probability that each radar point is glass and the normal vector of the glass into a corresponding voxel on the spatial voxel map in real time;
the step S2 includes the following substeps:
S2.1, adopting a graph optimization method and a three-dimensional laser radar to construct a spatial voxel map of the glass environment, $M_B = \{m_{B,1}, m_{B,2}, \ldots, m_{B,Q}\}$, wherein the i-th voxel is m_{B,i} = (Z_i, B_i), Q is the total number of voxels m_{B,i} in the spatial voxel map M_B, Z_i represents the probability that voxel m_{B,i} is occupied by an obstacle, and B_i represents the probability that voxel m_{B,i} is glass; B_i is unknown at this step and is obtained by the glass recognition deep neural network in step S2.2;
S2.2, inputting the rotation angle, pitch angle, length and intensity of the laser beam corresponding to each radar point into the glass recognition deep neural network to obtain the probability that each radar point is glass, and assigning the identified glass probability in real time to B_i of the corresponding voxel in the spatial voxel map M_B of step S2.1;
S2.3, calculating a normal vector of the glass wall in the space voxel map obtained in the step S2.2;
S2.3.1, among the voxels of the glass-labelled spatial voxel map whose glass probability exceeds a preset confidence level, sampling three voxels with a random sample consensus algorithm to construct a hypothetical glass plane equation Bx + Cy + Dz + F = 0, wherein B, C, D and F are the fitting parameters of the hypothetical glass plane equation;
S2.3.2, calculating the distance J_i between every voxel whose glass probability exceeds the preset confidence level and the hypothetical glass plane,

$$J_i = \frac{|B x_i + C y_i + D z_i + F|}{\sqrt{B^2 + C^2 + D^2}}$$

if J_i < J_T, wherein J_T is a preset threshold, voxel m_{B,i} is a valid voxel of the hypothetical glass plane; x_i, y_i and z_i are the coordinates of voxel m_{B,i} in the spatial voxel map coordinate system;
S2.3.3, looping over S2.3.2 until all voxels whose glass probability exceeds the preset confidence level have been traversed; if the number of valid voxels of the hypothetical glass plane is greater than a preset value, the hypothetical glass plane is considered a real glass plane, and its normal vector is computed as $\vec{n} = (B, C, D)/\sqrt{B^2 + C^2 + D^2}$; each glass voxel of the real glass plane is then represented together with this normal vector, i.e. $m_{BF,i} = (Z_i, B_i, \vec{n})$;
S2.3.4, looping over S2.3.2 and S2.3.3 to detect all glass planes in the glass-labelled spatial voxel map, stopping once every voxel with glass probability has been attributed to a real glass plane or the maximum number of iterations is reached, thereby obtaining the spatial voxel map with glass normal vectors and glass positions $M_{BF} = \{m_{BF,1}, m_{BF,2}, \ldots, m_{BF,Q}\}$;
S3, on the spatial voxel map with glass probability and glass direction information obtained in step S2, simulating the rotation angle, pitch angle and length of the laser beams emitted by the laser radar with the robot at its initial pose by a ray casting method considering optical characteristics, inputting these into the optical characteristic deep neural network to obtain the probability that each laser beam transmits through the glass, and adding to the point cloud the radar points generated by beams reflected when the voxel is glass and the radar points generated when the voxel is an ordinary obstacle, thereby generating a simulated point cloud set; step S3 simulates, on the basis of the spatial voxel map M_BF, the three-dimensional point cloud that the laser radar would obtain with the robot at its initial pose using the ray casting method considering optical characteristics, and specifically comprises the following sub-steps:
S3.1, initializing the parameters of the ray casting method considering optical characteristics, including:
the spatial pose X = (x, y, z, b, c, d) of the three-dimensional laser radar in the spatial voxel coordinate system, whose value is given by the global positioning of the three-dimensional laser radar in that coordinate system, wherein b, c and d are the roll, pitch and yaw angles of the laser radar and (x, y, z) are its spatial coordinates;
the threshold T_o on the probability Z_i that a voxel is occupied by an obstacle;
the confidence T_B on the glass probability B_i;
the number of rotational scans N_s;
the number of laser beam lines S of the three-dimensional laser radar;
the measuring range R_max;
the spatial voxel map M_BF with glass positions and glass normal vectors;
the rotation angle increment A_Z = 360/(N_s - 1);
the pitch angle increment Y_Z = C_z;
the cumulative rotation angle A = 0;
the cumulative pitch angle Y = 0;
the end coordinates (rx_k, ry_k, rz_k) of the laser beam generated by the k-th scan;
the simulated point cloud set M_N;
the maximum step size E of the ray length increment;
the spatial transformation relation T between the three-dimensional laser radar and the robot;
S3.2, according to X, Y and A, computing by spatial coordinate transformation the rotation angle γ and pitch angle κ of the k-th scan of the n_s-th laser beam, and computing, according to the set maximum step size E, the length increment (Δx_k, Δy_k, Δz_k) of the laser beam along the emission direction of the k-th scan;
S3.3, (rx_k, ry_k, rz_k) = (rx_k, ry_k, rz_k) + (Δx_k, Δy_k, Δz_k);
S3.4, calculating the laser beam length $rl = \sqrt{(rx_k - x)^2 + (ry_k - y)^2 + (rz_k - z)^2}$;
S3.5, if the current laser beam length rl is greater than R_max, the laser beam generates no radar point and step S3.6 is entered; if the current laser beam length rl is less than or equal to R_max, the voxel containing the beam end point (rx_k, ry_k, rz_k) is processed according to the following cases:
Case 1: if the voxel at the current laser beam end satisfies B_i ≤ T_B and Z_i ≤ T_o, the voxel is not occupied by any obstacle; the laser continues to propagate forward and the process returns to step S3.3;
Case 2: if the voxel at the current laser beam end satisfies B_i ≤ T_B and Z_i > T_o, the voxel is an ordinary obstacle; the laser beam generates a radar point there, which is inserted into M_N, and step S3.6 is entered;
Case 3: if the voxel at the current laser beam end satisfies B_i > T_B, the voxel is glass; the rotation angle γ, the pitch angle κ and the beam length rl of the current laser beam are input into the optical-characteristic deep neural network to obtain the probability that the beam transmits through the glass, and the transmission result is judged according to this probability: if the beam is transmitted, the laser continues to propagate forward and the process returns to step S3.3; if it is reflected, a radar point is generated and inserted into M_N, and step S3.6 is entered;
S3.6, letting k = k + 1, resetting the laser beam length rl ← 0 and accumulating the rotation angle A = A + A_Z; returning to step S3.2 and looping until k = N_s, then entering step S3.7;
S3.7, letting n_s = n_s + 1, resetting the rotation angle A ← 0 and accumulating the pitch angle Y = Y + Y_Z; repeating steps S3.2 to S3.6 and looping until n_s = S, then ending the loop and obtaining the accurate simulated point cloud set M_N;
S3.8, calculating the simulated point cloud set in the robot coordinate system, M_T = M_N · T; an end-to-end sketch of this casting loop is given below.
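A condensed sketch of the casting loop of S3.2 to S3.7 is given below for a single beam line. The lookup functions occ and glass_prob (returning Z_i and B_i for a voxel), the callable p_transmit standing in for the optical-characteristic deep neural network, and all other names are assumptions of this sketch; the per-line pitch increments of S3.7 and the transform T of S3.8 are omitted for brevity.

```python
# Simplified casting loop (one beam line only); not the full claimed procedure.
import numpy as np

def cast_beam(origin, gamma, kappa, E, R_max, occ, glass_prob, p_transmit,
              T_o=0.5, T_B=0.5, rng=np.random.default_rng(0)):
    """Return a simulated radar point for one beam, or None if nothing is hit."""
    # Unit direction from rotation angle gamma and pitch angle kappa (degrees).
    g, k = np.radians([gamma, kappa])
    d = np.array([np.cos(k) * np.cos(g), np.cos(k) * np.sin(g), np.sin(k)])
    step = d * E                                # (dx_k, dy_k, dz_k) as in S3.2
    end = np.array(origin, dtype=float)
    while True:
        end = end + step                        # S3.3: advance the beam end point
        rl = np.linalg.norm(end - origin)       # S3.4: current beam length
        if rl > R_max:                          # S3.5: out of range, no radar point
            return None
        B_i, Z_i = glass_prob(end), occ(end)
        if B_i <= T_B and Z_i <= T_o:           # case 1: free voxel, keep going
            continue
        if B_i <= T_B and Z_i > T_o:            # case 2: ordinary obstacle reflects
            return end
        # case 3: glass voxel, sample transmission vs. reflection
        if rng.random() < p_transmit(gamma, kappa, rl):
            continue                            # transmitted: beam keeps propagating
        return end                              # reflected: radar point at the glass

def simulate_scan(origin, N_s, kappa, **kw):
    """Sweep one beam line over N_s rotation angles and collect the point cloud M_N."""
    A_Z = 360.0 / (N_s - 1)
    pts = [cast_beam(origin, i * A_Z, kappa, **kw) for i in range(N_s)]
    return np.array([p for p in pts if p is not None])
```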
S4, based on the real three-dimensional radar data and the simulated point cloud set generated in step S3, calculating, by means of the normal distributions transform, the absolute pose of the robot in the space voxel map with glass probability and glass direction information obtained in step S2.
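As a rough illustration of the normal distributions transform (NDT) idea used in S4, the sketch below summarizes a map cloud by one Gaussian per cell and searches a planar pose (x, y, yaw) that maximizes the summed likelihood of the transformed scan. It is reduced to 2D and to a generic optimizer for brevity; every function and variable name is an assumption of this sketch rather than the patented procedure.

```python
import numpy as np
from scipy.optimize import minimize

def build_ndt_cells(map_pts, cell=1.0):
    """Group map points by cell and keep mean / inverse covariance per cell."""
    cells = {}
    keys = np.floor(map_pts[:, :2] / cell).astype(int)
    for key in {tuple(k) for k in keys}:
        pts = map_pts[np.all(keys == key, axis=1), :2]
        if len(pts) >= 5:                                # need enough points for a covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(2)       # regularized covariance
            cells[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(pose, scan_pts, cells, cell=1.0):
    """Negative NDT score of the scan transformed by pose = (x, y, yaw)."""
    x, y, yaw = pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    q = scan_pts[:, :2] @ R.T + np.array([x, y])
    score = 0.0
    for p in q:
        key = tuple(np.floor(p / cell).astype(int))
        if key in cells:
            mu, icov = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ icov @ d)         # Gaussian likelihood of the point
    return -score

def ndt_align(scan_pts, map_pts, init=(0.0, 0.0, 0.0)):
    res = minimize(ndt_score, init, args=(scan_pts, build_ndt_cells(map_pts)),
                   method="Nelder-Mead")
    return res.x                                         # estimated (x, y, yaw)
```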
2. The deep learning-based three-dimensional laser radar navigation method in a glass scene according to claim 1, wherein the input data processing and the network structures of the first deep neural network and the second deep neural network in step S1 are set up as follows:
S1.1, collecting multiple groups of lidar point cloud data, with the three-dimensional lidar at different yaw angles and different distances from the glass in multiple glass scenes, as training data for the first deep neural network and the second deep neural network, and adding manual glass labels identifying which parts of the three-dimensional lidar point cloud hit the glass; for the second deep neural network, the radar points generated by direct reflection when the laser hits the glass and the radar points generated by reflection from obstacles behind the glass after transmission through it are additionally labeled separately;
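As an illustration of the labeled data described in S1.1, the arrays below show one possible sample layout for the two networks; the feature order and the 0/1 label encodings are assumptions of this example, not mandated by the claim.

```python
import numpy as np

# First network: one row per radar point
#   features: rotation angle [deg], pitch angle [deg], beam length [m], intensity
#   label:    1 = point lies on glass, 0 = not glass (manually annotated)
X_first = np.array([[ 12.0,  -3.0, 4.21, 0.35],
                    [118.5,   7.0, 2.96, 0.81]])
y_first = np.array([1, 0])

# Second network: only points whose beam hits glass
#   features: rotation angle, pitch angle, beam length
#   label:    1 = directly reflected at the glass,
#             0 = transmitted and reflected by an obstacle behind the glass
X_second = np.array([[12.0, -3.0, 4.21],
                     [12.5, -3.0, 6.87]])
y_second = np.array([1, 0])
```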
S1.2, constructing the first deep neural network and the second deep neural network, each comprising an input layer, hidden layers and an output layer connected in sequence, the hidden layers being fully connected layers, wherein:
the input layer of the first deep neural network receives the rotation angle, pitch angle, length and intensity of the laser beam corresponding to each radar point; the input layer of the second deep neural network receives the rotation angle, pitch angle and length of the laser beam corresponding to each radar point;
the hidden layers of the first deep neural network and the second deep neural network both use the forward propagation rule a^l = σ(z^l) = σ(w^l a^(l-1) + b^l), where w^l and b^l are respectively the weight coefficients and bias of the neurons, the superscript l denotes the l-th hidden layer, a^l is the output of the l-th hidden layer, and σ() is the activation function;
the output layers of the first deep neural network and the second deep neural network both adopt the softmax activation function, the activation of the n-th output neuron being defined as BP_n = exp(z_n) / (exp(z_1) + exp(z_2)), where n = 1, 2, BP_1 + BP_2 = 1, exp() is the exponential function, exp(z_1) + exp(z_2) is the normalization factor, and z_n is the n-th output of the last hidden layer. In the output layer of the first deep neural network, BP_1 denotes the probability that the radar point is glass and BP_2 the probability that the radar point is not glass; in the output layer of the second deep neural network, BP_1 denotes the probability that the laser beam is directly reflected on encountering glass and generates a radar point, and BP_2 denotes the probability that the laser beam is transmitted through the glass. A small numerical sketch of this forward pass is given below.
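The following numpy sketch mirrors the forward pass described above, i.e. the hidden-layer rule a^l = σ(w^l a^(l-1) + b^l) followed by a two-way softmax output; the layer sizes, the ReLU activation and the random weights are assumptions made only to produce a runnable example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax2(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()               # BP_1 + BP_2 = 1

def forward(x, weights, biases):
    """Fully connected forward pass; returns (BP_1, BP_2)."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(w @ a + b)          # hidden layers: a^l = sigma(w^l a^(l-1) + b^l)
    return softmax2(weights[-1] @ a + biases[-1])

rng = np.random.default_rng(0)
def init_net(sizes):
    ws = [rng.standard_normal((o, i)) * 0.1 for i, o in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(o) for o in sizes[1:]]
    return ws, bs

# First network: (rotation angle, pitch angle, length, intensity) -> P(glass), P(not glass)
w1, b1 = init_net([4, 32, 32, 2])
print(forward(np.array([30.0, -5.0, 4.2, 0.7]), w1, b1))
# Second network: (rotation angle, pitch angle, length) -> P(reflect), P(transmit)
w2, b2 = init_net([3, 32, 32, 2])
print(forward(np.array([30.0, -5.0, 4.2]), w2, b2))
```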
3. The deep learning-based three-dimensional lidar navigation method in a glass scene as claimed in claim 2, wherein the first deep neural network and the second deep neural network are updated iteratively according to the following steps:
S1.2.1, with dropout regularization, dividing the training data into several batches for each gradient descent iteration and then iterating batch by batch; when each batch of data is iterated, neurons of part of the hidden layers are randomly removed from the original deep neural network model, and the resulting thinned deep neural network model is used to iteratively update w^l and b^l; after the iterative update of each batch of data is finished, the thinned deep neural network model is restored to the complete deep neural network model for the iterative update of the next batch of data;
S1.2.2, the back propagation algorithm adopts the mini-batch gradient descent method, in which w^l and b^l of the l-th layer are updated respectively as
w^l = w^l - (α/M) Σ_{m=1..M} δ^(m,l) (a^(m,l-1))^T
b^l = b^l - (α/M) Σ_{m=1..M} δ^(m,l)
where M is the number of training samples input, α is the iteration step size, a^(m,l-1) is the output of layer l-1 with the m-th training sample as input, and δ^(m,l) is the gradient of layer l with the m-th training sample as input;
S1.2.3, in the forward propagation process, the rotation angle, pitch angle, length and intensity of each radar point pass through the input layer, are processed layer by layer by the hidden layers, and are transmitted to the output layer;
S1.2.4, recording the number of backward iterations W; when the output layer result does not match the expected output value, the cross entropy loss function is taken as the objective function and an error threshold W_C/W is set, where W_C is the average error value of the first backward iteration; as the number of iterations increases, the error threshold W_C/W decreases, and any sample whose error is greater than the error threshold W_C/W no longer participates in optimizing w^l and b^l through the objective function, thereby filtering out anomalous labels;
S1.2.5, iteratively optimizing the loss function with the Gauss-Newton iteration method to reach its minimum, so as to find the linear coefficient matrices w^l and biases b^l that minimize the objective function; when the error reaches the expected value, network learning is finished. A compact training-loop sketch is given below.
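A compact sketch of the training loop of S1.2.1 to S1.2.4 for one hidden layer is given below: dropout thins the hidden layer per batch, mini-batch gradient steps update w and b, and samples whose cross-entropy error exceeds the shrinking threshold W_C/W are excluded, which filters anomalous labels. Plain gradient steps are used here in place of the Gauss-Newton refinement of S1.2.5, and all names and hyperparameters are assumptions of this sketch.

```python
# X is (N, d) features, y is (N,) labels in {0, 1}; single hidden layer for brevity.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, hidden=32, alpha=0.1, epochs=20, batch=64, p_drop=0.2):
    d = X.shape[1]
    w1, b1 = rng.standard_normal((hidden, d)) * 0.1, np.zeros(hidden)
    w2, b2 = rng.standard_normal((2, hidden)) * 0.1, np.zeros(2)
    W_C, it = None, 0                                  # W_C: mean error of first backward pass
    for _ in range(epochs):
        for s in range(0, len(X), batch):
            xb, yb = X[s:s + batch], y[s:s + batch]
            mask = rng.random(hidden) > p_drop         # dropout: thin the hidden layer
            a1 = np.maximum(xb @ w1.T + b1, 0.0) * mask
            p = softmax(a1 @ w2.T + b2)
            err = -np.log(p[np.arange(len(yb)), yb] + 1e-12)   # per-sample cross entropy
            it += 1
            if W_C is None:
                W_C = err.mean()
            keep = err <= W_C / it                     # shrinking threshold W_C / W
            if not keep.any():
                continue
            xb, yb, a1, p = xb[keep], yb[keep], a1[keep], p[keep]
            # Backward pass on the thinned network (mini-batch gradient descent).
            d2 = p.copy(); d2[np.arange(len(yb)), yb] -= 1.0   # dL/dz2 for softmax + CE
            gw2, gb2 = d2.T @ a1 / len(yb), d2.mean(axis=0)
            d1 = (d2 @ w2) * (a1 > 0)                          # backprop through ReLU and mask
            gw1, gb1 = d1.T @ xb / len(yb), d1.mean(axis=0)
            w2 -= alpha * gw2; b2 -= alpha * gb2
            w1 -= alpha * gw1; b1 -= alpha * gb1
    return w1, b1, w2, b2
```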
4. The deep learning-based three-dimensional laser radar navigation method in a glass scene as claimed in claim 1, wherein step S4 is specifically as follows:
based on the real three-dimensional radar data and the simulated point cloud set M_T from step S3.8, the absolute pose of the robot in the space voxel map M_BF with glass probability and glass direction information obtained in step S2.3.4 is calculated by means of the normal distributions transform.
5. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for three-dimensional lidar navigation in a glass scene based on deep learning according to any one of claims 1 to 4.
6. A deep learning based three-dimensional lidar navigation device in a glass scene, comprising the computer-readable storage medium of claim 5 and a processor for invoking and executing the computer program stored in the computer-readable storage medium.
CN202010881244.0A 2020-08-27 2020-08-27 Deep learning-based three-dimensional laser radar navigation method and device in glass scene Active CN111982124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010881244.0A CN111982124B (en) 2020-08-27 2020-08-27 Deep learning-based three-dimensional laser radar navigation method and device in glass scene

Publications (2)

Publication Number Publication Date
CN111982124A CN111982124A (en) 2020-11-24
CN111982124B (en) 2022-11-01

Family

ID=73441028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010881244.0A Active CN111982124B (en) 2020-08-27 2020-08-27 Deep learning-based three-dimensional laser radar navigation method and device in glass scene

Country Status (1)

Country Link
CN (1) CN111982124B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467450A (en) * 2021-07-01 2021-10-01 北京小狗吸尘器集团股份有限公司 Unmanned aerial vehicle control method and device, computer equipment and storage medium
CN114089330B (en) * 2022-01-18 2022-05-20 北京航空航天大学 Indoor mobile robot glass detection and map updating method based on depth image restoration
WO2024011557A1 (en) * 2022-07-15 2024-01-18 深圳市正浩创新科技股份有限公司 Map construction method and device and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2016117060A1 (en) * 2015-01-22 2016-07-28 パイオニア株式会社 Driving assistance device and driving assistance method

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN103176185A (en) * 2011-12-26 2013-06-26 上海汽车集团股份有限公司 Method and system for detecting road barrier
EP3206072A1 (en) * 2014-10-07 2017-08-16 Konica Minolta, Inc. Scanning optical system and radar
CN105023287A (en) * 2015-07-08 2015-11-04 西安电子科技大学 Laser radar imaging and coloring method used for dynamic three dimensional scene
CN109870705A (en) * 2017-12-01 2019-06-11 武汉万集信息技术有限公司 Boundary target identification method and device based on laser radar
CN111207753A (en) * 2020-02-13 2020-05-29 苏州大学 Method for simultaneously positioning and establishing picture under multi-glass partition environment

Non-Patent Citations (2)

Title
Haiyang Mei et al. Don't Hit Me! Glass Detection in Real-world Scenes. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. *
Chen Hao. Indoor glass three-dimensional reconstruction algorithm based on lidar. Information & Communications, 2019, (6). *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant