CN112257522B - Multi-sensor fusion environment sensing method based on environment characteristics - Google Patents

Multi-sensor fusion environment sensing method based on environment characteristics

Info

Publication number
CN112257522B
CN112257522B
Authority
CN
China
Prior art keywords
vehicle
sensor
environment
fusion
light intensity
Prior art date
Legal status
Active
Application number
CN202011063190.3A
Other languages
Chinese (zh)
Other versions
CN112257522A (en)
Inventor
王展
王春燕
赵万忠
王一松
刘利锋
秦亚娟
刘晓强
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202011063190.3A
Publication of CN112257522A
Application granted
Publication of CN112257522B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/865 Combination of radar systems with lidar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/867 Combination of radar systems with cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S 7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S 2013/9322 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles using additional data, e.g. driver condition, road state or weather data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a multi-sensor fusion environment sensing method based on environment characteristics. The method first obtains the light intensity of the environment where the vehicle is located and judges whether it is raining, and selects a sensing scheme accordingly. The environment is then modeled according to the selected scheme and obstacle information in the environment is detected. A multi-feature multi-layer grid map is adopted when the laser radar detects obstacle information, layering the obstacle environment information; this improves detection accuracy and avoids false detection of suspended objects. The method addresses the problem that different sensors differ in robustness under different environmental characteristics: by selecting sensors according to the environmental characteristics, it increases the robustness and accuracy of target detection for the intelligent vehicle under various environmental conditions.

Description

Multi-sensor fusion environment sensing method based on environment characteristics
Technical Field
The invention relates to the field of environmental perception for unmanned driving, and in particular to a multi-sensor fusion environmental perception method based on environmental characteristics.
Background
As an emerging technology, intelligent connected vehicles are highly valued and developing rapidly worldwide because they can avoid dangerous driving behaviors, assist drivers in normal driving, improve driving comfort, and alleviate traffic congestion. As one of the important components of the intelligent connected vehicle, the target detection system senses environmental information with sensors and thereby provides important information to the vehicle's other subsystems. The sensors most widely applied on intelligent vehicles at present are the laser radar, the millimeter wave radar, and the vision sensor. Because of the limitations of any single sensor, current work on environmental perception places more emphasis on joint detection with multiple sensors. For example, Chinese patent application No. CN201821484645.7, entitled "A multi-parameter mobile environment sensing robot", employs a vision sensor, water level sensor, temperature and humidity sensor, ultrasonic sensor, dust detection sensor, ultraviolet sensor, and vital sign detection sensor to detect environmental features, and a joint control module conveys the information to be detected in the environment. Chinese patent application No. CN201711085245.9, entitled "Information fusion method of an intelligent vehicle perception system", fuses laser radar, millimeter wave radar, vision sensor, and ultrasonic radar, with the following detection logic: first judge whether the ultrasonic sensor detects an obstacle; if so, the millimeter wave and vision sensor information is not processed, and if not, it is processed. If the millimeter wave sensor and the vision sensor detect the same obstacle, the processor fuses the information of the two sensors; if only one sensor detects the obstacle, only the information of the corresponding detector is output. None of the above patents adequately considers the robustness of the employed sensors under different environmental characteristics. Different sensors differ in robustness under conditions such as poor illumination, abundant suspended matter, and electromagnetic interference; ignoring these factors often adversely affects the detection result, reducing detection accuracy and compromising the safety and intelligence of the intelligent vehicle while driving.
Disclosure of Invention
The invention aims to solve the technical problem of providing a multi-sensor fusion environment sensing method based on environment characteristics aiming at the defects related to the background technology.
The invention adopts the following technical scheme for solving the technical problems:
the multi-sensor fusion environment sensing method based on the environment characteristics comprises the following steps:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on a vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located; the raindrop sensor is used for judging whether the environment where the vehicle is located is rainy or not;
step 2), obtaining the current light intensity of the environment where the vehicle is located, and judging whether the environment where the vehicle is located is rainy or not;
step 2.1), if it is not raining and the current light intensity is smaller than a preset light intensity threshold value, performing environment modeling and obstacle recognition by using the laser radar;
step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold value, performing environment modeling and obstacle recognition by using the millimeter wave radar;
step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold value, performing environment modeling and obstacle recognition by fusing the vision sensor and the laser radar;
and step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold value, performing environment modeling and obstacle recognition by fusing the millimeter wave radar and the vision sensor (a minimal selection sketch in code is given below).
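For illustration only, the step-2 selection logic can be sketched in Python as follows; this is a minimal sketch under assumptions, and the threshold value, function name, and scheme labels are illustrative rather than fixed by the disclosure:

```python
# Minimal sketch of the step-2 sensor-scheme selection (steps 2.1-2.4).
# LIGHT_THRESHOLD and all names are assumptions; the patent leaves the
# concrete threshold value open.

LIGHT_THRESHOLD = 500.0  # assumed lux value for the preset light intensity threshold

def select_scheme(light_intensity: float, raining: bool) -> str:
    """Map the two environment features to a sensing scheme."""
    if light_intensity < LIGHT_THRESHOLD:
        return "millimeter_wave" if raining else "lidar"            # steps 2.2 / 2.1
    return "millimeter_wave+vision" if raining else "lidar+vision"  # steps 2.4 / 2.3

if __name__ == "__main__":
    print(select_scheme(120.0, raining=False))  # -> 'lidar' (step 2.1)
    print(select_scheme(800.0, raining=True))   # -> 'millimeter_wave+vision' (step 2.4)
```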
As a further optimization of the multi-sensor fusion environment sensing method based on environment characteristics, when the laser radar is adopted for environment modeling and obstacle recognition in step 2.1), a multi-feature multi-layer grid map is adopted to divide the environment model, grid information is clustered with a clustering method whose distance threshold is determined by the depth value, and cluster correction is adopted to avoid over-segmentation when determining obstacle edge information.
As a further optimization scheme of the multi-sensor fusion environment perception method based on the environment characteristics, when the millimeter wave radar is adopted for environment modeling and obstacle recognition in the step 2.2), vehicle target detection is carried out based on a decision tree, and the method specifically comprises millimeter wave radar data preprocessing, model construction and network training:
in the model construction, information unrelated to the target classification characteristics is eliminated, and targets are classified by combining eight labels including distance, speed, transverse distance, reflectivity, transverse acceleration, angular speed and relative acceleration; a decision tree algorithm is then constructed based on the ID3 algorithm in MATLAB; the decision tree is trained with training data, gradually reducing the loss value, thereby completing vehicle target detection.
As a further optimization of the multi-sensor fusion environment perception method based on environment characteristics, in step 2.3), when the laser radar and the vision sensor are fused for environment modeling and obstacle recognition:
step 2.3.1), joint calibration is carried out based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: in the spatial calibration, the vision coordinate system and the laser radar coordinate system are converted into the world coordinate system; in time, a data pool is constructed for the data from different sources;
step 2.3.2), regions of interest are divided using a hierarchical clustering method, and the vehicle is further identified with the vision sensor: vehicle bottom shadows are extracted with a local-area statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; the fusion structure of the laser radar and the vision sensor adopts pixel-level fusion and decision-level fusion.
As a further optimization of the multi-sensor fusion environment sensing method based on environment characteristics, in step 2.4), when the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle recognition, a convolutional neural network algorithm performs the visual image processing, the radar signal is processed based on a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual image and radar signal.
Compared with the prior art, the technical scheme provided by the invention has the following technical effects:
1. the invention provides a multi-sensor fusion algorithm based on environmental characteristics, which enhances the robustness of target detection by the intelligent vehicle system under different environments. Meanwhile, because the employed sensors are switched freely according to the weather, the burden of processing the large volume of data produced when all sensors work simultaneously is avoided, which improves the real-time performance of the vehicle while driving.
2. The invention adopts a multi-layer multi-feature grid map in the laser radar grid data processing, so that the different kinds of obstacle information in the environment are divided more clearly, avoiding false detections by the intelligent vehicle while driving.
Drawings
FIG. 1 is a schematic diagram of the principles of the present invention;
FIG. 2 is a schematic diagram of a multi-feature multi-layer grid map construction principle in a lidar detection obstacle;
fig. 3 is a laser radar and vision sensor fusion algorithm structure.
Detailed Description
The technical scheme of the invention is further described in detail below with reference to the accompanying drawings:
This invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the drawings, components are exaggerated for clarity.
Referring to fig. 1, the invention discloses a multi-sensor fusion environment sensing method based on environment characteristics, which comprises the following steps:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on the vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located, and the raindrop sensor is used for judging whether it is raining in that environment.
The principle of the photosensitive sensor is as follows: the photosensitive element converts the light signal into an electric signal, which is amplified so that the current represents the light intensity. The principle of the raindrop sensor is as follows: the piezoelectric effect of a piezoelectric vibrator converts the mechanical displacement produced by raindrops into an electric signal, and the voltage waveform converted from the impact energy indicates whether it is raining.
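As a hedged illustration of these two sensor principles, the sketch below maps an amplified photocurrent to a light-intensity reading and judges rain from spikes in the piezo voltage waveform; the linear current-to-lux mapping and all constants are assumptions, not values from the disclosure:

```python
# Illustrative sketch of the sensor principles above; constants are assumed.

def light_intensity_from_current(current_ma: float, gain_lux_per_ma: float = 50.0) -> float:
    # the amplified photocurrent is taken as proportional to light intensity
    return current_ma * gain_lux_per_ma

def is_raining(piezo_voltage_samples: list, v_threshold: float = 0.3) -> bool:
    # raindrop impacts appear as voltage spikes from the piezoelectric vibrator
    return any(v > v_threshold for v in piezo_voltage_samples)
```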
Step 2), obtaining the current light intensity of the environment where the vehicle is located, and judging whether the environment where the vehicle is located is rainy or not:
Step 2.1), if it is not raining and the current light intensity is smaller than the preset light intensity threshold value:
The current research of scholars at home and abroad shows that illumination conditions greatly influence the precision of the vision sensor. Therefore, when the lighting conditions are poor, the vision sensor is not employed, whereas the laser radar retains good robustness under poor light. Meanwhile, compared with the millimeter wave radar, the laser radar has the advantages of high accuracy, a large amount of information, and freedom from visible-light interference. Thus, the laser radar is employed for environment modeling and obstacle recognition under these environmental features.
Step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold value:
In this case the robustness of the vision sensor is impaired by the poor lighting conditions, and, due to the rain, the accuracy of the laser radar decreases as the density of suspended matter in the air increases. Research shows that rainy weather fills the air with small water drops, and the raindrop radius and the distribution density of raindrops in the air directly determine the probability that the laser collides with raindrops during propagation. The higher the collision probability, the more the propagation of the laser is affected. Therefore, it is not appropriate to use the laser radar for environment modeling in rainy weather, and in such cases the millimeter wave radar is used for environment modeling and obstacle recognition.
Step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold value:
When the illumination conditions are good and it is not raining, both the vision sensor and the laser radar are highly robust, whereas the robustness of the millimeter wave radar can suffer from electromagnetic interference between other communication equipment and the radar; therefore the vision sensor and the laser radar are fused for environment modeling and obstacle recognition.
Step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold value:
When the lighting conditions are good, the vision sensor, as the most mature and cheapest sensor currently in use, can play a large role in environment perception. However, rainy weather also affects the detection accuracy of the vision sensor, so a single vision sensor is insufficient to achieve the required detection accuracy. Meanwhile, the large number of small water drops in the air in rainy weather directly affects the propagation of the laser and hence the detection precision of the laser radar, whereas the millimeter wave radar resists interference from suspended matter much better; therefore the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle recognition.
When the laser radar is adopted for environment modeling and obstacle recognition in step 2.1), a multi-feature multi-layer grid map is adopted to divide the environment model, grid information is clustered with a clustering method whose distance threshold is determined by the depth value, and cluster correction is adopted to avoid over-segmentation when determining obstacle edge information.
Referring to fig. 2, the multi-feature multi-layer grid map divides the environment model into a road layer, an obstacle layer and a suspension layer, combining height features and intensity features in the division. Two thresholds are set on the height feature, a and b with b greater than a: a is set to a smaller value to prevent an obstacle from being falsely detected as ground, and b to a larger value to prevent a rough road from being falsely detected as an obstacle. When ΔH is less than a, the block is judged to be road surface; when ΔH is greater than b, it is judged to be an obstacle or a suspended object; when ΔH lies between a and b inclusive, no decision is made at the height-feature stage, and the decision is made after the intensity feature is introduced. Before the intensity feature is introduced, the intensity is first corrected according to distance:
In the correction formula, focalDistance and K are data given in the laser radar specification, distance is the distance from the laser radar to the obstacle, and intensityVal is the measured intensity value.
After the intensity correction, considering that for pedestrians and vehicles the surface properties of clothing and paint and their color differences are large, the intensity information of the obstacle surface is relatively disordered, giving a larger variance. For the road surface, the material and color are relatively uniform, so the intensity information is relatively uniform and the variance is small. Therefore, road-surface plane blocks can be well separated by the intensity mean and variance. Upper and lower thresholds MeanIb and MeanIa are set for the intensity mean, together with a variance threshold VarIt; when the intensity mean of a plane block lies within the threshold range and its variance is smaller than the variance threshold, the block is judged to be a road-surface plane block.
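For illustration, the intensity handling can be sketched as below; the patent's exact distance-correction formula is not reproduced in the text, so the quadratic range normalization here is an assumed stand-in, and the threshold values are placeholders:

```python
import statistics

FOCAL_DISTANCE, K = 10.0, 1.0                # from the lidar specification (assumed values)
MEAN_IA, MEAN_IB, VAR_IT = 20.0, 60.0, 15.0  # intensity mean bounds and variance threshold

def corrected_intensity(intensity_val: float, distance: float) -> float:
    # assumed stand-in: normalize the measured intensity to the focal distance
    return intensity_val * K * (distance / FOCAL_DISTANCE) ** 2

def is_road_block(intensities: list, distances: list) -> bool:
    vals = [corrected_intensity(i, d) for i, d in zip(intensities, distances)]
    mean_i, var_i = statistics.mean(vals), statistics.pvariance(vals)
    # a uniform surface material gives a tight intensity distribution
    return MEAN_IA <= mean_i <= MEAN_IB and var_i < VAR_IT
```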
After the road surface layer is detected, the height mean H_G of the road surface layer is calculated as the height of the road layer in the current grid. If no road surface layer is detected, H_G = 0 is used. The suspension level height is then solved by the formula:
H_F = H_G + H_V + H_S
where H_V and H_S represent the height of the vehicle itself plus the laser radar, and the safety distance reserved above the roof, respectively. Thus, when the height mean of a surface block in the grid lies between H_G and H_F, the block is judged to belong to the obstacle layer; when the height mean is greater than H_F, the block is judged to belong to the suspension layer. Finally, the data in the grid are compressed: only the height difference and intensity mean data of obstacle plane blocks are retained, and the data of the other plane blocks are eliminated.
After the raster data are layered, the obstacle plane blocks are clustered to obtain the edge information of the obstacle. The method adopted determines the clustering threshold from a distance threshold that depends on the depth value. In the threshold formula, σ_r represents the measurement error of the radar sensor, Δφ is the horizontal angular resolution, r_{n-1} is the depth value of the center point of the obstacle grid, and λ is a threshold parameter.
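A sketch of the depth-dependent clustering follows; the closed form of the threshold is reconstructed from the quantities defined above in the standard adaptive-breakpoint style, and should be read as an assumption rather than the patent's exact expression:

```python
import math

def distance_threshold(r_prev: float, delta_phi_rad: float,
                       sigma_r: float, lam: float) -> float:
    # assumed form: lambda * r_{n-1} * sqrt(2 * (1 - cos(delta_phi))) + 3 * sigma_r
    return lam * r_prev * math.sqrt(2.0 * (1.0 - math.cos(delta_phi_rad))) + 3.0 * sigma_r

def cluster_scan(ranges: list, delta_phi_rad: float,
                 sigma_r: float = 0.03, lam: float = 1.5) -> list:
    """Split an ordered range scan into clusters at adaptive breakpoints."""
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > distance_threshold(
                ranges[i - 1], delta_phi_rad, sigma_r, lam):
            clusters.append(current)
            current = []
        current.append(i)
    clusters.append(current)
    return clusters
```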
Cluster correction is further adopted to avoid over-segmentation after clustering, comparing clusters by similarity. In the similarity formula, the coefficients a, b, c, d are empirical values obtained from multiple experiments; (x_i, y_i) and (x_j, y_j) are the coordinates of the center points of the i-th and j-th obstacles; and V_i, θ_i, T_i represent the speed, movement direction and intensity mean of the obstacle at different moments, respectively.
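A hedged sketch of the cluster correction follows; since the text does not reproduce the similarity formula itself, the linear combination of the four differences below is an assumed form chosen to match the four empirical coefficients a, b, c, d:

```python
import math

def similarity(ci: dict, cj: dict, a=1.0, b=1.0, c=1.0, d=1.0) -> float:
    # assumed form: weighted sum of centre distance and speed/direction/intensity gaps
    pos = math.hypot(ci["x"] - cj["x"], ci["y"] - cj["y"])
    return (a * pos + b * abs(ci["v"] - cj["v"])
            + c * abs(ci["theta"] - cj["theta"]) + d * abs(ci["t"] - cj["t"]))

def merge_oversegmented(clusters: list, threshold: float = 2.0) -> list:
    """Merge adjacent cluster fragments whose similarity score is small."""
    merged = [clusters[0]]
    for c in clusters[1:]:
        if similarity(merged[-1], c) < threshold:
            last = merged[-1]
            merged[-1] = {k: (last[k] + c[k]) / 2.0 for k in last}  # fuse fragments
        else:
            merged.append(c)
    return merged
```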
When the millimeter wave radar is adopted for environment modeling and obstacle recognition in step 2.2), vehicle target detection is performed based on a decision tree, specifically comprising millimeter wave radar data preprocessing, model construction and network training:
In the model construction, information unrelated to the target classification characteristics is eliminated, and targets are classified by combining eight labels including distance, speed, transverse distance, reflectivity, transverse acceleration, angular speed and relative acceleration; a decision tree algorithm is then constructed based on the ID3 algorithm in MATLAB; the decision tree is trained with training data, gradually reducing the loss value, thereby completing vehicle target detection.
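The disclosure builds its ID3 decision tree in MATLAB; as a stand-in sketch only, the snippet below uses scikit-learn's decision tree with the entropy criterion (information gain, the splitting rule ID3 uses) on the labels named above, with placeholder training data:

```python
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["distance", "speed", "transverse_distance", "reflectivity",
            "transverse_acceleration", "angular_speed", "relative_acceleration"]

X_train = [[12.0, 4.1, 0.5, 0.8, 0.1, 0.02, 0.3],   # placeholder samples
           [45.0, 0.0, 3.2, 0.2, 0.0, 0.00, 0.0]]
y_train = [1, 0]                                    # 1 = vehicle target, 0 = clutter

clf = DecisionTreeClassifier(criterion="entropy")   # entropy splits, as in ID3
clf.fit(X_train, y_train)
print(dict(zip(FEATURES, clf.feature_importances_)))
print(clf.predict([[13.0, 3.8, 0.6, 0.75, 0.1, 0.02, 0.25]]))
```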
In step 2.3), when the laser radar and the vision sensor are fused for environment modeling and obstacle recognition:
step 2.3.1), joint calibration is carried out based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: in the spatial calibration, the vision coordinate system and the laser radar coordinate system are converted into the world coordinate system; in time, a data pool is constructed for the data from different sources;
step 2.3.2), the division of regions of interest is realized with a hierarchical clustering method, and the vehicle is further identified with the vision sensor: vehicle bottom shadows are extracted with a local-area statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; referring to fig. 3, the fusion structure of the laser radar and the vision sensor adopts pixel-level fusion and decision-level fusion.
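A sketch of the vision-side verification chain, checking a candidate region for an under-vehicle shadow in YUV space and for left-right symmetry of Sobel edges; the thresholds and the symmetry measure are assumptions:

```python
import cv2
import numpy as np

def has_underbody_shadow(roi_bgr: np.ndarray, y_drop: float = 0.6) -> bool:
    yuv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float32)
    bottom = y[int(0.8 * y.shape[0]):, :].mean()  # lowest strip of the region
    return bottom < y_drop * y.mean()             # markedly darker -> shadow

def edge_symmetry(roi_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    profile = edges.sum(axis=0)                   # vertical-edge energy per column
    flipped = profile[::-1]
    denom = np.linalg.norm(profile) * np.linalg.norm(flipped) + 1e-9
    return float(np.dot(profile, flipped) / denom)  # 1.0 = perfectly symmetric

def looks_like_vehicle(roi_bgr: np.ndarray, sym_threshold: float = 0.9) -> bool:
    return has_underbody_shadow(roi_bgr) and edge_symmetry(roi_bgr) > sym_threshold
```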
When the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle recognition in step 2.4), a convolutional neural network algorithm performs the visual image processing, the radar signal is processed based on a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual image and radar signal.
The specific detection steps are as follows: primary screening of effective targets having vehicle motion characteristics is completed with the Kalman filter and a constant-acceleration model, and Fisher features of the image are extracted by deep learning to identify the vehicle.
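A sketch of the radar-side screening and the weighted-average fusion, using a constant-acceleration Kalman filter for the primary screening and inverse-variance weights for the fusion; all matrices, noise values and weights are assumptions:

```python
import numpy as np

DT = 0.05                            # radar frame period (assumed)
F = np.array([[1, DT, 0.5 * DT**2],
              [0, 1,  DT],
              [0, 0,  1]])           # constant-acceleration state transition
H = np.array([[1.0, 0.0, 0.0]])      # range-only measurement in this sketch
Q = np.eye(3) * 1e-3                 # process noise (assumed)
R = np.array([[0.25]])               # measurement noise (assumed)

def kalman_step(x, P, z):
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(3) - K @ H) @ P
    return x, P

def fuse(radar_range, vision_range, var_radar=0.25, var_vision=1.0):
    # weighted average with inverse-variance weights
    w_r, w_v = 1.0 / var_radar, 1.0 / var_vision
    return (w_r * radar_range + w_v * vision_range) / (w_r + w_v)

x, P = np.zeros(3), np.eye(3)
for z in (10.0, 10.4, 10.9):         # placeholder range returns
    x, P = kalman_step(x, P, np.array([z]))
print(fuse(radar_range=x[0], vision_range=11.2))
```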
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
While the foregoing is directed to embodiments of the present invention, the description is merely illustrative and is not intended to limit the scope of the invention; any modifications, equivalents or improvements made within the spirit and principles of the invention fall within its scope.

Claims (5)

1. The multi-sensor fusion environment sensing method based on the environment characteristics is characterized by comprising the following steps of:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on a vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located; the raindrop sensor is used for judging whether the environment where the vehicle is located is rainy or not;
step 2), obtaining the current light intensity of the environment where the vehicle is located, and judging whether the environment where the vehicle is located is rainy or not;
step 2.1), if it is not raining and the current light intensity is smaller than a preset light intensity threshold value, performing environment modeling and obstacle recognition by using a laser radar;
step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold value, performing environment modeling and obstacle recognition by using a millimeter wave radar;
step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold value, performing environment modeling and obstacle recognition by fusing a vision sensor and the laser radar;
and step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold value, performing environment modeling and obstacle recognition by fusing the millimeter wave radar and the vision sensor.
2. The environmental feature-based multi-sensor fusion environmental perception method according to claim 1, wherein in step 2.1), when the laser radar is adopted for environment modeling and obstacle recognition, the multi-feature multi-layer grid map is adopted to divide the environment model, grid information is clustered with the clustering method whose distance threshold is determined by the depth value, and cluster correction is adopted to avoid over-segmentation when determining obstacle edge information.
3. The environmental feature-based multi-sensor fusion environmental perception method according to claim 1, wherein when the millimeter wave radar is adopted to perform environmental modeling and obstacle recognition in the step 2.2), the vehicle target detection is performed based on a decision tree, and specifically comprises millimeter wave radar data preprocessing, model construction and network training:
in the model construction, information unrelated to the target classification characteristics is eliminated, and targets are classified by combining eight labels including distance, speed, transverse distance, reflectivity, transverse acceleration, angular speed and relative acceleration; a decision tree algorithm is then constructed based on the ID3 algorithm in MATLAB; the decision tree is trained with training data, gradually reducing the loss value, thereby completing vehicle target detection.
4. The environmental feature-based multi-sensor fusion environmental perception method according to claim 1, wherein in step 2.3) the laser radar and the vision sensor are fused for environment modeling and obstacle recognition:
step 2.3.1), joint calibration is carried out based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: in the spatial calibration, the vision coordinate system and the laser radar coordinate system are converted into the world coordinate system; in time, a data pool is constructed for the data from different sources;
step 2.3.2), regions of interest are divided using a hierarchical clustering method, and the vehicle is further identified with the vision sensor: vehicle bottom shadows are extracted with a local-area statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; the fusion structure of the laser radar and the vision sensor adopts pixel-level fusion and decision-level fusion.
5. The environmental feature-based multi-sensor fusion environmental perception method according to claim 1, wherein in step 2.4), when the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle recognition, a convolutional neural network algorithm performs the visual image processing, the radar signal is processed based on a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual image and radar signal.
CN202011063190.3A 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics Active CN112257522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011063190.3A CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011063190.3A CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Publications (2)

Publication Number Publication Date
CN112257522A CN112257522A (en) 2021-01-22
CN112257522B true CN112257522B (en) 2024-02-20

Family

ID=74233752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011063190.3A Active CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Country Status (1)

Country Link
CN (1) CN112257522B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113561894A (en) * 2021-08-20 2021-10-29 郑州睿行汽车科技有限公司 Height-limiting detection vehicle control system based on binocular stereo vision and 4D millimeter wave radar and control method thereof
CN113920782B (en) * 2021-10-08 2022-08-09 安徽江淮汽车集团股份有限公司 Multi-sensor fusion method applied to parking space detection
CN115630335B (en) * 2022-10-28 2023-06-27 北京中科东信科技有限公司 Road information generation method based on multi-sensor fusion and deep learning model
CN115639536B (en) * 2022-11-18 2023-03-21 陕西欧卡电子智能科技有限公司 Unmanned ship perception target detection method and device based on multi-sensor fusion
CN116796210B (en) * 2023-08-25 2023-11-28 山东莱恩光电科技股份有限公司 Barrier detection method based on laser radar

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN109166314A (en) * 2018-09-29 2019-01-08 河北德冠隆电子科技有限公司 Road conditions awareness apparatus and bus or train route cooperative system based on omnidirectional tracking detection radar
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160223643A1 (en) * 2015-01-28 2016-08-04 Wenhua Li Deep Fusion of Polystatic MIMO Radars with The Internet of Vehicles for Interference-free Environmental Perception

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN109166314A (en) * 2018-09-29 2019-01-08 河北德冠隆电子科技有限公司 Road conditions awareness apparatus and bus or train route cooperative system based on omnidirectional tracking detection radar
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot

Also Published As

Publication number Publication date
CN112257522A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112257522B (en) Multi-sensor fusion environment sensing method based on environment characteristics
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN109858460B (en) Lane line detection method based on three-dimensional laser radar
CN109684921B (en) Road boundary detection and tracking method based on three-dimensional laser radar
CN108983219B (en) Fusion method and system for image information and radar information of traffic scene
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
Han et al. Research on road environmental sense method of intelligent vehicle based on tracking check
WO2020135810A1 (en) Multi-sensor data fusion method and device
CN105892471B (en) Automatic driving method and apparatus
US8670592B2 (en) Clear path detection using segmentation-based method
CN112215306B (en) Target detection method based on fusion of monocular vision and millimeter wave radar
CN108460416A (en) A kind of structured road feasible zone extracting method based on three-dimensional laser radar
CN113009453B (en) Mine road edge detection and mapping method and device
CN113345237A (en) Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data
CN112147615B (en) Unmanned perception method based on all-weather environment monitoring system
CN111461048B (en) Vision-based parking lot drivable area detection and local map construction method
CN113192091A (en) Long-distance target sensing method based on laser radar and camera fusion
CN102201054A (en) Method for detecting street lines based on robust statistics
CN112597839B (en) Road boundary detection method based on vehicle-mounted millimeter wave radar
CN114475573B (en) Fluctuating road condition identification and vehicle control method based on V2X and vision fusion
CN114415171A (en) Automobile travelable area detection method based on 4D millimeter wave radar
CN112666553A (en) Road ponding identification method and equipment based on millimeter wave radar
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
CN117237919A (en) Intelligent driving sensing method for truck through multi-sensor fusion detection under cross-mode supervised learning
CN112666573B (en) Detection method for retaining wall and barrier behind mine unloading area vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant