CN112257522A - Multi-sensor fusion environment sensing method based on environment characteristics - Google Patents

Multi-sensor fusion environment sensing method based on environment characteristics

Info

Publication number
CN112257522A
Authority
CN
China
Prior art keywords
sensor
environment
vehicle
light intensity
environmental
Prior art date
Legal status
Granted
Application number
CN202011063190.3A
Other languages
Chinese (zh)
Other versions
CN112257522B (en)
Inventor
王展
王春燕
赵万忠
王一松
刘利锋
秦亚娟
刘晓强
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202011063190.3A
Publication of CN112257522A
Application granted
Publication of CN112257522B
Legal status: Active
Anticipated expiration

Classifications

    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G01D21/02 Measuring two or more variables by means not covered by a single other subclass
    • G01S13/865 Combination of radar systems with lidar systems
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/41 Radar systems using analysis of the echo signal for target characterisation; target signature; target cross-section
    • G01S7/4802 Lidar systems using analysis of the echo signal for target characterisation; target signature; target cross-section
    • G06F18/23 Pattern recognition: clustering techniques
    • G06F18/24323 Pattern recognition: tree-organised classifiers
    • G06F18/25 Pattern recognition: fusion techniques
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06N3/045 Neural networks: combinations of networks
    • G06N3/08 Neural networks: learning methods
    • G01S2013/9322 Anti-collision radar for land vehicles using additional data, e.g. driver condition, road state or weather data
    • G06V2201/07 Indexing scheme for image or video recognition: target detection


Abstract

The invention discloses a multi-sensor fusion environment sensing method based on environmental characteristics. The method first senses the environmental characteristics of the vehicle's surroundings, namely the light intensity and whether it is raining, and selects the sensor or sensor combination suited to those characteristics. The environment is then modeled and obstacle information in the environment is detected with the selected approach. When the laser radar detects obstacle information, a multi-feature multi-layer grid map is adopted to layer the obstacle environment information, which improves detection accuracy and avoids false detection of suspended objects. The invention solves the problem that different sensors have different robustness under different environmental characteristics: by selecting sensors according to the environmental characteristics, it increases the robustness and accuracy of target detection of the intelligent vehicle under various environmental working conditions.

Description

Multi-sensor fusion environment sensing method based on environment characteristics
Technical Field
The invention relates to the field of environment perception in unmanned driving, in particular to a multi-sensor fusion environment perception method based on environmental characteristics.
Background
As a new mode of transportation, the intelligent connected vehicle can avoid dangerous driving behaviors, assist the driver in normal driving, improve driving comfort and relieve traffic congestion, and has therefore attracted great attention and developed rapidly worldwide. As one of the important components of the intelligent connected vehicle, the target detection system senses environmental information with sensors and thereby provides important information to the other subsystems. At present, the sensors widely applied on intelligent vehicles mainly comprise laser radar, millimeter wave radar, vision sensors and the like. Because of the limitations of any single sensor, current research on environment perception focuses on joint detection with multiple sensors. For example, Chinese patent application No. CN201821484645.7, entitled "a multi-parameter mobile environment sensing robot", employs a vision sensor, a water level sensor, a temperature and humidity sensor, an ultrasonic sensor, a dust detection sensor, an ultraviolet sensor and a vital sign detection sensor to detect environmental features, with a joint control module transmitting the information to be detected in the environment. Chinese patent application No. CN201711085245.9, entitled "information fusion method of intelligent vehicle sensing system", integrates laser radar, millimeter wave radar, a vision sensor and ultrasonic radar; its detection logic is: first judge whether the ultrasonic sensor detects an obstacle; if so, the millimeter wave and vision sensor information is not processed, and if not, it is processed; if the millimeter wave and vision sensors detect the same obstacle, the information of the two sensors is fused by the processor, and if only one sensor detects the obstacle, only the information of that detector is output. Neither of the above patents takes into account the robustness of the sensors used under different environmental characteristics. Different sensors exhibit different robustness under conditions such as poor illumination, dense airborne matter or electromagnetic interference; ignoring these factors often degrades the detection result, reducing detection accuracy and compromising the safety and intelligence of the intelligent vehicle while driving.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-sensor fusion environment sensing method based on environmental characteristics, in view of the defects of the background art.
The invention adopts the following technical scheme for solving the technical problems:
the multi-sensor fusion environment perception method based on the environment characteristics comprises the following steps:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on a vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located; the raindrop sensor is used for judging whether the environment where the vehicle is located is rainy;
step 2), obtaining the current light intensity of the environment where the vehicle is located, and judging whether the environment where the vehicle is located is rainy;
step 2.1), if it is not raining and the current light intensity is smaller than the preset light intensity threshold, adopting the laser radar for environment modeling and obstacle identification;
step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold, adopting the millimeter wave radar for environment modeling and obstacle identification;
step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold, fusing the vision sensor and the laser radar for environment modeling and obstacle identification;
step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold, fusing the millimeter wave radar and the vision sensor for environment modeling and obstacle identification.
As a further optimization scheme of the multi-sensor fusion environment sensing method based on environmental characteristics, when the laser radar is adopted for environment modeling and obstacle identification in step 2.1), the environment model is divided with a multi-feature multi-layer grid map, grid information is clustered with a clustering method whose distance threshold is determined by depth values, and cluster correction is applied to avoid over-segmentation when determining obstacle edge information.
As a further optimization scheme of the multi-sensor fusion environment sensing method based on environmental characteristics, when the millimeter wave radar is adopted for environment modeling and obstacle recognition in step 2.2), vehicle target detection is performed with a decision tree, specifically comprising millimeter wave radar data preprocessing, model construction and model training:
in the model construction, information irrelevant to the classification of targets is removed, and targets are classified according to eight labels, including distance, speed, transverse distance, reflectivity, transverse acceleration, angular velocity and relative acceleration; a decision tree is then built with the ID3 algorithm in MATLAB; the decision tree is trained on the training data, gradually reducing the loss value, so as to accomplish vehicle target detection.
As a further optimization scheme of the multi-sensor fusion environment sensing method based on environmental characteristics, the laser radar and the vision sensor are fused in step 2.3) for environment modeling and obstacle identification:
step 2.3.1), joint calibration is performed based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: for spatial calibration, the vision coordinate system and the laser radar coordinate system are both converted into the world coordinate system; for time synchronization, a data pool is constructed for the data from different sources;
step 2.3.2), regions of interest are divided using a hierarchical clustering method, and vehicles are further identified with the vision sensor: under-vehicle shadows are extracted with a partial-region statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; pixel-level fusion and decision-level fusion are adopted in the fusion structure of the laser radar and the vision sensor.
As a further optimization scheme of the multi-sensor fusion environment sensing method based on environmental characteristics, when the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle identification in step 2.4), a convolutional neural network algorithm processes the visual images, the radar signals are processed with a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual images and radar signals.
Compared with the prior art, the above technical scheme of the invention has the following technical effects:
1. The invention provides a multi-sensor fusion algorithm based on environmental characteristics, enhancing the robustness of target detection of the intelligent vehicle system in different environments. Meanwhile, the sensors employed can be switched freely according to the weather, which avoids processing the large amount of data produced when all sensors work together and improves the real-time performance of the vehicle while driving.
2. The invention adopts a multi-layer multi-feature grid map in the laser radar grid data processing, so that the information of different types of obstacles in the environment is divided more clearly, avoiding false detections by the intelligent vehicle while driving.
Drawings
FIG. 1 is a schematic diagram of the principles of the present invention;
FIG. 2 is a schematic diagram of a multi-feature multi-layer grid map construction principle in the laser radar detection of obstacles;
FIG. 3 is a schematic diagram of the fusion algorithm structure of the laser radar and the vision sensor.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the attached drawings:
the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, components are exaggerated for clarity.
Referring to FIG. 1, the invention discloses a multi-sensor fusion environment sensing method based on environmental characteristics, comprising the following steps:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on the vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located, and the raindrop sensor is used for judging whether the environment where the vehicle is located is rainy.
The principle of the photosensitive sensor is as follows: the photosensitive element converts the light signal into an electric signal, which is amplified so that the current magnitude represents the light intensity. The principle of the raindrop sensor is as follows: the piezoelectric effect of the piezoelectric vibrator converts the mechanical displacement generated by raindrops into an electric signal, and the voltage waveform converted from the impact energy indicates whether it is raining.
Step 2), the current light intensity of the environment where the vehicle is located is obtained, and whether it is raining is judged:
Step 2.1), if it is not raining and the current light intensity is smaller than the preset light intensity threshold:
From the current state of research at home and abroad, it is known that illumination conditions strongly influence the precision of the vision sensor, so the vision sensor is not used when illumination is poor. The laser radar, by contrast, remains robust under poor light conditions; moreover, compared with the millimeter wave radar, the laser radar offers high precision, a large quantity of information and freedom from visible-light interference. Therefore, under these environmental characteristics, the laser radar is adopted for environment modeling and obstacle identification.
Step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold:
In this case the robustness of the vision sensor is degraded by the poor illumination, and the accuracy of the laser radar decreases as rain increases the density of airborne matter. Research shows that rain consists of small water droplets, and the raindrop radius and the distribution density of raindrops in the air directly determine the probability that the laser collides with raindrops during propagation; the higher the collision probability, the greater the influence on the propagation speed of the laser. The laser radar is therefore unsuitable for environment modeling in rainy weather, and in such cases the millimeter wave radar is used for environment modeling and obstacle recognition.
Step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold:
When illumination is good and there is no rain, both the vision sensor and the laser radar are highly robust, while the robustness of the millimeter wave radar can suffer from electromagnetic interference between other communication equipment and the radar; the vision sensor and the laser radar are therefore fused for environment modeling and obstacle identification.
Step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold:
Under good illumination the vision sensor, currently the most mature and inexpensive sensor in use, can contribute greatly to environment perception; rainy weather, however, still affects its detection accuracy, so a single vision sensor does not suffice. Meanwhile, the large number of water droplets in the air in rainy weather directly affects the propagation speed of the laser and thus the detection precision of the laser radar, whereas the millimeter wave radar resists interference from airborne matter well; the millimeter wave radar and the vision sensor are therefore fused for environment modeling and obstacle recognition.
When the laser radar is adopted for environment modeling and obstacle identification in step 2.1), the environment model is divided with a multi-feature multi-layer grid map, grid information is clustered with a clustering method whose distance threshold is determined by depth values, and cluster correction is applied to avoid over-segmentation when determining obstacle edge information.
Referring to FIG. 2, the multi-feature multi-layer grid map divides the environment model into a road surface layer, an obstacle layer and a suspension layer, combining height features and intensity features in the division. Two thresholds are set for the height feature: a and b, with b greater than a; a is set to a small value to prevent an obstacle from being falsely detected as ground, and b is set to a large value to prevent an undulating road surface from being falsely detected as an obstacle. When ΔH < a, the cell is judged to be road surface; when ΔH > b, it is judged to be an obstacle or a suspended object; and when a ≤ ΔH ≤ b, no judgment is made at the height-feature stage, the decision being made after the intensity feature is introduced. Before the intensity feature is introduced, the intensity must first be corrected according to distance, with the following correction formula:
[intensity correction formula, shown only as an image in the original publication, expressing the corrected intensity as a function of focalDistance, K, distance and intensityVal]
where focalDistance and K are given in the laser radar specification, distance is the distance from the laser radar to the obstacle, and intensityVal is the measured intensity value.
After intensity correction, it is considered that for pedestrians and vehicles the surface properties and colors of clothing and paint differ greatly, so the intensity information of obstacle surfaces fluctuates strongly, giving a large variance; for the road surface, the material and color are relatively uniform, so its intensity information is consistent and the variance is small. Road surface blocks can therefore be separated well by the intensity mean and intensity variance. With the upper and lower thresholds of the intensity mean set to MeanIa and MeanIb respectively and the variance threshold set to VarIt, a plane block is judged to be a road surface plane block when its intensity mean lies within the threshold range and its variance is smaller than the variance threshold.
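A minimal sketch of this two-stage cell classification (height feature first, intensity statistics for the undecided band) may read as follows; mean_low, mean_high and var_thresh stand for MeanIb, MeanIa and VarIt, and all numeric defaults are invented for illustration:

```python
import statistics

def classify_cell(delta_h, intensities, a=0.15, b=2.0,
                  mean_low=20.0, mean_high=60.0, var_thresh=25.0):
    """Classify one grid cell. delta_h is the height difference of the
    cell's surface block; intensities are its distance-corrected
    intensity values. All numeric defaults are placeholders."""
    if delta_h < a:
        return "road surface"
    if delta_h > b:
        return "obstacle or suspended object"
    # a <= delta_h <= b: undecided by height, decide by intensity statistics
    mean_i = statistics.fmean(intensities)
    var_i = statistics.pvariance(intensities)
    if mean_low <= mean_i <= mean_high and var_i < var_thresh:
        return "road surface"   # uniform material and color: small variance
    return "obstacle or suspended object"  # clothing/paint: large variance
```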
After the road surface layer is detected, the height mean H_G is taken as the height of the road surface layer in the grid; if no road surface layer is detected, H_G = 0. The suspension-layer boundary height is then solved as follows:
H_F = H_G + H_V + H_S
where H_V and H_S respectively denote the height of the vehicle itself plus the laser radar, and the safety distance reserved above the vehicle roof. Thus, when the mean height of a surface block in the grid lies between H_G and H_F, the surface block is judged to belong to the obstacle layer; when the mean height of a surface block in the grid is greater than H_F, the surface block is judged to belong to the suspension layer. Finally the data in the grid are compressed: only the height-difference and intensity-mean data of obstacle plane blocks are kept, and the data of other plane blocks are removed.
After the grid data are layered, the obstacle plane blocks are clustered to obtain obstacle edge information. A clustering method is used in which the clustering threshold is a distance threshold determined by the depth value, as follows:
[distance-threshold formula, shown only as an image in the original publication]
where σ_r is the measurement error of the radar sensor, Δφ is the horizontal angular resolution, r_{n-1} is the depth value of the center point of the obstacle grid, and λ is a threshold parameter.
After clustering, cluster correction is further applied to avoid over-segmentation, comparing clusters by similarity with the following formula:
[similarity formula, shown only as an image in the original publication]
where the coefficients a, b, c and d are empirical values obtained through repeated experiments; (x_i, y_i) and (x_j, y_j) are the center-point coordinates of the i-th and j-th obstacles; and V_i, θ_i and T_i respectively denote the speed, moving direction and intensity mean of an obstacle block at different moments.
When the millimeter wave radar is adopted for environment modeling and obstacle recognition in step 2.2), vehicle target detection is performed with a decision tree, specifically comprising millimeter wave radar data preprocessing, model construction and model training:
in the model construction, information irrelevant to the classification of targets is removed, and targets are classified according to eight labels, including distance, speed, transverse distance, reflectivity, transverse acceleration, angular velocity and relative acceleration; a decision tree is then built with the ID3 algorithm in MATLAB; the decision tree is trained on the training data, gradually reducing the loss value, so as to accomplish vehicle target detection.
In step 2.3), the laser radar and the vision sensor are fused for environment modeling and obstacle identification:
step 2.3.1), joint calibration is performed based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: for spatial calibration, the vision coordinate system and the laser radar coordinate system are both converted into the world coordinate system; for time synchronization, a data pool is constructed for the data from different sources (a sketch of these two operations follows);
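A minimal sketch of both operations of step 2.3.1): the 4x4 extrinsic matrix is assumed to come from the joint calibration, and the class name and skew tolerance are illustrative assumptions:

```python
import numpy as np

def to_world(points, extrinsic):
    """Transform Nx3 sensor-frame points into the world frame with a
    4x4 extrinsic matrix obtained from joint calibration."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (extrinsic @ homo.T).T[:, :3]

class DataPool:
    """Time-synchronization pool: buffer stamped frames per source and
    pair a query stamp with the nearest laser radar frame."""
    def __init__(self, max_skew=0.05):        # assumed 50 ms tolerance
        self.frames = {"lidar": [], "camera": []}
        self.max_skew = max_skew

    def push(self, source, stamp, data):
        self.frames[source].append((stamp, data))

    def nearest_lidar(self, stamp):
        if not self.frames["lidar"]:
            return None
        t, data = min(self.frames["lidar"], key=lambda f: abs(f[0] - stamp))
        return data if abs(t - stamp) <= self.max_skew else None
```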
step 2.3.2), regions of interest are divided using a hierarchical clustering method, and vehicles are further identified with the vision sensor: under-vehicle shadows are extracted with a partial-region statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; referring to FIG. 3, pixel-level fusion and decision-level fusion are adopted in the fusion structure of the laser radar and the vision sensor.
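Two of the visual cues of step 2.3.2) can be sketched with OpenCV as follows; the statistical shadow threshold is a simplified stand-in for the partial-region statistical segmentation, and k is a placeholder parameter:

```python
import cv2
import numpy as np

def underbody_shadow_mask(bgr_roi, k=1.5):
    """Mark dark under-vehicle shadow pixels: threshold the Y (luma)
    channel of the YUV image at mean - k*std of the road ROI."""
    y = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YUV)[:, :, 0].astype(np.float32)
    return y < (y.mean() - k * y.std())

def symmetry_score(gray_roi):
    """Vehicle symmetry cue: compare the Sobel vertical-edge map of
    the left half with the mirrored right half (1.0 = symmetric)."""
    edges = np.abs(cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0, ksize=3))
    w = edges.shape[1] // 2
    left, right = edges[:, :w], np.fliplr(edges[:, -w:])
    denom = left.sum() + right.sum() + 1e-6
    return 1.0 - np.abs(left - right).sum() / denom
```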
When the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle identification in step 2.4), a convolutional neural network algorithm processes the visual images, the radar signals are processed with a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual images and radar signals.
Specifically, effective targets with vehicle motion characteristics are preliminarily screened with the Kalman filter and a normal-acceleration model, and Fisher features are extracted from the images by deep learning to identify vehicles.
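A compact sketch of the radar-side filtering and the final fusion: a constant-velocity Kalman filter stands in for the filter built on the two-degree-of-freedom vehicle model (whose parameters the patent does not list), and the fusion weights are illustrative:

```python
import numpy as np

class KalmanCV:
    """Constant-velocity Kalman filter for one radar target,
    state [x, y, vx, vy]; a simplified stand-in for the filter based
    on the two-degree-of-freedom vehicle model."""
    def __init__(self, dt=0.05, q=0.1, r=0.5):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # measures position only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the radar position measurement z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

def fuse(radar_pos, vision_pos, w_radar=0.6, w_vision=0.4):
    """Weighted-average information fusion of the two position
    estimates; the weights are placeholders (e.g. variance-based)."""
    return w_radar * np.asarray(radar_pos) + w_vision * np.asarray(vision_pos)
```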
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only illustrative of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. The multi-sensor fusion environment sensing method based on the environment characteristics is characterized by comprising the following steps:
step 1), arranging a photosensitive sensor, a raindrop sensor, a laser radar, a millimeter wave radar and a vision sensor on a vehicle, wherein the photosensitive sensor is used for detecting the light intensity of the environment where the vehicle is located; the raindrop sensor is used for judging whether the environment where the vehicle is located is rainy;
step 2), obtaining the current light intensity of the environment where the vehicle is located, and judging whether the environment where the vehicle is located is rainy;
step 2.1), if it is not raining and the current light intensity is smaller than the preset light intensity threshold, adopting the laser radar for environment modeling and obstacle identification;
step 2.2), if it is raining and the current light intensity is smaller than the preset light intensity threshold, adopting the millimeter wave radar for environment modeling and obstacle identification;
step 2.3), if it is not raining and the current light intensity is greater than or equal to the preset light intensity threshold, fusing the vision sensor and the laser radar for environment modeling and obstacle identification;
step 2.4), if it is raining and the current light intensity is greater than or equal to the preset light intensity threshold, fusing the millimeter wave radar and the vision sensor for environment modeling and obstacle identification.
2. The multi-sensor fusion environment sensing method based on environmental characteristics according to claim 1, wherein when the laser radar is adopted for environment modeling and obstacle identification in step 2.1), the environment model is divided with a multi-feature multi-layer grid map, grid information is clustered with a clustering method whose distance threshold is determined by depth values, and cluster correction is applied to avoid over-segmentation when determining obstacle edge information.
3. The multi-sensor fusion environment sensing method based on environmental characteristics according to claim 1, wherein when the millimeter wave radar is adopted for environment modeling and obstacle recognition in step 2.2), vehicle target detection is performed with a decision tree, specifically comprising millimeter wave radar data preprocessing, model construction and model training:
in the model construction, information irrelevant to the classification of targets is removed, and targets are classified according to eight labels, including distance, speed, transverse distance, reflectivity, transverse acceleration, angular velocity and relative acceleration; a decision tree is then built with the ID3 algorithm in MATLAB; the decision tree is trained on the training data, gradually reducing the loss value, so as to accomplish vehicle target detection.
4. The multi-sensor fusion environment sensing method based on environmental characteristics according to claim 1, wherein in step 2.3), the laser radar and the vision sensor are fused for environment modeling and obstacle identification:
step 2.3.1), joint calibration is performed based on the sensor models to realize spatial calibration and time synchronization of the laser radar and camera data: for spatial calibration, the vision coordinate system and the laser radar coordinate system are both converted into the world coordinate system; for time synchronization, a data pool is constructed for the data from different sources;
step 2.3.2), regions of interest are divided using a hierarchical clustering method, and vehicles are further identified with the vision sensor: under-vehicle shadows are extracted with a partial-region statistical segmentation method based on the YUV space, vehicle symmetry is detected with the Sobel edge detection operator, and texture features finally determine whether a vehicle exists; pixel-level fusion and decision-level fusion are adopted in the fusion structure of the laser radar and the vision sensor.
5. The multi-sensor fusion environment sensing method based on environmental characteristics according to claim 1, wherein when the millimeter wave radar and the vision sensor are fused for environment modeling and obstacle recognition in step 2.4), a convolutional neural network algorithm processes the visual images, the radar signals are processed with a Kalman filter and a two-degree-of-freedom vehicle model, and a weighted-average information fusion algorithm fuses the processed visual images and radar signals.
CN202011063190.3A 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics Active CN112257522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011063190.3A CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011063190.3A CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Publications (2)

Publication Number Publication Date
CN112257522A (en) 2021-01-22
CN112257522B CN112257522B (en) 2024-02-20

Family

ID=74233752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011063190.3A Active CN112257522B (en) 2020-09-30 2020-09-30 Multi-sensor fusion environment sensing method based on environment characteristics

Country Status (1)

Country Link
CN (1) CN112257522B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160223643A1 (en) * 2015-01-28 2016-08-04 Wenhua Li Deep Fusion of Polystatic MIMO Radars with The Internet of Vehicles for Interference-free Environmental Perception
CN107609522A (en) * 2017-09-19 2018-01-19 东华大学 A kind of information fusion vehicle detecting system based on laser radar and machine vision
CN109166314A (en) * 2018-09-29 2019-01-08 河北德冠隆电子科技有限公司 Road conditions awareness apparatus and bus or train route cooperative system based on omnidirectional tracking detection radar
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113561894A (en) * 2021-08-20 2021-10-29 郑州睿行汽车科技有限公司 Height-limiting detection vehicle control system based on binocular stereo vision and 4D millimeter wave radar and control method thereof
CN113920782A (en) * 2021-10-08 2022-01-11 安徽江淮汽车集团股份有限公司 Multi-sensor fusion method applied to parking space detection
CN115630335A (en) * 2022-10-28 2023-01-20 北京中科东信科技有限公司 Road information generation method based on multi-sensor fusion and deep learning model
CN115630335B (en) * 2022-10-28 2023-06-27 北京中科东信科技有限公司 Road information generation method based on multi-sensor fusion and deep learning model
CN115639536A (en) * 2022-11-18 2023-01-24 陕西欧卡电子智能科技有限公司 Unmanned ship perception target detection method and device based on multi-sensor fusion
CN115639536B (en) * 2022-11-18 2023-03-21 陕西欧卡电子智能科技有限公司 Unmanned ship perception target detection method and device based on multi-sensor fusion
CN116796210A (en) * 2023-08-25 2023-09-22 山东莱恩光电科技股份有限公司 Barrier detection method based on laser radar
CN116796210B (en) * 2023-08-25 2023-11-28 山东莱恩光电科技股份有限公司 Barrier detection method based on laser radar
CN118062016A (en) * 2024-04-25 2024-05-24 深圳市天之眼高新科技有限公司 Vehicle environment sensing method, apparatus and storage medium

Also Published As

Publication number Publication date
CN112257522B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN112257522B (en) Multi-sensor fusion environment sensing method based on environment characteristics
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN109858460B (en) Lane line detection method based on three-dimensional laser radar
CN109684921B (en) Road boundary detection and tracking method based on three-dimensional laser radar
CN105892471B (en) Automatic driving method and apparatus
CN110178167B (en) Intersection violation video identification method based on cooperative relay of cameras
CN108647646B (en) Low-beam radar-based short obstacle optimized detection method and device
CN108334819B (en) Ground classifier system for automated vehicles
CN111369541B (en) Vehicle detection method for intelligent automobile under severe weather condition
WO2020135810A1 (en) Multi-sensor data fusion method and device
CN112215306B (en) Target detection method based on fusion of monocular vision and millimeter wave radar
US20100098297A1 (en) Clear path detection using segmentation-based method
CN113009453B (en) Mine road edge detection and mapping method and device
CN112147615B (en) Unmanned perception method based on all-weather environment monitoring system
CN113345237A (en) Lane-changing identification and prediction method, system, equipment and storage medium for extracting vehicle track by using roadside laser radar data
CN111461048B (en) Vision-based parking lot drivable area detection and local map construction method
CN112597839B (en) Road boundary detection method based on vehicle-mounted millimeter wave radar
CN102201054A (en) Method for detecting street lines based on robust statistics
CN112674646B (en) Self-adaptive welting operation method based on multi-algorithm fusion and robot
CN114898296A (en) Bus lane occupation detection method based on millimeter wave radar and vision fusion
CN112101316B (en) Target detection method and system
CN114475573B (en) Fluctuating road condition identification and vehicle control method based on V2X and vision fusion
CN114332647B (en) River channel boundary detection and tracking method and system for unmanned ship
CN114415171A (en) Automobile travelable area detection method based on 4D millimeter wave radar
CN106529404A (en) Imaging principle-based recognition method for pilotless automobile to recognize road marker line

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant