CN113432615A - Detection method and system based on multi-sensor fusion drivable area and vehicle - Google Patents

Detection method and system based on multi-sensor fusion drivable area and vehicle

Info

Publication number
CN113432615A
CN113432615A
Authority
CN
China
Prior art keywords
lane
data point
ass
target data
visual data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110876811.8A
Other languages
Chinese (zh)
Other versions
CN113432615B (en)
Inventor
王皓
谭余
刘金彦
张帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202110876811.8A
Publication of CN113432615A
Application granted
Publication of CN113432615B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3446Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a detection method and system based on a multi-sensor fusion drivable area, and a vehicle. The method detects the drivable area in real time, continuously outputs the maximum passable range of each lane, effectively compensates for missed detections of a single target detection module, improves the safety of the whole vehicle, and provides an effective basis for the planning and decision-making of an unmanned driving system.

Description

Detection method and system based on multi-sensor fusion drivable area and vehicle
Technical Field
The invention relates to the technical field of automatic vehicle driving, and in particular to a detection method and system based on a multi-sensor fusion drivable area, and a vehicle.
Background
An automatic driving system is an active safety system that automatically controls the running of a vehicle, including driving, lane changing and parking, improving the driving experience and comfort while ensuring driving safety. The perception system is a key component of the automatic driving system: it senses the environment using sensors installed on the vehicle, such as cameras, millimeter-wave radars, lidars and ultrasonic sensors, and enables efficient and safe automatic driving, in compliance with traffic rules, by identifying lane lines, vehicle and pedestrian targets and traffic signs, and by computing the drivable area. In an unmanned driving system, the detection of surrounding vehicles, pedestrians and obstacles is an important subsystem of the perception system and directly influences planning, decision-making and control. In recent years, the use of the drivable area has begun to follow certain trends, forming an architecture of a main perception system plus a safety perception system. The main perception system performs conventional target-level scene perception: target states are described by length, width, height, speed, category and the like, intentions or trajectories are predicted, and obstacles in the scene are represented by explicit results. The safety perception system (corresponding to the drivable area) de-emphasizes target-level perception, especially for dynamic targets, and represents results implicitly. The two perception systems can cross-check each other in the fusion module.
At present, cameras, lidars and the like are mainly used to detect the drivable area, and two representations are common. One is the vector envelope representation, in which a certain number of points are arranged in a polar or rectangular coordinate system; its advantage is a small data volume, its disadvantage is that it is difficult to express passability behind an obstacle, and it is mostly used with vision. The other is the grid representation, which may use fixed or variable grids; its advantage is that it can express passability around obstacles, its disadvantage is a large data volume, and it is mostly used with lidar. In an unmanned driving system, vision-based target-level detection suffers from missed detection of crossing vehicles, large size errors, missed detection of short and small static objects, and the like; such target detection defects have a strongly negative effect on the planning and decision system and thus affect the safety of the whole vehicle.
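By way of illustration, the two representations can be contrasted in a minimal Python sketch (the 72-point envelope, the 200 x 100 cell grid and the variable names are assumptions for illustration only):

```python
import numpy as np

# Vector envelope: a fixed number of boundary points in polar form
# (angle, range) around the ego vehicle; compact, but it cannot express
# passable space behind an obstacle.
angles = np.linspace(-np.pi, np.pi, 72)
ranges = np.full(72, 50.0)               # e.g. 50 m free in every direction
envelope = list(zip(angles, ranges))

# Grid representation: a fixed-resolution occupancy grid; it can express
# passability around obstacles, at the cost of a much larger data volume.
grid = np.zeros((200, 100), dtype=np.uint8)  # 0 = free, 1 = occupied
grid[120:130, 45:55] = 1                     # e.g. cells occupied by an obstacle
```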
Disclosure of Invention
The invention aims to provide a detection method and system based on a multi-sensor fusion drivable area, and a vehicle, which can detect the drivable area in real time, continuously output the maximum passable range of each lane, effectively compensate for missed detections of a single target detection module, improve the safety of the whole vehicle, and provide an effective basis for the planning and decision-making of an unmanned driving system.
In order to achieve the above purpose, the invention provides a detection method based on a multi-sensor fusion drivable area, comprising the following steps:
acquiring, in real time, visual data points collected by a vehicle-mounted vision sensor and target data points collected by a vehicle-mounted radar; wherein the visual data points comprise the types and position coordinates of obstacles including vehicles, pedestrians and curbs, and the target data points comprise the types, positions and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_lane_i_dataset for the corresponding lane; and screening out the target data points in each lane and generating a target data point set Mmw_lane_i_dataset for the corresponding lane; wherein i denotes the current lane, the left lane and the right lane; or i denotes the current lane and the left lane; or i denotes the current lane and the right lane;
calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set of each lane, wherein the minimum visual data point is the coordinate point in the visual data point set closest to the vehicle, and the minimum target data point is the coordinate point in the target data point set closest to the vehicle;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point other than the minimum visual data point P_C_i_min whose distance to P_C_i_min is less than a first preset threshold P_i; if so, the association succeeds, and Ass_C_i_DC = C; otherwise the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC denotes the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is less than a second preset threshold Q_i; if so, the association succeeds, and Ass_C_i_DM = C; otherwise the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM denotes the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM equals C; if so, the association succeeds and Ass_C_i = C; otherwise the association fails and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is less than a third preset threshold F_i; if so, the association succeeds, and Ass_M_i_DC = C; otherwise the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC denotes the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point other than the minimum target data point P_M_i_min whose distance to P_M_i_min is less than a fourth preset threshold G_i; if so, the association succeeds, and Ass_M_i_DM = C; otherwise the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM denotes the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM equals C; if so, the association succeeds and Ass_M_i = C; otherwise the association fails and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing P_C_i_min with P_M_i_min: if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane; otherwise, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing.
Further, the vehicle-mounted vision sensor is a camera, and the vehicle-mounted radar is a millimeter wave radar.
Further, the minimum visual data point P_C_i_min is calculated as:

P_C_i_min = argmin_{k = 1, ..., N_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th visual data point in lane i, and N_i is the number of visual data points in lane i;

the minimum target data point P_M_i_min is calculated as:

P_M_i_min = argmin_{k = 1, ..., M_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th target data point in lane i, and M_i is the number of target data points in lane i.
An X-Y coordinate system is established with the center of the front bumper of the vehicle as the origin; the X axis points in the direction of travel of the vehicle, and the Y axis points to the right side of the vehicle.
Further, the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset are screened as follows:
calculating the two adjacent lane lines corresponding to each lane according to the lane line parameters, judging whether each visual data point and each target data point lie between the two adjacent lane lines of each lane, storing the visual data points and the target data points of each lane separately after the lane of each point is confirmed, and generating the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane.
The invention also provides a detection system based on a multi-sensor fusion drivable area, comprising a vehicle-mounted vision sensor for collecting visual data points, a vehicle-mounted radar for collecting target data points, and a data processing module configured to execute the steps of the above detection method based on a multi-sensor fusion drivable area.
The invention also provides a vehicle comprising the detection system based on the multi-sensor fusion drivable area.
Compared with the prior art, the invention has the following advantages:
according to the detection method and system based on the multi-sensor fusion drivable area and the vehicle of the invention, the drivable area is detected in real time using the current mainstream vehicle-mounted vision sensor and vehicle-mounted radar, the maximum passable range of each lane is continuously output, missed detections of a single target detection module are effectively compensated, the safety of the whole vehicle is improved, and an effective basis is provided for the planning and decision-making of an unmanned driving system.
Drawings
FIG. 1 is a flow chart of the detection method based on a multi-sensor fusion drivable area according to the present invention;
FIG. 2 is a schematic structural diagram of the detection system based on a multi-sensor fusion drivable area according to the present invention.
In the figure:
the system comprises a 1-vehicle-mounted vision sensor, a 2-vehicle-mounted radar and a 3-data processing module.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings.
Referring to FIG. 1, this embodiment discloses a detection method based on a multi-sensor fusion drivable area, comprising the following steps:
acquiring, in real time, visual data points collected by a vehicle-mounted vision sensor and target data points collected by a vehicle-mounted radar; wherein the visual data points comprise the types and position coordinates of obstacles including vehicles, pedestrians and curbs, and the target data points comprise the types, positions and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_lane_i_dataset for the corresponding lane; and screening out the target data points in each lane and generating a target data point set Mmw_lane_i_dataset for the corresponding lane; wherein i denotes the current lane, the left lane and the right lane; or i denotes the current lane and the left lane; or i denotes the current lane and the right lane. When there is a lane on each side of the current lane of the vehicle (a left lane and a right lane), the vehicle-mounted vision sensor and the vehicle-mounted radar collect data points for the three lanes; when there is a lane on only one side of the current lane (a left lane or a right lane), they collect data points for the two lanes.
calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set of each lane, wherein the minimum visual data point is the coordinate point in the visual data point set closest to the vehicle, and the minimum target data point is the coordinate point in the target data point set closest to the vehicle;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point other than the minimum visual data point P_C_i_min whose distance to P_C_i_min is less than a first preset threshold P_i; if so, the association succeeds, and Ass_C_i_DC = C; otherwise the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC denotes the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is less than a second preset threshold Q_i; if so, the association succeeds, and Ass_C_i_DM = C; otherwise the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM denotes the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM equals C; if so, the association succeeds and Ass_C_i = C; otherwise the association fails and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is less than a third preset threshold F_i; if so, the association succeeds, and Ass_M_i_DC = C; otherwise the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC denotes the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point other than the minimum target data point P_M_i_min whose distance to P_M_i_min is less than a fourth preset threshold G_i; if so, the association succeeds, and Ass_M_i_DM = C; otherwise the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM denotes the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM equals C; if so, the association succeeds and Ass_M_i = C; otherwise the association fails and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing P_C_i_min with P_M_i_min: if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane; otherwise, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing.
In this embodiment, C = 1 and D = 0; in other embodiments, C = 0 and D = 1; the values of C and D are set according to the actual situation and are not limited thereto.
The drivable area is an area through which the vehicle can pass, with no obstacle inside it.
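By way of illustration, the per-lane arbitration described above can be written as a minimal Python sketch (the helper name select_cutoff is assumed, C = 1 and D = 0 as in this embodiment, and "P_C_i_min is less than P_M_i_min" is interpreted here as being closer to the vehicle origin):

```python
import math
from typing import Optional, Tuple

C, D = 1, 0                  # association success / failure flags of this embodiment
Point = Tuple[float, float]  # (x, y) in the vehicle front-bumper frame

def select_cutoff(ass_c: int, ass_m: int,
                  p_c_min: Optional[Point],
                  p_m_min: Optional[Point]) -> Optional[Point]:
    """Return the drivable-area cut-off point of one lane, or None for no output."""
    if ass_c == C and ass_m == D:
        return p_c_min                    # only the vision point is corroborated
    if ass_c == D and ass_m == C:
        return p_m_min                    # only the radar point is corroborated
    if ass_c == C and ass_m == C:
        # both corroborated: keep whichever point is closer to the vehicle
        dist = lambda p: math.hypot(p[0], p[1])
        return p_c_min if dist(p_c_min) < dist(p_m_min) else p_m_min
    return None                           # both associations failed: no output
```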
Since the minimum visual data point P_C_i_min itself belongs to the visual data point set Camera_lane_i_dataset, and the minimum target data point P_M_i_min itself belongs to the target data point set Mmw_lane_i_dataset, the point itself is excluded during association; this prevents P_C_i_min from being associated with itself within Camera_lane_i_dataset, and P_M_i_min from being associated with itself within Mmw_lane_i_dataset.
In the present embodiment, the first preset threshold P_i, the second preset threshold Q_i, the third preset threshold F_i and the fourth preset threshold G_i are parameters determined according to the accuracy of the sensors.
The minimum visual data point P_C_i_min of each lane is associated with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane, i.e.:

Ass_C_i_DC = Asso_lane_i(P_C_i_min, Camera_lane_i_dataset);
Ass_C_i_DM = Asso_lane_i(P_C_i_min, Mmw_lane_i_dataset).

The minimum target data point P_M_i_min of each lane is associated with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane, i.e.:

Ass_M_i_DC = Asso_lane_i(P_M_i_min, Camera_lane_i_dataset);
Ass_M_i_DM = Asso_lane_i(P_M_i_min, Mmw_lane_i_dataset).
Asso_lane_i() is the association function; it sequentially calculates the distance dis_r between the coordinates of P_C_i_min or P_M_i_min and the data points in the visual data point set Camera_lane_i_dataset or the target data point set Mmw_lane_i_dataset.
Taking the calculation of the distance dis_r between P_C_i_min and the data points in the visual data point set Camera_lane_i_dataset as an example: assuming the coordinates of P_C_i_min are (x_min, y_min), and the coordinates of any data point in Camera_lane_i_dataset are (x_r, y_r), the distance dis_r is calculated as:

dis_r = sqrt((x_r - x_min)^2 + (y_r - y_min)^2), where sqrt denotes the square root.
In this embodiment, the vehicle-mounted vision sensor is a camera, and the vehicle-mounted radar is a millimeter-wave radar.
In the present embodiment, the minimum visual data point P_C_i_min is calculated as:

P_C_i_min = argmin_{k = 1, ..., N_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th visual data point in lane i, and N_i is the number of visual data points collected by the vehicle-mounted vision sensor in lane i; the values of N_i for different lanes may be the same or different.

The minimum target data point P_M_i_min is calculated as:

P_M_i_min = argmin_{k = 1, ..., M_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th target data point in lane i, and M_i is the number of target data points collected by the vehicle-mounted radar in lane i; the values of M_i for different lanes may be the same or different.
An X-Y coordinate system is established with the center of the front bumper of the vehicle as the origin; the X axis points in the direction of travel of the vehicle, and the Y axis points to the right side of the vehicle.
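For illustration, a sketch under the assumption that "closest to the vehicle" means the smallest Euclidean distance to the origin of this X-Y frame (the name min_data_point is hypothetical):

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def min_data_point(lane_dataset: List[Point]) -> Optional[Point]:
    """Return the data point of one lane closest to the vehicle origin
    (the front-bumper centre of the X-Y coordinate system above)."""
    if not lane_dataset:
        return None
    return min(lane_dataset, key=lambda p: math.hypot(p[0], p[1]))

# e.g. P_C_i_min = min_data_point(camera_lane_dataset)   # visual points of lane i
#      P_M_i_min = min_data_point(mmw_lane_dataset)      # radar points of lane i
```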
In this embodiment, the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset are screened as follows:
calculating the two adjacent lane lines corresponding to each lane according to the lane line parameters, judging whether each visual data point and each target data point lie between the two adjacent lane lines of each lane, storing the visual data points and the target data points of each lane separately after the lane of each point is confirmed, and generating the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane.
The coordinates of a data point are denoted (x, y), and the lane line equation is function_lane(x) = A0 + A1*x + A2*x^2 + A3*x^3, where A0, A1, A2 and A3 are the lane line parameters;
for example: if the left lane line function is function_leftlane(x) and the right lane line function is function_rightlane(x), then any data point (x0, y0) lying between the lane lines must satisfy:

y0 - function_leftlane(x0) >= 0;
and y0 - function_rightlane(x0) <= 0.
The data point sets are represented as follows:

Camera_lane_i_dataset = {p1, p2, p3, ...}, which denotes the set of visual data points of the vehicle-mounted vision sensor in lane i;

Mmw_lane_i_dataset = {p1, p2, p3, ...}, which denotes the set of target data points of the vehicle-mounted radar in lane i.
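A minimal sketch of this screening step (the names lane_line, screen_points and the lane_bounds mapping are illustrative assumptions; the coefficients follow the cubic lane line equation above):

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]
Coeffs = Tuple[float, float, float, float]  # (A0, A1, A2, A3)

def lane_line(x: float, a: Coeffs) -> float:
    # function_lane(x) = A0 + A1*x + A2*x^2 + A3*x^3
    return a[0] + a[1] * x + a[2] * x ** 2 + a[3] * x ** 3

def screen_points(points: List[Point],
                  lane_bounds: Dict[str, Tuple[Coeffs, Coeffs]]) -> Dict[str, List[Point]]:
    """Assign each data point to the lane whose left/right lane lines bracket it:
    y0 - function_leftlane(x0) >= 0 and y0 - function_rightlane(x0) <= 0."""
    datasets: Dict[str, List[Point]] = {lane: [] for lane in lane_bounds}
    for (x0, y0) in points:
        for lane, (left, right) in lane_bounds.items():
            if y0 - lane_line(x0, left) >= 0 and y0 - lane_line(x0, right) <= 0:
                datasets[lane].append((x0, y0))
                break                     # each point belongs to at most one lane
    return datasets
```

Applying screen_points separately to the visual data points and the target data points yields Camera_lane_i_dataset and Mmw_lane_i_dataset for each lane i.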
This embodiment also discloses a detection system based on a multi-sensor fusion drivable area, comprising a vehicle-mounted vision sensor 1 for collecting visual data points, a vehicle-mounted radar 2 for collecting target data points, and a data processing module 3 configured to execute the steps of the above detection method based on a multi-sensor fusion drivable area.
This embodiment also discloses a vehicle comprising the above detection system based on a multi-sensor fusion drivable area.
The foregoing has described the preferred embodiments of the invention in detail. It should be understood that numerous modifications and variations can be devised by those skilled in the art in light of the present teachings without departing from the inventive concept. Therefore, technical solutions that those skilled in the art can obtain through logical analysis, reasoning or limited experiments based on the prior art and the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (6)

1. A detection method based on a multi-sensor fusion drivable area, characterized by comprising the following steps:
acquiring, in real time, visual data points collected by a vehicle-mounted vision sensor and target data points collected by a vehicle-mounted radar; wherein the visual data points comprise the types and position coordinates of obstacles including vehicles, pedestrians and curbs, and the target data points comprise the types, positions and speeds of targets including vehicles and pedestrians;
screening out the visual data points in each lane and generating a visual data point set Camera_lane_i_dataset for the corresponding lane; and screening out the target data points in each lane and generating a target data point set Mmw_lane_i_dataset for the corresponding lane; wherein i denotes the current lane, the left lane and the right lane; or i denotes the current lane and the left lane; or i denotes the current lane and the right lane;
calculating the minimum visual data point P_C_i_min in the visual data point set of each lane and the minimum target data point P_M_i_min in the target data point set, wherein the minimum visual data point is the coordinate point in the visual data point set closest to the vehicle, and the minimum target data point is the coordinate point in the target data point set closest to the vehicle;
associating the minimum visual data point P_C_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point other than the minimum visual data point P_C_i_min whose distance to P_C_i_min is less than a first preset threshold P_i; if so, the association succeeds, and Ass_C_i_DC = C; otherwise the association fails, and Ass_C_i_DC = D; wherein Ass_C_i_DC denotes the association result between the minimum visual data point P_C_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point whose distance to the minimum visual data point P_C_i_min of the corresponding lane is less than a second preset threshold Q_i; if so, the association succeeds, and Ass_C_i_DM = C; otherwise the association fails, and Ass_C_i_DM = D; wherein Ass_C_i_DM denotes the association result between the minimum visual data point P_C_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_C_i of P_C_i_min from the values of Ass_C_i_DC and Ass_C_i_DM: judging whether at least one of Ass_C_i_DC and Ass_C_i_DM equals C; if so, the association succeeds and Ass_C_i = C; otherwise the association fails and Ass_C_i = D;
associating the minimum target data point P_M_i_min of each lane with the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane: judging whether, in the visual data point set Camera_lane_i_dataset of each lane, there is at least one visual data point whose distance to the minimum target data point P_M_i_min of the corresponding lane is less than a third preset threshold F_i; if so, the association succeeds, and Ass_M_i_DC = C; otherwise the association fails, and Ass_M_i_DC = D; wherein Ass_M_i_DC denotes the association result between the minimum target data point P_M_i_min of each lane and the visual data point set Camera_lane_i_dataset;
judging whether, in the target data point set Mmw_lane_i_dataset of each lane, there is at least one target data point other than the minimum target data point P_M_i_min whose distance to P_M_i_min is less than a fourth preset threshold G_i; if so, the association succeeds, and Ass_M_i_DM = C; otherwise the association fails, and Ass_M_i_DM = D; wherein Ass_M_i_DM denotes the association result between the minimum target data point P_M_i_min of each lane and the target data point set Mmw_lane_i_dataset;
confirming the final association result Ass_M_i of P_M_i_min from the values of Ass_M_i_DC and Ass_M_i_DM: judging whether at least one of Ass_M_i_DC and Ass_M_i_DM equals C; if so, the association succeeds and Ass_M_i = C; otherwise the association fails and Ass_M_i = D;
comparing Ass_C_i and Ass_M_i:
if Ass_C_i = C and Ass_M_i = D, outputting P_C_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = D and Ass_M_i = C, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = C, comparing P_C_i_min with P_M_i_min: if P_C_i_min is less than P_M_i_min, outputting P_C_i_min as the cut-off point of the drivable area of the lane; otherwise, outputting P_M_i_min as the cut-off point of the drivable area of the lane;
if Ass_C_i = Ass_M_i = D, outputting nothing.
2. The detection method based on a multi-sensor fusion drivable area according to claim 1, characterized in that the vehicle-mounted vision sensor is a camera and the vehicle-mounted radar is a millimeter-wave radar.
3. The detection method based on a multi-sensor fusion drivable area according to claim 1 or 2, characterized in that the minimum visual data point P_C_i_min is calculated as:

P_C_i_min = argmin_{k = 1, ..., N_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th visual data point in lane i, and N_i is the number of visual data points in lane i;

the minimum target data point P_M_i_min is calculated as:

P_M_i_min = argmin_{k = 1, ..., M_i} sqrt((x_k^i)^2 + (y_k^i)^2)

where (x_k^i, y_k^i) denotes the coordinates of the k-th target data point in lane i, and M_i is the number of target data points in lane i;

an X-Y coordinate system is established with the center of the front bumper of the vehicle as the origin; the X axis points in the direction of travel of the vehicle, and the Y axis points to the right side of the vehicle.
4. The detection method based on a multi-sensor fusion drivable area according to claim 3, characterized in that the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset are screened as follows:
calculating the two adjacent lane lines corresponding to each lane according to the lane line parameters, judging whether each visual data point and each target data point lie between the two adjacent lane lines of each lane, storing the visual data points and the target data points of each lane separately after the lane of each point is confirmed, and generating the visual data point set Camera_lane_i_dataset and the target data point set Mmw_lane_i_dataset of the corresponding lane.
5. A detection system based on a multi-sensor fusion drivable area, characterized in that it comprises a vehicle-mounted vision sensor (1) for collecting visual data points, a vehicle-mounted radar (2) for collecting target data points, and a data processing module (3), the data processing module (3) being configured to carry out the steps of the detection method based on a multi-sensor fusion drivable area according to any one of claims 1 to 4.
6. A vehicle, characterized by comprising the detection system based on a multi-sensor fusion drivable area according to claim 5.
CN202110876811.8A 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle Active CN113432615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876811.8A CN113432615B (en) 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle


Publications (2)

Publication Number Publication Date
CN113432615A true CN113432615A (en) 2021-09-24
CN113432615B CN113432615B (en) 2024-02-13

Family

ID=77762749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876811.8A Active CN113432615B (en) 2021-07-31 2021-07-31 Detection method and system based on multi-sensor fusion drivable area and vehicle

Country Status (1)

Country Link
CN (1) CN113432615B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
US20200156631A1 (en) * 2018-11-15 2020-05-21 Automotive Research & Testing Center Method for planning a trajectory for a self-driving vehicle
KR101998298B1 (en) * 2018-12-14 2019-07-09 위고코리아 주식회사 Vehicle Autonomous Driving Method Using Camera and LiDAR Sensor
WO2020135772A1 (en) * 2018-12-29 2020-07-02 长城汽车股份有限公司 Generation method and generation system for dynamic target line during automatic driving of vehicle, and vehicle
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision
CN110949395A (en) * 2019-11-15 2020-04-03 江苏大学 Curve ACC target vehicle identification method based on multi-sensor fusion

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113886634A (en) * 2021-09-30 2022-01-04 重庆长安汽车股份有限公司 Lane line offline data visualization method and device
CN113886634B (en) * 2021-09-30 2024-04-12 重庆长安汽车股份有限公司 Lane line offline data visualization method and device
CN114354209A (en) * 2021-12-07 2022-04-15 重庆长安汽车股份有限公司 Automatic driving lane line and target combined simulation method and system

Also Published As

Publication number Publication date
CN113432615B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition of automatic driving
US10528055B2 (en) Road sign recognition
CN110764108B (en) Obstacle detection method and device for port automatic driving scene
CN110487288B (en) Road estimation method and road estimation system
Aycard et al. Intersection safety using lidar and stereo vision sensors
CN107862287A (en) A kind of front zonule object identification and vehicle early warning method
CN111382768A (en) Multi-sensor data fusion method and device
DE102014114827A1 (en) Path planning for evasive steering maneuvers in the presence of a target vehicle and surrounding objects
CN103455144A (en) Vehicle-mounted man-machine interaction system and method
JP6313081B2 (en) In-vehicle image processing apparatus and vehicle system using the same
Kim et al. Probabilistic threat assessment with environment description and rule-based multi-traffic prediction for integrated risk management system
Kim et al. Design of integrated risk management-based dynamic driving control of automated vehicles
CN110807412B (en) Vehicle laser positioning method, vehicle-mounted equipment and storage medium
CN109871787A (en) A kind of obstacle detection method and device
CN105684039B (en) Condition analysis for driver assistance systems
CN113432615B (en) Detection method and system based on multi-sensor fusion drivable area and vehicle
CN114442101A (en) Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
US11403951B2 (en) Driving assistance for a motor vehicle when approaching a tollgate
CN114537374A (en) Vehicle front anti-collision system based on travelable area
US11087147B2 (en) Vehicle lane mapping
CN116872921A (en) Method and system for avoiding risks of vehicle, vehicle and storage medium
CN114537447A (en) Safe passing method and device, electronic equipment and storage medium
CN114084129A (en) Fusion-based vehicle automatic driving control method and system
CN115223131A (en) Adaptive cruise following target vehicle detection method and device and automobile
CN117227714A (en) Control method and system for turning avoidance of automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant