CN115222779B - Vehicle cut-in detection method and device and storage medium

Vehicle cut-in detection method and device and storage medium

Info

Publication number: CN115222779B
Application number: CN202111094997.8A
Authority: CN (China)
Other versions: CN115222779A
Prior art keywords: vehicle, target vehicle, frame, frame number, detection
Inventors: 祁玉晓, 王振男, 蔡璐珑, 何俏君, 李梓龙
Assignee: Guangzhou Automobile Group Co Ltd
Legal status: Active (granted)

Classifications

    • G06T7/292 Image analysis; analysis of motion; multi-camera tracking
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/10016 Video; image sequence
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/30252 Vehicle exterior; vicinity of vehicle
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a vehicle cut-in detection method, a device and a storage medium. The method comprises: performing real-time tracking detection on a target vehicle entering the area surrounding the host vehicle; when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, determining whether a first frame number, for which the target vehicle has continuously been in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle has continuously tended to cut into the host vehicle's lane, is greater than a second frame number threshold, where both thresholds are adjusted according to the lateral speed of the target vehicle; and, if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold, determining that the target vehicle cuts into the host vehicle's lane. By making the decision thresholds adaptive, the invention reduces the influence of host-vehicle motion noise and thereby improves the accuracy of the vehicle cut-in detection result.

Description

Vehicle cut-in detection method and device and storage medium
Technical Field
The present invention relates to the field of vehicle control technologies, and in particular, to a vehicle cut-in detection method, device, and storage medium.
Background
While a vehicle is driving, other vehicles may suddenly cut in from an adjacent lane into the lane the vehicle occupies. In conventional manual driving, the driver judges the cut-in tendency of other vehicles and performs the corresponding control operations, reducing safety accidents. In automatic driving, no driver is available to make this judgment, so the automatic driving system must detect and judge the cut-in behaviour of other vehicles and execute the corresponding operations according to the detection result.
In the prior art, a vehicle cut-in detection method obtains the curvature radius of a fixed-curvature travel path of the host vehicle from the host vehicle's motion information, uses that curvature radius to obtain the motion information of other vehicles relative to the fixed-curvature path, and then determines from this relative motion information whether another vehicle has a cut-in tendency. Because the judgment basis is relative motion information, it is easily affected by host-vehicle motion noise (for example, a change in the host vehicle's motion changes the relative motion information), so the relative motion information contains errors and the vehicle cut-in detection result is inaccurate.
Disclosure of Invention
The invention provides a vehicle cut-in detection method, device and storage medium, to solve the problem that the existing vehicle cut-in detection method is easily affected by host-vehicle motion noise, so that the detection result is not accurate enough.
A vehicle cut-in detection method is provided, including:
performing real-time tracking detection on a target vehicle entering the area surrounding the host vehicle;
when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, determining whether a first frame number, for which the target vehicle has continuously been in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle has continuously tended to cut into the host vehicle's lane, is greater than a second frame number threshold, where the first and second frame number thresholds are adjusted according to the lateral speed of the target vehicle;
if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold, determining that the target vehicle cuts into the host vehicle's lane.
A vehicle cut-in detection device is provided, including:
a detection module, configured to perform real-time tracking detection on a target vehicle entering the area surrounding the host vehicle;
a first determining module, configured to determine, when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, whether a first frame number, for which the target vehicle has continuously been in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle has continuously tended to cut into the host vehicle's lane, is greater than a second frame number threshold, where the first and second frame number thresholds are adjusted according to the lateral speed of the target vehicle; and
a second determining module, configured to determine that the target vehicle cuts into the host vehicle's lane if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold.
A vehicle cut-in detection apparatus is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the vehicle cut-in detection method described above when executing the computer program.
A readable storage medium is provided, storing a computer program which, when executed by a processor, performs the steps of the vehicle cut-in detection method described above.
The vehicle cut-in detection method, device, computer equipment and storage medium perform real-time tracking detection on a target vehicle entering the area surrounding the host vehicle; when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, they determine whether the first frame number is greater than the first frame number threshold and whether the second frame number is greater than the second frame number threshold, both thresholds being adjusted according to the lateral speed of the target vehicle; and if both comparisons hold, they determine that the target vehicle cuts into the host vehicle's lane. Because the frame number thresholds are adjusted according to the lateral speed of the target vehicle, the decision thresholds are adaptive: the cut-in tendency judgment flexibly follows the actual situation of the target vehicle, the influence of host-vehicle motion noise is reduced, and the accuracy of the vehicle cut-in detection algorithm and of the detection result is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for detecting a cut-in of a vehicle according to an embodiment of the invention;
FIG. 2 is a schematic diagram of different coordinate systems in a top view of the vehicle according to an embodiment of the invention;
FIG. 3 is a schematic diagram of different coordinate systems of a side view of a vehicle according to an embodiment of the invention;
FIG. 4 is a schematic diagram of different coordinate systems in front view of the vehicle according to an embodiment of the invention;
FIG. 5 is a schematic diagram of lane lines and pre-warning areas according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a vehicle cut-in detecting device according to an embodiment of the invention;
FIG. 7 is a schematic diagram of another embodiment of a vehicle cut-in detection device.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
The vehicle cut-in detection method provided by the embodiments of the invention can be applied to a vehicle cut-in detection system comprising a host vehicle, one or more target vehicles and a vehicle cut-in detection device. The host vehicle and the vehicle cut-in detection device communicate through a vehicle bus. Through sensors, the vehicle cut-in detection device tracks and detects in real time a target vehicle entering the area surrounding the host vehicle; when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, it determines whether a first frame number, for which the target vehicle has continuously been in the early warning area, is greater than a first frame number threshold, and whether a second frame number, for which the target vehicle has continuously tended to cut into the host vehicle's lane, is greater than a second frame number threshold, both thresholds being adjusted according to the lateral speed of the target vehicle; and if both comparisons hold, it determines that the target vehicle cuts into the host vehicle's lane. By adjusting the frame number thresholds according to the lateral speed of the target vehicle, the decision thresholds become adaptive, the cut-in tendency judgment flexibly follows the actual situation of the target vehicle, the influence of host-vehicle motion noise is reduced, and the accuracy of the cut-in detection algorithm and of the detection result is improved.
The vehicle cut-in detection system described above, with a host vehicle, a target vehicle and a vehicle cut-in detection device, is for illustration only; in other embodiments the system may further include other devices, which are not repeated here.
In one embodiment, as shown in FIG. 1, a vehicle cut-in detection method is provided. The method is applied to the vehicle cut-in detection device described above and includes the following steps:
s10: and carrying out real-time tracking detection on the target vehicles entering the surrounding area of the vehicle.
In this embodiment, a sensor is mounted on the body of the host vehicle. During the running process of the host vehicle, the sensor on the host vehicle can detect other vehicles around the host vehicle to determine whether the other vehicles enter the surrounding area of the host vehicle, if the other vehicles enter the surrounding area of the host vehicle, the other vehicles entering the surrounding area of the host vehicle are used as target vehicles, and the sensor carries out real-time tracking detection on the target vehicles at a certain detection frame rate, so that detection data of the target vehicles of each frame are obtained. If other vehicles do not enter the surrounding area of the vehicle, the vehicles are not tracked and detected, so that the data processing amount is reduced, the load of the vehicle cut-in detection device is reduced, the calculation efficiency is improved, the noise influence of the vehicle cut-in detection algorithm caused by the vehicle detection data at a far distance can be reduced to a certain extent, and the accuracy of the vehicle cut-in detection algorithm is ensured. And carrying out accumulated record on each frame of detection data obtained by tracking the target vehicle by the sensor, so as to obtain the detection data of the multi-frame target vehicle.
The surrounding area of the host vehicle may be a three-dimensional area determined by the host vehicle coordinate system. Wherein, as shown in fig. 2, 3 and 4, the origin o of the coordinates of the own vehicle coordinate system (own vehicle coordinate system) v The vehicle head mark is the center of a vehicle head mark of the vehicle, the direction of the vehicle head is taken as the positive direction of an x axis, the left direction of the vehicle body (one side of a driver) of the vehicle is taken as the positive direction of a y axis, and the upward direction of a roof vertical to the vehicle is taken as the positive direction of a z axis; the x-axis coordinate, the y-axis coordinate and the z-axis coordinate of the target vehicle under the own vehicle coordinate system are respectively x v 、y v 、z v . The surrounding area of the host vehicle can be set according to the host vehicle coordinate system, and the surrounding area of the host vehicle is (x) vmax ,x vmin ,y vmax ,y vmin ,z vmax ,zv min ) For example, the host vehicle surrounding area may be (100, -50, 51, -51,3.8, -0.2).
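As a minimal illustration of the surrounding-area test just described, the following sketch reuses the example bounds from this section; the constant and function names are assumptions for illustration, not part of the patent:

    # Surrounding area (x_v_max, x_v_min, y_v_max, y_v_min, z_v_max, z_v_min),
    # using the example values from the text.
    SURROUNDING_AREA = (100.0, -50.0, 51.0, -51.0, 3.8, -0.2)

    def in_surrounding_area(x_v: float, y_v: float, z_v: float,
                            area: tuple = SURROUNDING_AREA) -> bool:
        """True if the target's coordinates in the host vehicle frame lie inside the box."""
        x_max, x_min, y_max, y_min, z_max, z_min = area
        return (x_min <= x_v <= x_max
                and y_min <= y_v <= y_max
                and z_min <= z_v <= z_max)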
S20: Determine whether the target vehicle is continuously located in the early warning area, and whether it has a continuous tendency to cut into the host vehicle's lane.
After real-time tracking detection of the target vehicle entering the area surrounding the host vehicle begins, the detection data are used to determine whether the target vehicle is continuously located in the early warning area of the host vehicle and whether it has a continuous tendency to cut into the host vehicle's lane.
While the host vehicle is driving, the vehicle cut-in detection device tracks the target vehicle in real time to obtain multi-frame detection data, and judges the target vehicle's coordinate position and cut-in tendency from the received frames. For each received frame, the coordinate position of the target vehicle is checked to determine whether it lies in the early warning area of the host vehicle: if consecutive frames all place the target vehicle in the early warning area, the target vehicle is continuously located in the early warning area; otherwise, if the frames placing it in the early warning area are not consecutive, it is not. Likewise, for each received frame, the cut-in tendency of the target vehicle is checked to determine whether it is continuously approaching the host vehicle's lane: if consecutive frames all show the target vehicle approaching the host vehicle's lane, the target vehicle has a continuous tendency to cut into that lane; otherwise, if the frames showing it approaching are not consecutive, it does not.
The early warning area of the host vehicle is a preset region around the host vehicle. When the target vehicle is within this area, it may cut into the host vehicle's lane, so the host vehicle needs to raise a cut-in warning for the target vehicle.
S30: if the target vehicle is continuously located in the early warning area of the vehicle and the target vehicle has a tendency of continuously cutting into the lane where the vehicle is located, determining a first frame number of the target vehicle continuously located in the early warning area and determining a second frame number of the target vehicle having a tendency of continuously cutting into the lane where the vehicle is located.
After determining whether the target vehicle is continuously located in the early warning area of the host vehicle and determining whether the target vehicle has a tendency to continuously cut into the lane where the host vehicle is located, if the target vehicle is continuously located in the early warning area of the host vehicle and the target vehicle has a tendency to continuously cut into the lane where the host vehicle is located, determining a first frame number of the target vehicle continuously located in the early warning area and determining a second frame number of the target vehicle having a tendency to continuously cut into the lane where the host vehicle is located.
And determining the number of continuous frames taking the latest frame as an end frame as a first frame number, wherein the target vehicle is in an early warning area of the vehicle in the multi-frame detection data. The method comprises the steps of firstly determining detection data of a target vehicle in an early warning area of the vehicle in multi-frame detection data, determining whether continuous frames taking the latest frame as an end frame exist in the multi-frame detection data meeting the condition, and taking the frame number of the continuous frames as a second frame number if the continuous frames exist.
For example, after the sensor performs real-time tracking detection on the target vehicle, N frame detection data are obtained, and according to the latest frame detection data of the N frame detection data, it is determined that the target vehicle is in the early warning area of the vehicle, among the N frame detection data, the detection data of the target vehicle in the early warning area of the vehicle are taken as the first target data, 10 frames of first target data are shared in the N frame detection data, and are respectively 1 st frame, 2 nd frame, 3 rd frame, 4 th frame, 7 th frame, 8 th frame, N-3 rd frame, N-2 nd frame, N-1 st frame and N-frame, wherein N-3 is not equal to 9, and if the N-frame is the latest frame, among the 10 frames of first target data, the N-3 th frame, the N-2 nd frame, the N-1 th frame and the N-th frame are consecutive frames taking the latest frame as the end frames, the first frame is determined to be 4.
In this embodiment, 10 frames of first target data are shared in the N frames of detection data, the N-3 frame, the N-2 frame, the N-1 frame and the N frame are continuous detection data frames with the latest frame as an end frame, the first frame number is 4, which are all exemplified, and in other embodiments, the number of first target data, the continuous detection data frames and the first frame number may be other ones which are not described herein.
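The first frame number is thus the length of the run of consecutive in-area frames ending at the latest frame. A minimal sketch of that count, reproducing the example above under an assumed N = 14 (the helper name is illustrative):

    from typing import Sequence

    def trailing_run_length(flags: Sequence[bool]) -> int:
        """Length of the run of consecutive True values ending at the latest frame."""
        count = 0
        for flag in reversed(flags):
            if not flag:
                break
            count += 1
        return count

    # Example from the text: first target data at frames 1, 2, 3, 4, 7, 8,
    # N-3, N-2, N-1 and N (1-indexed), with N-3 != 9; here N = 14 is assumed.
    N = 14
    first_target = {1, 2, 3, 4, 7, 8, N - 3, N - 2, N - 1, N}
    in_warning_area = [frame in first_target for frame in range(1, N + 1)]
    assert trailing_run_length(in_warning_area) == 4  # frames N-3 .. N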
The second frame number is the number of consecutive frames, ending at the latest frame, in which the target vehicle shows a tendency to cut into the host vehicle's lane. Specifically, among the multi-frame detection data, the frames showing that tendency are identified first; it is then determined whether these frames contain a consecutive run ending at the latest frame, and if so, the length of that run is taken as the second frame number.
Whether the target vehicle tends to cut into the host vehicle's lane can be judged in various ways. For example, if the multi-frame detection data show that the relative distance between the geometric center of the target vehicle and the center line of the host vehicle's lane is decreasing, the target vehicle is tending to cut into that lane; in that case the number of consecutive frames in which the target vehicle approaches the lane center line is determined as the second frame number. Using the target vehicle's continuous approach to the host lane's center line as the index reduces misjudgments caused by changes in host-vehicle control and thus reduces the influence of host-vehicle motion noise.
For example, suppose that among the N frames of detection data, 6 frames satisfy this second preset condition: frames 2, 3, 6, 7, N-1 and N, where N-1 is not equal to 8. Since frame N is the latest frame, frames N-1 and N form the consecutive run ending at the latest frame, so the second frame number is determined to be 2.
In this embodiment, the 6 frames satisfying the second preset condition, the particular frames 2, 3, 6, 7, N-1 and N, and the second frame number of 2 are all merely examples; in other embodiments these values may differ, which is not repeated here.
Likewise, judging the cut-in tendency through a decreasing relative distance between the target vehicle's geometric center and the host lane's center line is merely an example; in other embodiments the tendency may be judged in other ways, which are not repeated here.
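Under the distance-based criterion above, the per-frame cut-in-tendency flag can be sketched as follows, assuming a history of distances between the target's geometric center and the host lane's center line is available (names are illustrative):

    from typing import List, Sequence

    def cut_in_tendency_flags(center_line_dist: Sequence[float]) -> List[bool]:
        """Flag each frame in which the target moved closer to the host lane's
        center line than in the previous frame (the cut-in-tendency criterion)."""
        flags = [False]  # the first frame has no predecessor to compare against
        for prev, curr in zip(center_line_dist, center_line_dist[1:]):
            flags.append(curr < prev)
        return flags

    # e.g. distances 3.0, 2.6, 2.4, 2.5, 2.2 -> [False, True, True, False, True]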
S40: it is determined whether the first frame number is greater than a first frame number threshold and whether the second frame number is greater than a second frame number threshold.
After determining the first frame number and the second frame number, determining whether the first frame number is greater than a first frame number threshold and determining whether the second frame number is greater than a second frame number threshold. The first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle.
In this embodiment, the detection data of each frame includes information such as the length, width and height of the target vehicle, heading angle, longitudinal speed, transverse speed, and speed variance. After the detection data of the multi-frame target vehicle is obtained, a first frame number threshold value and a second frame number threshold value are determined according to the transverse speed of the target vehicle in the latest frame detection data.
S50: if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into the lane where the vehicle is located.
After determining whether the first frame number is greater than a first frame number threshold value and determining whether the second frame number is greater than a second frame number threshold value, if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into the lane where the vehicle is located, indicating that the target vehicle is continuously located in the early warning area, and determining that the target vehicle has a tendency to cut into the lane where the vehicle is located, and if the target vehicle cuts into the lane where the vehicle is located. The two cut-in judging indexes judge the cut-in trend of the target vehicle by judging whether the target vehicle is continuously in the early warning area or not and whether the target vehicle is continuously close to the central line of the lane where the vehicle is located, so that the accuracy of a cut-in judging result is ensured, meanwhile, the self-adaptive adjusting frame number threshold value is adopted, the frame number threshold value conforming to the condition of the self is determined according to the actual transverse speed of the target vehicle, and compared with the traditional fixed threshold value, the method has better adaptability, the accuracy of a vehicle cut-in detecting algorithm is improved, and the accuracy of the vehicle cut-in detecting result is improved.
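A minimal sketch of this final decision (function and parameter names are illustrative, not from the patent; the counts and thresholds come from the steps above):

    def is_cut_in(first_frame_number: int, second_frame_number: int,
                  first_threshold: int, second_threshold: int) -> bool:
        """Steps S40/S50: both adaptive conditions must hold simultaneously."""
        return (first_frame_number > first_threshold
                and second_frame_number > second_threshold)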
In this embodiment, a target vehicle entering the area surrounding the host vehicle is tracked and detected in real time; when the target vehicle is continuously located in the early warning area of the host vehicle and has a continuous tendency to cut into the host vehicle's lane, it is determined whether the first frame number is greater than the first frame number threshold and whether the second frame number is greater than the second frame number threshold, both thresholds being adjusted according to the lateral speed of the target vehicle; and if both comparisons hold, the target vehicle is determined to cut into the host vehicle's lane. By adjusting the frame number thresholds according to the lateral speed of the target vehicle, the decision thresholds become adaptive, the cut-in tendency judgment flexibly follows the actual situation of the target vehicle, the influence of host-vehicle motion noise is reduced, and the accuracy of the cut-in detection algorithm and of the detection result is improved.
In one embodiment, the first frame number threshold and the second frame number threshold used in step S50 are determined as follows:
S51: Obtain the detection data from the real-time tracking detection of the target vehicle.
After the real-time tracking detection of the target vehicle entering the area surrounding the host vehicle, the detection data must be obtained so that the two thresholds can be determined from the lateral speed of the target vehicle in the latest frame of detection data.
S52: Determine the lateral speed of the target vehicle in the latest frame of detection data, and determine the detection frame rate of the real-time tracking detection.
After the multi-frame detection data of the target vehicle are obtained, the lateral speed of the target vehicle in the latest frame and the detection frame rate of the sensor must be determined.
S53: Determine a preset lateral speed threshold, a first time threshold and a second time threshold.
At the same time, a preset lateral speed threshold, first time threshold and second time threshold must be determined. The first time threshold is the time for which the target vehicle must continuously remain in the early warning area; the second time threshold is the time for which the target vehicle must continuously approach the host vehicle's lane, for example the time for which the relative distance between the geometric center of the target vehicle and the center line of the host vehicle's lane must keep decreasing.
Specifically, the first time threshold is the time threshold for the target vehicle to remain continuously in the early warning area at a preset vehicle speed, and may be 3 s. The second time threshold is the time threshold for the target vehicle to continuously approach the center line of the host vehicle's lane at a preset vehicle speed (a pre-calibrated standard vehicle speed), i.e. the time threshold for the relative distance between the target vehicle's geometric center and the lane center line to keep decreasing, and may also be 3 s. The lateral speed threshold is a preset lateral speed (a pre-calibrated standard lateral speed), and may be 0.6 m/s.
In this embodiment, the first time threshold of 3 s, the second time threshold of 3 s and the lateral speed threshold of 0.6 m/s are merely examples; in other embodiments they may take other values, which are not repeated here.
S54: and calculating to obtain a first frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle.
The first frame number threshold may be calculated by the following formula:

F_th_1 = f * th_1_std * (v_lateral_std / v_lateral)

where F_th_1 is the first frame number threshold, f is the detection frame rate of the sensor for the target vehicle, th_1_std is the first time threshold, v_lateral_std is the lateral speed threshold, and v_lateral is the lateral speed of the target vehicle.
S55: and calculating to obtain a second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
The second frame number threshold may be calculated by the following formula:

F_th_2 = f * th_2_std * (v_lateral_std / v_lateral)

where F_th_2 is the second frame number threshold, f is the detection frame rate of the sensor for the target vehicle, th_2_std is the second time threshold, v_lateral_std is the lateral speed threshold, and v_lateral is the lateral speed of the target vehicle.
In an embodiment, to ensure that the subsequent frame number comparisons are well defined, the first frame number threshold and the second frame number threshold must be positive integers; in this embodiment both thresholds are therefore rounded to integers.
The first frame number threshold is then calculated by:

F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral))

and the second frame number threshold by:

F_th_2 = round(f * th_2_std * (v_lateral_std / v_lateral))

where round denotes rounding to the nearest integer, and f, th_1_std, th_2_std, v_lateral_std and v_lateral are as defined above.
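Combining the two rounded formulas, the adaptive thresholds can be sketched as below. The 3 s time thresholds and the 0.6 m/s lateral speed threshold come from this description, while the 20 Hz frame rate and the function name are assumptions for illustration:

    def frame_number_thresholds(v_lateral: float,
                                f: float = 20.0,            # detection frame rate [Hz]; assumed example
                                th_1_std: float = 3.0,      # first time threshold [s]
                                th_2_std: float = 3.0,      # second time threshold [s]
                                v_lateral_std: float = 0.6  # lateral speed threshold [m/s]
                                ) -> "tuple[int, int]":
        """F_th_1 and F_th_2 per the rounded formulas above.
        v_lateral is assumed to be a positive speed magnitude."""
        scale = v_lateral_std / v_lateral
        return round(f * th_1_std * scale), round(f * th_2_std * scale)

    # e.g. at v_lateral = 1.2 m/s: round(20 * 3 * 0.5) = 30 frames for both thresholds;
    # a faster lateral approach therefore shortens the required confirmation time.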
In this embodiment, the detection data of the real-time tracking detection are obtained; the lateral speed of the target vehicle in the latest frame and the detection frame rate are determined; the preset lateral speed threshold, first time threshold and second time threshold are determined; the first frame number threshold is then calculated from the lateral speed, the detection frame rate, the lateral speed threshold and the first time threshold, and the second frame number threshold from the lateral speed, the detection frame rate, the lateral speed threshold and the second time threshold. This specifies how the two frame number thresholds are determined and provides an accurate basis for the subsequent vehicle cut-in judgment using adaptive thresholds.
In one embodiment, after the real-time tracking detection of the target vehicle entering the preset area surrounding the host vehicle, obtaining the detection data in step S51 specifically includes the following steps:
S511: Obtain first tracking data from the first sensor's real-time tracking detection of the target vehicle.
In this embodiment, two different types of sensors with the same detection frame rate are mounted on the body of the host vehicle: a first sensor and a second sensor.
While the host vehicle is driving, the first tracking data from the first sensor's real-time tracking detection of the target vehicle must be obtained. The first sensor tracks the target vehicle in real time to obtain tracking data in the first sensor coordinate system (e.g. a camera coordinate system); each obtained frame of tracking data is then converted into the host vehicle coordinate system, yielding the first tracking data of the target vehicle in that system.
The first tracking data include the coordinate information of the target vehicle in the host vehicle coordinate system, as well as attribute information such as the length, width and height of the target vehicle, its tracking ID, heading angle, longitudinal speed, lateral speed and speed variance.
Attributes such as the length, width and height and the tracking ID are independent of the coordinate system and unchanged between systems. Moreover, by the definitions of the first sensor coordinate system and the host vehicle coordinate system, the corresponding coordinate axes of the two systems point in the same directions and only the origins differ, so the heading angle, longitudinal speed, lateral speed, speed variance and similar attributes of the target are also unchanged between the two systems. Converting each frame of tracking data into the host vehicle coordinate system therefore mainly means converting the coordinate information of the target vehicle from the first sensor coordinate system into the host vehicle coordinate system.
As shown in FIGS. 2 to 4, taking the first sensor as a smart camera as an example, the camera coordinate system takes the center of the camera's installation position on the host vehicle as its coordinate origin o_c, with the heading direction of the vehicle as the positive x axis, the left side of the vehicle body as the positive y axis, and the direction perpendicular to the roof, pointing upward, as the positive z axis. The x-, y- and z-axis coordinates of the target vehicle in the camera coordinate system are x_c, y_c and z_c, i.e. the coordinate information of the target vehicle in the camera coordinate system is (x_c, y_c, z_c). It is converted into the coordinate information (x_v, y_v, z_v) in the host vehicle coordinate system by the conversion formula:

[x_v, y_v, z_v]^T = R_v * [x_c, y_c, z_c]^T + T_v

where x_v, y_v and z_v are the x-, y- and z-axis coordinates of the target vehicle in the host vehicle coordinate system; R_v is a 3x3 rotation matrix; T_v is a 3x1 translation matrix; and R_v and T_v are pre-calibrated, and can be calibrated from the relative position between the first sensor (e.g. the smart camera) and the coordinate origin of the host vehicle coordinate system (e.g. the center of the host vehicle's head logo).
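A minimal numeric sketch of this rigid-body conversion; the R_v and T_v values below are illustrative placeholders, not real calibration results:

    import numpy as np

    R_v = np.eye(3)                          # 3x3 rotation; identity if the axes align
    T_v = np.array([[-2.0], [0.0], [1.4]])   # 3x1 translation; placeholder calibration

    def camera_to_vehicle(p_c: np.ndarray) -> np.ndarray:
        """Convert (x_c, y_c, z_c) in the camera frame to (x_v, y_v, z_v)."""
        return R_v @ p_c.reshape(3, 1) + T_v

    # e.g. a target 10 m ahead of the camera:
    p_v = camera_to_vehicle(np.array([10.0, 0.0, 0.0]))  # -> [[8.0], [0.0], [1.4]]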
S512: and acquiring second tracking data of the second sensor for carrying out real-time tracking detection on the target vehicle.
In the running process of the vehicle, second tracking data of the target vehicle, which is tracked and detected in real time by the second sensor, need to be acquired. The second sensor performs real-time tracking detection on the target vehicle to obtain tracking data of the target vehicle under a second sensor coordinate system (such as a laser radar coordinate system), and then converts each obtained frame of tracking data into the own vehicle coordinate system to obtain second tracking data of the target vehicle under the own vehicle coordinate system.
The second tracking data comprise coordinate information of the target vehicle under the own vehicle coordinate system, and further comprise attribute information such as the length, width and height of the target vehicle, tracking ID (identity), course angle, longitudinal speed, transverse speed, speed variance and the like of the target vehicle.
As shown in fig. 2 to 4, taking the second sensor as a laser radar as an example, the laser radar coordinate system takes the center of the installation position of the laser radar on the host vehicle as the origin o of coordinates i The direction of the head of the vehicle is taken as the positive direction of the x axis, the left side direction of the body of the vehicle is taken as the positive direction of the y axis, and the upward direction of the roof vertical to the vehicle is taken as the positive direction of the z axis. The x-axis coordinate, the y-axis coordinate and the z-axis coordinate of the target vehicle under the laser radar coordinate system are respectively x i 、y i 、z i That is, the coordinate information of the target vehicle in the lidar coordinate system is (x) i ,y i ,z i ) The coordinate information of the target vehicle under the laser radar coordinate system is converted into the coordinate information (x v ,y v ,z v ) The conversion formula is:
wherein x is v 、y v 、z v Respectively an x-axis coordinate, a y-axis coordinate and a z-axis coordinate of the target vehicle under the own vehicle coordinate system; r is R v R is a rotation matrix v Is a 3*3 matrix; t (T) v For translating the matrix, T v Is a 3*1 matrix; r is R v And R is v The calibration matrix is a pre-calibrated matrix, and can be calibrated according to the relative position between the position of the second sensor (such as a laser radar) and the coordinate origin position (such as the center of the vehicle head logo) in the vehicle coordinate system.
S513: and carrying out matching fusion on the first tracking data and the corresponding second tracking data of each frame based on the Hungary matching algorithm and the Kalman filtering algorithm so as to obtain multi-frame detection data.
When the first sensor and the second sensor track and detect the target vehicle in real time, first tracking data and second tracking data of one frame are obtained respectively, namely the first tracking data and the second tracking data of each frame are sent to the vehicle cut-in detection device so that the vehicle cut-in detection device can carry out matching fusion on the first tracking data of each frame, the corresponding second tracking data and fusion data of the previous frame based on a Hungary matching algorithm and a Kalman filtering algorithm, and therefore detection data of each frame are obtained, and more accurate multi-frame detection data are obtained.
In an embodiment, the first sensor may be a smart camera and the second sensor may be a lidar. In order to facilitate data acquisition, the camera may be installed at a central position of a boundary between a front windshield and a roof of the host vehicle, and the laser radar may be installed at a central position of the roof of the host vehicle, as shown in fig. 2 to 4. The intelligent camera and the laser radar comprise detection tracking algorithms, in the running process of the vehicle, the intelligent camera and the laser radar can directly acquire image information and point cloud information of the target vehicle, track and detect the target vehicle through the self-contained detection tracking algorithm, and then directly output coordinate information, length, width and height of the target vehicle, and attribute information such as heading angle, longitudinal speed, transverse speed, speed variance and the like of the target vehicle.
Taking the first sensor as an intelligent camera and the second sensor as a laser radar as an example, the first tracking data and the second tracking data both comprise tracking IDs of the target vehicles. The vehicle cut-in detection device is used for matching the target vehicle by adopting a Hungary matching algorithm based on the tracking ID of the target vehicle so as to determine first tracking data and second tracking data corresponding to the fused data in the previous frame in the current frame. Because the laser radar is more accurate in position and size detection and more accurate and visual in data, the first frame second tracking data acquired by the laser radar is used as first frame detection data, and then a Kalman filtering algorithm is adopted to perform data fusion (including prediction and updating) on the next frame first tracking data and the second tracking data corresponding to the next frame first tracking data, wherein the method specifically comprises the following steps: predicting the first frame detection data by adopting a Kalman filtering algorithm to obtain prediction data corresponding to the second frame; and based on the tracking ID of the target vehicle and a Hungary matching algorithm, matching the first tracking data of the second frame and the corresponding second tracking data with the predicted data of the second frame respectively, after matching, adopting a Kalman filtering algorithm to fuse the first tracking data corresponding to the matched target vehicle, the second tracking data corresponding to the first tracking data and the predicted data corresponding to the second frame so as to update the predicted data of the second frame, thereby obtaining the second frame detection data, and the like, and completing matching and fusing of the first tracking data, the corresponding second tracking data and the previous frame detection data of each frame, thereby obtaining more accurate multi-frame detection data.
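A highly simplified sketch of the ID-based association step only; the cost construction here is an assumption for illustration (a real system would also fold in position distances), and the Kalman predict/update stage is omitted:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_by_tracking_id(fused_ids, camera_ids):
        """Hungarian assignment with zero cost for equal tracking IDs, unit cost
        otherwise; returns index pairs (fused, camera) whose IDs agree."""
        cost = np.array([[0.0 if f == c else 1.0 for c in camera_ids]
                         for f in fused_ids])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] == 0.0]

    # e.g. match_by_tracking_id([7, 9, 12], [9, 12, 31]) -> [(1, 0), (2, 1)]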
In this embodiment, using a smart camera as the first sensor, a lidar as the second sensor, and the mounting positions described above are merely examples; in other embodiments, the first and second sensors may be two other different types of sensors, and they may be mounted at other positions convenient for data acquisition, which are not repeated here.
Matching the target vehicle with the Hungarian matching algorithm ensures the consistency of the target vehicle data, and the Kalman filtering algorithm denoises the tracking data, ensuring the accuracy of the resulting detection data. Fusing the tracking data of two different types of sensors avoids the missed targets and low accuracy that environment-sensitive detection based on an on-board camera or another single sensor can cause in traditional cut-in detection algorithms, effectively improving the accuracy of the detection data and thus of the subsequent cut-in detection result.
In this embodiment, first tracking data from the first sensor and second tracking data from the second sensor (two different types of sensors) are obtained, and each frame of first tracking data is matched and fused with the corresponding second tracking data based on the Hungarian matching algorithm and the Kalman filtering algorithm to obtain multi-frame detection data. This specifies how the real-time tracking detection of the target vehicle entering the area surrounding the host vehicle produces multi-frame detection data, ensures the accuracy of those data, and provides an accurate data basis for the subsequent calculations.
In one embodiment, after performing real-time tracking detection on the target vehicle entering the surrounding area of the host vehicle to obtain detection data of multiple frames of target vehicles, in step S30, the first frame number of the target vehicle continuously located in the early warning area is determined, which specifically includes the following steps:
SA31: and determining whether the target vehicle is in the early warning area according to the coordinate information of the target vehicle in the detection data.
In the running process of the vehicle, the vehicle cut-in detection device needs to track and detect the target vehicle entering the surrounding area of the vehicle in real time so as to obtain detection data of each frame of the target vehicle. After the detection data of the target vehicle are obtained, judging the coordinate position of the target vehicle according to the coordinate information of the target vehicle in the latest frame of detection data so as to determine whether the target vehicle is in an early warning area of the vehicle, and executing a cut-in judgment strategy of the subsequent step if the target vehicle is in the early warning area of the vehicle; if the target vehicle is not in the early warning area of the vehicle, the cut-in judging strategy of the subsequent step is not required to be executed, the detection data of the next frame is continuously acquired and judged until the target vehicle is determined to be in the early warning area of the vehicle according to the detection data of a certain frame.
The latest frame of detection data is the detection data acquired at the latest time point of the vehicle cut-in detection device. For example, when the vehicle cut-in detection device obtains the first frame detection data, the first frame detection data is the latest frame detection data; when the vehicle cut-in detection device obtains the second frame detection data, the second frame detection data is the latest frame detection data …, and when the vehicle cut-in detection device obtains the nth frame detection data, the nth frame detection data is the latest frame detection data. The multi-frame detection data is the latest frame detection data and the detection data of the history record before the latest frame detection data. By determining whether the target vehicle meets the first preset condition according to the latest frame of detection data, each frame of detection data can be judged to determine whether the target vehicle is in an early warning area of the vehicle, so that timely vehicle cut-in behavior judgment is performed according to a judgment result, the possibility of missing judgment and delay judgment is reduced, and driving experience and driving safety are improved.
SA32: and if the target vehicle is in the early warning area, determining the detection data as first target data.
After determining whether the target vehicle is in the early warning area of the vehicle according to the detection data, if the target vehicle is in the early warning area of the vehicle, the target vehicle is highly likely to cut into a lane where the vehicle is located, at this time, the latest frame of detection data is recorded as first target data, and the detection data received by each frame are judged in sequence, so that multi-frame first target data can be obtained.
SA33: and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame first target data as a first frame number.
After the multi-frame first target data is obtained, the number of continuous frames taking the latest frame as the end frame in the multi-frame first target data is also required to be determined as the first frame number. That is, it is necessary to determine whether the target vehicle starts from a certain historical frame according to the detection data until the latest frame currently detected is in the early warning area of the vehicle, so as to increase the accuracy of the vehicle cut-in algorithm and reduce erroneous judgment caused by vehicle control change.
For example, after the sensor performs real-time tracking detection on the target vehicle, N frame detection data are obtained, and according to the latest frame detection data of the N frame detection data, it is determined that the target vehicle is in the early warning area of the vehicle, among the N frame detection data, the detection data of the target vehicle in the early warning area of the vehicle are taken as the first target data, 10 frames of first target data are shared in the N frame detection data, and are respectively 1 st frame, 2 nd frame, 3 rd frame, 4 th frame, 7 th frame, 8 th frame, N-3 rd frame, N-2 nd frame, N-1 st frame and N-frame, wherein N-3 is not equal to 9, and if the N-frame is the latest frame, among the 10 frames of first target data, the N-3 th frame, the N-2 nd frame, the N-1 th frame and the N-th frame are consecutive frames taking the latest frame as the end frames, the first frame is determined to be 4.
In this embodiment, the 10 frames of first target data among the N frames of detection data, the (N-3)th to Nth frames being the consecutive detection data frames ending at the latest frame, and the first frame number of 4 are all merely examples; in other embodiments, the amount of first target data, the consecutive detection data frames and the first frame number may take other values, which are not detailed here.
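The trailing consecutive-frame count used above can be sketched as follows. This is a minimal illustration in Python, assuming frames are identified by integer indices; the function name and the value N = 20 are hypothetical, not from the patent.

def count_trailing_consecutive(frame_indices, latest_frame):
    # frame_indices: sorted indices of frames marked as first target data
    # (frames in which the target vehicle is inside the early warning area).
    # Returns 0 when the latest frame itself is not first target data.
    count = 0
    expected = latest_frame
    for idx in reversed(frame_indices):
        if idx == expected:
            count += 1
            expected -= 1
        else:
            break
    return count

# The example from the text, with N = 20 chosen arbitrarily:
N = 20
frames = [1, 2, 3, 4, 7, 8, N - 3, N - 2, N - 1, N]
print(count_trailing_consecutive(frames, N))  # prints 4: the first frame number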
In this embodiment, whether the target vehicle is in the early warning area is determined according to the coordinate information of the target vehicle in the detection data; if so, the detection data are determined as first target data, and the number of consecutive frames ending at the latest frame among the multiple frames of first target data is taken as the first frame number. This clarifies the specific step of determining the first frame number for which the target vehicle is continuously in the early warning area, increases the accuracy of the vehicle cut-in algorithm, and reduces erroneous judgments caused by changes in vehicle control.
In one embodiment, in step S30, determining the second frame number for which the target vehicle continuously tends to cut into the lane of the host vehicle specifically includes the following steps:
SB31: and determining the y-axis coordinate value of the target vehicle in the detection data according to the coordinate information of the target vehicle in the detection data.
After the latest frame of detection data is acquired, the y-axis coordinate value of the target vehicle in each frame of detection data is determined according to the coordinate information of the target vehicle in that frame. The coordinate information of the target vehicle takes the host vehicle coordinate system as the reference coordinate; that is, the y-axis coordinate value of the target vehicle in each frame of detection data is y_v.
SB32: and carrying out lane allocation on the target vehicle according to the y-axis coordinate value of the target vehicle.
After the y-axis coordinate value y_v of the target vehicle is determined, lane allocation is performed on the target vehicle according to y_v. Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, the target vehicle is determined to be on the right side of the host vehicle if y_v is negative, and on the left side if y_v is positive. After the side is determined, the lane of the target vehicle is allocated according to its coordinate information and the lane line information. For example, after the target vehicle is determined to be on the left side of the host vehicle, if the absolute value of y_v is smaller than 1.5 lane widths (a lane width being the width between the left and right lane lines of the host vehicle), the target vehicle may be determined to be in the left lane of the host vehicle; if the absolute value of y_v is larger than 1.5 lane widths, no allocation is performed. Likewise, after the target vehicle is determined to be on the right side of the host vehicle, if the absolute value of y_v is smaller than 1.5 lane widths, the target vehicle may be determined to be in the right lane of the host vehicle; otherwise no allocation is performed.
In other embodiments, lane allocation may be performed in other ways. For example, multiple lane lines around the host vehicle (including a left-left lane line, a left lane line, a right lane line and a right-right lane line) are determined, and the position of the target vehicle is determined from its coordinate information. If the target vehicle is between the left lane line and the right lane line, it is allocated to the lane of the host vehicle (in this case the target vehicle is already in the host vehicle's lane and no cut-in judgment is performed on it); if it is between the left-left lane line and the left lane line, it is allocated to the left lane of the host vehicle; if it is between the right lane line and the right-right lane line, it is allocated to the right lane of the host vehicle. Target vehicles beyond the left-left and right-right lane lines receive neither lane allocation nor cut-in judgment.
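A minimal sketch of the first allocation rule, assuming a fixed lane width of 3.5 m and leaving out the ego-lane and multi-lane-line refinements of the alternative approach; the function name is illustrative only.

def assign_lane(y_v, lane_width=3.5):
    # Host vehicle coordinate system: the left side of the vehicle body
    # is the positive y-axis direction, so y_v > 0 means "left side".
    if abs(y_v) >= 1.5 * lane_width:
        return None  # beyond 1.5 lane widths: no allocation, no cut-in judgment
    return 'left' if y_v > 0 else 'right'

print(assign_lane(2.0))   # 'left'  (2.0 m to the left of the host)
print(assign_lane(-6.0))  # None    (beyond 1.5 * 3.5 = 5.25 m)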
SB33: if the target vehicle is allocated to the left lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the target data of the previous frame.
After lane allocation is performed according to the y-axis coordinate value of the target vehicle, if the target vehicle is allocated to the left lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data.
SB34: and if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the detection data of the previous frame, determining the detection data as second target data.
After determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data, if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of detection data, determining that the relative distance between the geometric center of the target vehicle and the road center line of the lane where the vehicle is located is reduced, and determining that the detection data is second target data.
Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, when the target vehicle is in the left lane of the host vehicle its y-axis coordinate value is positive, namely y_v is greater than 0. If the y-axis coordinate value in the frame of detection data is smaller than that in the previous frame of target data, the lateral distance between the target vehicle and the host vehicle has decreased and the target vehicle is approaching the host vehicle; the frame of detection data is then determined to show a tendency to cut into the lane of the host vehicle and is taken as second target data. Otherwise, the lateral distance has not decreased, the target vehicle is not approaching the host vehicle, and the frame of detection data is determined to show no tendency to cut into the lane of the host vehicle.
SB35: and taking the number of frames meeting the continuous frames taking the latest frame as the end frame in the multi-frame second target data as a second frame number.
According to the judgment manner of steps SB33 to SB34, each frame of the multi-frame detection data is judged in sequence to determine a frames of second target data showing a tendency to cut into the lane of the host vehicle, and the number of consecutive frames ending at the latest frame among these a frames of second target data is determined as the second frame number.
In this embodiment, the y-axis coordinate value of the target vehicle in each frame of detection data is determined according to the coordinate information of the target vehicle in that frame, the coordinate information taking the host vehicle coordinate system as the reference coordinate; lane allocation is then performed according to the y-axis coordinate value. If the target vehicle is allocated to the left lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is smaller than that in the previous frame of target data; if so, the detection data are determined as second target data; finally, the number of consecutive frames ending at the latest frame among the multiple frames of second target data is determined as the second frame number. This specifies, for a target vehicle in the left lane, the process of determining the number of consecutive frames ending at the latest frame that meet the second preset condition as the second frame number, providing a basis for accurate vehicle cut-in judgment.
In one embodiment, after step SB32, that is, after lane allocation to the target vehicle according to the y-axis coordinate value of the target vehicle, the method specifically further includes the steps of:
SB36: if the target vehicle is allocated to the right lane of the host vehicle, the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame.
After lane allocation is performed according to the y-axis coordinate value of the target vehicle, if the target vehicle is allocated to the right lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data.
SB37: and if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame, determining the detection data as second target data.
Since the left side of the host vehicle body is the positive y-axis direction in the host vehicle coordinate system, when the target vehicle is in the right lane of the host vehicle its y-axis coordinate value is negative, namely y_v is less than 0. If the y-axis coordinate value in the frame of detection data is larger than that in the previous frame of target data, the lateral distance between the target vehicle and the host vehicle has decreased and the target vehicle is approaching the host vehicle; the frame of detection data is then determined to meet the second preset condition. Otherwise, the lateral distance has not decreased, the target vehicle is not approaching the host vehicle, and the frame of detection data is determined not to meet the second preset condition.
SB38: and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as a second frame number.
According to the judgment manner of steps SB36 to SB37, each frame of the multi-frame detection data is judged in sequence to determine b frames of detection data meeting the second preset condition, and the number of consecutive frames ending at the latest frame among these b frames is determined as the second frame number.
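Both the left-lane test of steps SB33 to SB35 and the right-lane test of steps SB36 to SB38 reduce to one comparison of successive y-axis coordinate values. A hedged sketch under that reading, with illustrative names:

def meets_second_condition(y_curr, y_prev, lane):
    # With +y pointing to the left of the host vehicle, the lateral
    # distance shrinks when y decreases in the left lane (y > 0) and
    # when y increases in the right lane (y < 0).
    if lane == 'left':
        return y_curr < y_prev
    if lane == 'right':
        return y_curr > y_prev
    return False  # unallocated targets get no cut-in judgment

# Frames whose data pass this test are second target data; the second
# frame number is again the trailing consecutive count ending at the
# latest frame (see count_trailing_consecutive above).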
In one embodiment, after lane allocation is performed according to the y-axis coordinate value of the target vehicle, the y-axis coordinate value in each frame of detection data also needs to be smoothed with a preset window size (for example, 3), and the smoothed value is used as the y-axis coordinate value of the detection data when determining whether it is smaller (or larger) than the y-axis coordinate value in the previous frame of target data. Smoothing the y-axis coordinate value of each frame yields a relatively accurate value, reduces subsequent judgment errors caused by sensor data-acquisition errors in practice, and further improves the accuracy of the second frame number.
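The patent does not fix the smoothing method; a plain moving average over the preset window is one natural reading. A sketch under that assumption:

def smooth_y(history, window=3):
    # history: raw y_v values for one target, oldest first.
    # The mean of the most recent `window` samples is used as the
    # smoothed y-axis coordinate value, so a single noisy sensor
    # reading cannot flip the frame-to-frame cut-in trend test.
    recent = history[-window:]
    return sum(recent) / len(recent)

print(smooth_y([2.10, 2.02, 1.97]))  # 2.03 (mean of the 3-frame window)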
In this embodiment, after lane allocation is performed according to the y-axis coordinate value of the target vehicle, if the target vehicle is allocated to the right lane of the host vehicle, it is determined whether the y-axis coordinate value in the detection data is larger than that in the previous frame of detection data; if so, the detection data are determined as second target data, and the number of consecutive frames ending at the latest frame among the multiple frames of second target data is taken as the second frame number. This specifies the process of determining, among the multi-frame detection data, the number of consecutive frames ending at the latest frame that meet the second preset condition as the second frame number, providing a basis for accurate subsequent vehicle cut-in judgment.
In an embodiment, before step S20, that is, before determining whether the target vehicle is continuously located in the early warning area of the host vehicle, the method specifically further includes the following steps:
s01: and obtaining lane line detection data of the left and right sides of the vehicle.
Before determining whether the target vehicle is continuously located in the early warning area of the host vehicle, lane line detection data of the left and right sides of the host vehicle need to be obtained through sensors, the lane line detection data being described in the form of pixel points.
S02: and converting the lane detection data into coordinate point data under a host vehicle coordinate system, and fitting the coordinate point data to obtain a plurality of lane lines on the left side and the right side of the host vehicle.
After the lane line detection data of the left and right sides of the host vehicle are obtained, the data are converted into coordinate point data under the host vehicle coordinate system, and the coordinate point data are curve-fitted by the least squares method to obtain a plurality of lane lines on the left and right sides of the host vehicle. The plurality of lane lines at least comprise a first lane line and a second lane line located on the left and right sides of the host vehicle respectively.
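The patent only specifies "least squares"; the following sketch assumes a third-degree polynomial fit y = f(x) in the host vehicle frame using numpy.polyfit (the polynomial degree is an assumption, not from the patent).

import numpy as np

def fit_lane_line(points, degree=3):
    # points: list of (x_v, y_v) coordinate points of one lane line
    # in the host vehicle coordinate system.
    xs = np.array([p[0] for p in points])
    ys = np.array([p[1] for p in points])
    # Least-squares polynomial coefficients, highest order first.
    return np.polyfit(xs, ys, degree)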
For example, lane line detection data (images) of the left and right sides of the host vehicle can be obtained through an intelligent camera, the data comprising detections of 4 lane lines, namely a left-left lane line, a left lane line, a right lane line and a right-right lane line. The lane line detection data form a bird's-eye view described in the form of pixels with the image coordinate system as the reference coordinate system. The image coordinate system is defined as follows: the image is 512 x 512 pixels, the coordinate origin (0, 0) is the upper-left corner of the image, the positive x-axis points to the right, the positive y-axis points downward, and one pixel represents a 20 cm grid.
The coordinates of the host vehicle in the image (lane line detection data) are (256, 411), and the pixel values of the left-left lane line, the left lane line, the right lane line and the right-right lane line in the image are 100, 150, 200 and 250 respectively. According to the different pixel values of the different lane lines, the 4 lane lines may then be represented as 4 point sets:
R_1, R_2, R_3 and R_4, being the point sets of the left-left lane line, the left lane line, the right lane line and the right-right lane line respectively. A point a_ij = (x_ij, y_ij) in a point set represents a point in the image coordinate system, with i = 1, 2, 3, 4 and j = 1, ..., n_i, where n_1, n_2, n_3 and n_4 denote the numbers of points in R_1, R_2, R_3 and R_4 respectively.
A point a_ij = (x_ij, y_ij) in the image coordinate system is converted into a point in the host vehicle coordinate system, whose coordinates (x_v, y_v, z_v) are calculated by the following formulas:

x_v = (O_vy - y_ij) * 0.2;

y_v = (O_vx - x_ij) * 0.2;

z_v = 0;

where x_ij and y_ij are the x-axis and y-axis coordinate values of point a_ij in the image coordinate system; x_v, y_v and z_v are the x-axis, y-axis and z-axis coordinates in the host vehicle coordinate system; and (O_vx, O_vy) are the coordinates of the host vehicle in the image (the lane line detection data).
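A direct transcription of these conversion formulas into code; the default origin (256, 411) and the 0.2 m-per-pixel scale are the example values given above, not fixed by the patent.

def image_to_vehicle(x_ij, y_ij, o_vx=256, o_vy=411, scale=0.2):
    # Image coordinate system: origin at the top-left corner,
    # x to the right, y downward, one pixel = 20 cm.
    # Host vehicle coordinate system: x forward, y to the left,
    # lane line points assumed to lie on the ground plane (z_v = 0).
    x_v = (o_vy - y_ij) * scale
    y_v = (o_vx - x_ij) * scale
    return x_v, y_v, 0.0

print(image_to_vehicle(256, 411))  # (0.0, 0.0, 0.0): the host's own pixel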
In this embodiment, the coordinates (256, 411) of the host vehicle in the image and the pixel values 100, 150, 200 and 250 of the left-left lane line, the left lane line, the right lane line and the right-right lane line are only exemplary; in other embodiments, the coordinates of the host vehicle in the image may be other coordinates determined according to the actual situation, and the pixel values of the lane lines may be other actual pixel values, which are not detailed here.
S03: the method comprises the steps of taking the areas in the preset width ranges on the left side and the right side of a first lane line as a first early warning area, and taking the areas in the preset width ranges on the left side and the right side of a second lane line as a second early warning area.
After a first lane line and a second lane line which are respectively positioned at the left side and the right side of the vehicle are obtained, the area in the preset width range at the left side and the right side of the first lane line is used as a first early warning area, and the area in the preset width range at the left side and the right side of the second lane line is used as a second early warning area.
The preset width may be one third of the lane width, i.e., lane_width/3. The lane width (lane_width) may be a fixed width preset according to a typical lane, for example 3.5 m. The lane width may also be the width of the lane in which the host vehicle is located, namely the distance between the left and right lane lines of the host vehicle.
As shown in fig. 5, the vehicle at the bottom of the drawing is the host vehicle; the figure shows the left-left lane line, the left lane line (the first lane line), the right lane line (the second lane line) and the right-right lane line of the host vehicle, together with the first early warning area (left early warning area) and the second early warning area (right early warning area).
In this embodiment, before determining whether a target vehicle is continuously located in the early warning area of the host vehicle, lane line detection data of the left and right sides of the host vehicle are obtained in the form of pixel points, converted into coordinate point data under the host vehicle coordinate system, and fitted to obtain a plurality of lane lines on the left and right sides of the host vehicle, at least including a first lane line and a second lane line on the left and right sides respectively. The areas within the preset width range on both sides of the first lane line are taken as the first early warning area, and the areas within the preset width range on both sides of the second lane line as the second early warning area. This clearly determines the lane lines around the host vehicle, provides the specific process of determining the early warning areas from the lane lines, and provides a basis for determining whether the target vehicle meets the first preset condition according to the latest frame of detection data. Meanwhile, determining the lane lines around the host vehicle facilitates subsequent lane allocation for the target vehicle and provides a basis for judging whether the target vehicle meets the second preset condition.
In one embodiment, in step SA31, it is determined whether the target vehicle is in the early warning area according to the coordinate information of the target vehicle in each frame of detection data, and the method specifically includes the following steps:
SA311: and determining whether the target vehicle is in the first early warning area or the second early warning area according to the coordinate information of the target vehicle in the detection data.
After the detection data of the multi-frame target vehicle are obtained, determining whether the target vehicle is in the first early warning area or the second early warning area according to the coordinate information of the target vehicle in the latest frame of detection data.
The first early warning area and the second early warning area are the areas within the preset width range of the left and right lane lines of the host vehicle, the preset width being lane_width/3. The farthest distance between the boundary of the first or second early warning area and the center of the lane in which the host vehicle is located is L = 5*lane_width/6, where L represents the distance between the farthest boundary line of the early warning area and the host vehicle. By determining the y-axis coordinate value of the target vehicle, whether the target vehicle is in the first or second early warning area can be determined from the target vehicle's y-axis coordinate value and L.
For example, if the absolute value of the y-axis coordinate value of the target vehicle is smaller than or equal to L, the geometric center of the target vehicle is located in the early warning area (the first early warning area or the second early warning area), and the target vehicle is determined to be located in the first early warning area or the second early warning area; if the absolute value of the y-axis coordinate value of the target vehicle is larger than L, the geometric center of the target vehicle is not located in the early warning area (the first early warning area or the second early warning area) of the vehicle, and the target vehicle is determined not to be located in the first early warning area or the second early warning area.
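The outer-bound test described above can be written compactly. This sketch mirrors the simplified test in the text, which checks only the farthest boundary L and not the inner edge of each band; the 3.5 m default lane width is an assumption.

def in_warning_area(y_v, lane_width=3.5):
    # Each early warning band extends lane_width/3 to either side of a
    # host lane line, so the farthest band edge lies
    # L = lane_width/2 + lane_width/3 = 5 * lane_width / 6
    # from the center of the host's lane.
    L = 5.0 * lane_width / 6.0
    return abs(y_v) <= L

print(in_warning_area(2.5))  # True:  2.5 <= 5 * 3.5 / 6 = 2.916...
print(in_warning_area(3.5))  # False: one full lane width away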
In other embodiments, the coordinates of the first early warning area and the second early warning area may be determined according to the preset width, and whether the coordinates of the target vehicle fall into the coordinates of the first early warning area or the second early warning area may be determined, if the coordinates of the target vehicle fall into the coordinates of the first early warning area or the second early warning area, it indicates that the target vehicle is located in the first early warning area or the second early warning area correspondingly, otherwise, it indicates that the target vehicle is not located in the first early warning area nor in the second early warning area, and the specific process is not repeated.
SA312: and if the target vehicle is in the first early warning area or the second early warning area, determining that the target vehicle is in the early warning area.
After determining whether the target vehicle is in the first early warning area or the second early warning area, if the target vehicle is in the first early warning area or the second early warning area, the geometric center of the target vehicle is located in the early warning area of the vehicle, and then the target vehicle is determined to be in the early warning area.
SA313: if the target vehicle is not in the first early warning area and the target vehicle is not in the second early warning area, determining that the target vehicle is not in the early warning area.
After determining whether the target vehicle is in the first early warning area or the second early warning area, if the target vehicle is in neither, meaning that its geometric center is not located in the early warning area of the host vehicle, it is determined that the target vehicle is not in the early warning area.
In this embodiment, according to the coordinate information of the target vehicle in the latest frame of detection data, whether the target vehicle is in the first warning area or the second warning area is determined, if the target vehicle is in the first warning area or the second warning area, the target vehicle is determined to be in the warning area, if the target vehicle is not in the first warning area and the target vehicle is not in the second warning area, the target vehicle is determined to be not in the warning area, a specific process of determining whether the target vehicle meets the first preset condition according to the detection data is defined, and a judgment basis is provided for the subsequent vehicle cut-in judgment.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a vehicle cut-in detection device is provided, and the vehicle cut-in detection device corresponds to the vehicle cut-in detection method in the above embodiment one by one. As shown in fig. 6, the vehicle cut-in detection device includes a detection module 601, a first determination module 602, and a second determination module 603. The functional modules are described in detail as follows:
The detection module 601 is configured to perform real-time tracking detection on a target vehicle entering a surrounding area of the host vehicle;
the first determining module 602 is configured to determine, when the target vehicle is continuously located in the early warning area of the host vehicle and the target vehicle has a tendency to continuously cut into the lane where the host vehicle is located, whether a first frame number of the target vehicle continuously located in the early warning area is greater than a first frame number threshold, and determine whether a second frame number of the target vehicle continuously cut into the lane where the host vehicle is located is greater than a second frame number threshold, where the first frame number threshold and the second frame number threshold are frame number thresholds adjusted according to a lateral speed of the target vehicle;
the second determining module 603 is configured to determine that the target vehicle cuts into the lane where the host vehicle is located if the first frame number is greater than the first frame number threshold and the second frame number is greater than the second frame number threshold.
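The final decision of the second determining module is a conjunction of the two threshold tests; a one-line sketch with illustrative names:

def is_cut_in(first_frame_number, second_frame_number, f_th_1, f_th_2):
    # Cut-in is confirmed only when both consecutive-frame counts
    # exceed their speed-adjusted thresholds.
    return first_frame_number > f_th_1 and second_frame_number > f_th_2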
Further, the vehicle cut-in detection device further includes a third determining module 604, where the third determining module 604 is specifically configured to determine the first frame number threshold and the second frame number threshold by:
acquiring detection data for carrying out real-time tracking detection on the target vehicle;
determining the transverse speed of a target vehicle in the latest frame of detection data, and determining the detection frame rate for carrying out real-time tracking detection on the target vehicle;
Determining a preset transverse speed threshold, a first time threshold and a second time threshold;
calculating to obtain a first frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle;
and calculating to obtain a second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
Further, the third determining module 604 is specifically configured to calculate the first frame number threshold by the following formula:
F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral));

where F_th_1 is the first frame number threshold, round denotes rounding to the nearest integer, f is the detection frame rate, th_1_std is the first time threshold, v_lateral_std is the lateral speed threshold, and v_lateral is the lateral speed of the target vehicle.
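Both frame number thresholds follow the same formula with different time thresholds; a sketch with illustrative parameter values (the 20 Hz frame rate, 0.5 s time threshold and 1.0 m/s reference lateral speed are assumptions, not from the patent):

def frame_threshold(f, th_std, v_lateral, v_lateral_std):
    # F_th = round(f * th_std * (v_lateral_std / v_lateral)):
    # a higher lateral speed lowers the threshold, so a fast-cutting
    # vehicle is confirmed after fewer consecutive frames.
    return round(f * th_std * (v_lateral_std / v_lateral))

print(frame_threshold(f=20, th_std=0.5, v_lateral=2.0, v_lateral_std=1.0))  # 5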
Further, the third determining module 604 is specifically configured to:
acquiring first tracking data of a first sensor for carrying out real-time tracking detection on a target vehicle;
acquiring second tracking data of a second sensor for carrying out real-time tracking detection on a target vehicle, wherein the first sensor and the second sensor are two types of different sensors;
and performing matching fusion on each frame of first tracking data and the corresponding second tracking data based on the Hungarian matching algorithm and the Kalman filtering algorithm to obtain multi-frame detection data.
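A minimal sketch of the per-frame association step, using scipy's Hungarian solver; the Euclidean cost, the 2.0 m gate and the omission of the Kalman update step are all simplifying assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(first_positions, second_positions, gate=2.0):
    # first_positions: (N, 2) array of (x, y) track positions from the
    # first sensor; second_positions: (M, 2) array from the second.
    # Pairwise Euclidean distances form the assignment cost matrix.
    cost = np.linalg.norm(
        first_positions[:, None, :] - second_positions[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    # Keep only pairs closer than the gate; each matched pair would then
    # be fused by a Kalman filter update (not shown here).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]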
Further, the second determining module 603 is specifically further configured to determine the first frame number by:
determining whether the target vehicle is in an early warning area according to the coordinate information of the target vehicle in each frame of detection data;
if the target vehicle is in the early warning area, determining the detection data as first target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame first target data as a first frame number.
Further, the second determining module 603 is specifically further configured to determine the second frame number by:
determining a y-axis coordinate value of the target vehicle in each frame of detection data according to the coordinate information of the target vehicle in each frame of detection data, wherein the coordinate information of the target vehicle is coordinate information taking a host vehicle coordinate system as a reference coordinate;
lane allocation is carried out on the target vehicle according to the y-axis coordinate value of the target vehicle;
if the target vehicle is allocated to the left lane of the host vehicle, determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of target data;

if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the previous frame of detection data, determining the detection data as second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as a second frame number.
Further, after lane allocation is performed on the target vehicle according to the y-axis coordinate value of the target vehicle, the second determining module 603 is specifically further configured to:
if the target vehicle is allocated to the right lane of the host vehicle, determining whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the previous frame of detection data;
if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame, determining the detection data as second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the multi-frame second target data as a second frame number.
Further, before determining whether the first number of frames of the target vehicle continuously in the early warning area is greater than the first number of frames threshold, the detection module 601 is specifically further configured to:
obtaining lane line detection data of the left and right sides of the host vehicle, the lane line detection data being described in the form of pixel points;

converting the lane line detection data into coordinate point data under the host vehicle coordinate system, and fitting the coordinate point data to obtain a plurality of lane lines on the left and right sides of the host vehicle, the plurality of lane lines at least comprising a first lane line and a second lane line located on the left and right sides of the host vehicle respectively;
taking the areas within the preset width range on the left and right sides of the first lane line as a first early warning area, and the areas within the preset width range on the left and right sides of the second lane line as a second early warning area.
Further, the third determining module 604 is specifically configured to calculate the second frame number threshold by the following formula:
F_th_2 = round(f * th_2_std * (v_lateral_std / v_lateral));

where F_th_2 is the second frame number threshold, round denotes rounding to the nearest integer, f is the detection frame rate, th_2_std is the second time threshold, v_lateral_std is the lateral speed threshold, and v_lateral is the lateral speed of the target vehicle.
For specific limitations of the vehicle cut-in detection device, reference may be made to the above limitations of the vehicle cut-in detection method, and no further description is given here. The various modules in the vehicle cut-in detection apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a vehicle cut-in detection device is provided that includes a processor, a memory, and a database connected by a bus. The processor of the vehicle cut-in detection device is configured to provide computing and control capabilities. The memory of the device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, computer programs and a database. The internal memory provides an environment for the operation of the operating system and the computer programs in the nonvolatile storage medium. The database of the device is used for storing related data such as the multi-frame detection data, the multi-frame target data, the first frame number, the second frame number, the first frame number threshold and the second frame number threshold. The computer program is executed by the processor to implement the vehicle cut-in detection method.
In one embodiment, as shown in fig. 7, a vehicle cut-in detection apparatus is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the vehicle cut-in detection method described above when executing the computer program.
In one embodiment, a readable storage medium is provided having a computer program stored thereon, which when executed by a processor, implements the steps of the vehicle cut-in detection method described above.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by way of a computer program, which may be stored on a non-volatile computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, processor, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (9)

1. A vehicle cut-in detection method, comprising:
carrying out real-time tracking detection on a target vehicle entering the surrounding area of the vehicle;
when the target vehicle is continuously located in an early warning area of the vehicle and the target vehicle has a tendency to continuously cut into a lane where the vehicle is located, determining whether a first frame number of the target vehicle continuously located in the early warning area is larger than a first frame number threshold value, and determining whether a second frame number of the target vehicle continuously cutting into the lane where the vehicle is located is larger than a second frame number threshold value, wherein the first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle;
If the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value, determining that the target vehicle cuts into a lane where the vehicle is located;
the first frame number threshold and the second frame number threshold are determined by:
acquiring detection data for carrying out real-time tracking detection on the target vehicle;
determining the transverse speed of the target vehicle in the latest frame of detection data, and determining the detection frame rate for carrying out real-time tracking detection on the target vehicle;
determining a preset transverse speed threshold, a first time threshold and a second time threshold;
according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle, calculating to obtain the first frame number threshold;
and calculating and obtaining the second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
2. The vehicle cut-in detection method according to claim 1, wherein the acquiring detection data for real-time tracking detection of the target vehicle includes:
acquiring first tracking data of a first sensor for carrying out real-time tracking detection on the target vehicle;
Acquiring second tracking data of a second sensor for carrying out real-time tracking detection on the target vehicle, wherein the first sensor and the second sensor are two types of different sensors;
and performing matching fusion on each frame of first tracking data and the corresponding second tracking data based on a Hungarian matching algorithm and a Kalman filtering algorithm to obtain multi-frame detection data.
3. The vehicle cut-in detection method according to claim 1, wherein the first frame number is determined by:
determining whether the target vehicle is in the early warning area according to the coordinate information of the target vehicle in the detection data of each frame;
if the target vehicle is in the early warning area, determining that the detection data is first target data;
and taking the number of continuous frames taking the latest frame as an end frame in the first target data of a plurality of frames as the first frame number.
4. The vehicle cut-in detection method according to claim 1, wherein the second frame number is determined by:
determining a y-axis coordinate value of the target vehicle in each frame of detection data according to the coordinate information of the target vehicle in each frame of detection data, wherein the coordinate information of the target vehicle is coordinate information taking a host vehicle coordinate system as a reference coordinate;
Lane allocation is carried out on the target vehicle according to the y-axis coordinate value of the target vehicle;
if the target vehicle is allocated to the left lane of the host vehicle, determining whether the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the detection data of the previous frame;
if the y-axis coordinate value in the detection data is smaller than the y-axis coordinate value in the detection data of the previous frame, determining the detection data as second target data;
and taking the number of continuous frames taking the latest frame as an end frame in the second target data of a plurality of frames as the second frame number.
5. The vehicle cut-in detection method according to claim 4, wherein after the lane allocation to the target vehicle according to the y-axis coordinate value of the target vehicle, the method further comprises:
if the target vehicle is allocated to the right lane of the host vehicle, determining whether the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame;
if the y-axis coordinate value in the detection data is larger than the y-axis coordinate value in the detection data of the previous frame, determining that the detection data is the second target data;
And taking the number of continuous frames taking the latest frame as an end frame in the second target data of a plurality of frames as the second frame number.
6. The vehicle cut-in detection method of any one of claims 1-5, wherein before the determining whether the first frame number for which the target vehicle is continuously in the early warning area is greater than the first frame number threshold, the method further comprises:
lane line detection data of the left side and the right side of the vehicle are obtained, and the lane line detection data are described in the form of pixel points;
converting the lane line detection data into coordinate point data under a host vehicle coordinate system, and fitting the coordinate point data to obtain a plurality of lane lines on the left side and the right side of the host vehicle, wherein the lane lines at least comprise a first lane line and a second lane line which are respectively positioned on the left side and the right side of the host vehicle;
and taking the areas in the preset width ranges at the left and right sides of the first lane line as a first early warning area, and taking the areas in the preset width ranges at the left and right sides of the second lane line as a second early warning area.
7. The vehicle cut-in detection method according to any one of claims 1 to 5, wherein the first frame number threshold is calculated by the following formula:
F_th_1 = round(f * th_1_std * (v_lateral_std / v_lateral));

wherein F_th_1 is the first frame number threshold, round denotes rounding, f is the detection frame rate, th_1_std is the first time threshold, v_lateral_std is the lateral speed threshold, and v_lateral is the lateral speed of the target vehicle.
8. A vehicle cut-in detection device, comprising:
the detection module is used for carrying out real-time tracking detection on the target vehicle entering the surrounding area of the vehicle;
the first determining module is used for determining, when the target vehicle is continuously in the early warning area of the host vehicle and the target vehicle has a tendency of continuously cutting into the lane where the host vehicle is located, whether a first frame number for which the target vehicle is continuously in the early warning area is larger than a first frame number threshold value, and determining whether a second frame number for which the target vehicle continuously tends to cut into the lane where the host vehicle is located is larger than a second frame number threshold value, wherein the first frame number threshold value and the second frame number threshold value are frame number threshold values adjusted according to the transverse speed of the target vehicle;
the second determining module is used for determining that the target vehicle cuts into the lane where the vehicle is located if the first frame number is greater than the first frame number threshold value and the second frame number is greater than the second frame number threshold value;
The first frame number threshold and the second frame number threshold are determined by:
acquiring detection data for carrying out real-time tracking detection on the target vehicle;
determining the transverse speed of the target vehicle in the latest frame of detection data, and determining the detection frame rate for carrying out real-time tracking detection on the target vehicle;
determining a preset transverse speed threshold, a first time threshold and a second time threshold;
according to the transverse speed, the detection frame rate, the transverse speed threshold and the first time threshold of the target vehicle, calculating to obtain the first frame number threshold;
and calculating and obtaining the second frame number threshold according to the transverse speed, the detection frame rate, the transverse speed threshold and the second time threshold of the target vehicle.
9. A readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the vehicle cut-in detection method according to any one of claims 1 to 7.