CN117555333A - Dynamic travel track processing system and method - Google Patents

Info

Publication number
CN117555333A
Authority
CN
China
Prior art keywords
vehicle
behavior
module
target vehicle
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311555834.4A
Other languages
Chinese (zh)
Inventor
沈春松
王嘉鸿
王广利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ccfrom Co ltd
Original Assignee
Shenzhen Ccfrom Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ccfrom Co ltd filed Critical Shenzhen Ccfrom Co ltd
Priority to CN202311555834.4A priority Critical patent/CN117555333A/en
Publication of CN117555333A publication Critical patent/CN117555333A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

A dynamic travel track processing system and method belong to the technical field of image processing and aim to solve the problem that existing dynamic travel track image processing technology has insufficient prediction capability and still needs improvement. According to the invention, a target detection and tracking unit performs target detection and tracking using radar data; a behavior intention recognition unit analyzes the motion features and context information of the target vehicle and classifies and predicts its behavior; an occlusion detection and recognition unit detects and identifies occluding objects on the road by analyzing and processing sensor data; and a path planning and decision unit re-plans the travel path according to the behavior intention of the target vehicle and the positions of the occluding objects.

Description

Dynamic travel track processing system and method
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a system and a method for processing a dynamic travel track.
Background
With the development of image processing technology, sensors are used ever more widely, especially in robotics and automatic driving, where optimal routes are planned and obstacles avoided in real time by means of sensors and processing algorithms. Although automatic driving technology can sense the surrounding environment in real time and make judgments and decisions according to the road state and the motion of vehicles and pedestrians, intervention or supervision by a human driver is still required in complex traffic environments with many unpredictable conditions: the prediction capability of existing dynamic travel track image processing technology is insufficient and still needs to be improved.
To solve the above problems, a dynamic travel track processing system and method are provided.
Disclosure of Invention
The invention aims to provide a dynamic travel track processing system and method that solve the problem, identified in the background, that existing dynamic travel track image processing technology has insufficient prediction capability and needs to be improved.
In order to achieve the above purpose, the present invention provides the following technical solution: a dynamic travel track processing system, comprising:
a target detection and tracking unit: detects and tracks the position, speed, acceleration, and other information of the vehicle from sensor data, based on a radar target detection and tracking algorithm;
a behavior intention recognition unit: analyzes the behavior intention of the vehicle, including deceleration, braking, and lane changing, and classifies and predicts the behavior of the vehicle by combining its motion features and context information;
an occlusion detection and recognition unit: detects and identifies occluding objects on the road, including static obstacles and moving objects, and identifies key information such as their position, size, and type by processing and analyzing sensor data;
a path planning and decision unit: re-plans the travel path according to the vehicle's behavior intention and the positions of the occluding objects.
Further, the target detection and tracking unit includes:
a target detection and identification module: detects regions in the sensor data that may be vehicles, and identifies and classifies each candidate to determine whether it is a vehicle;
a target tracking module: tracks the target vehicle in successive image frames and estimates its position, size, motion state, and other information;
a motion estimation module: estimates the motion state of the vehicle, including speed and acceleration, from the continuous tracking results;
a horizontal distance measurement module: measures the horizontal distance between the target vehicle and other targets, such as occluding objects and road edges, for path planning and decision making.
Further, the behavior intention recognition unit includes:
a data preprocessing module: preprocesses the sensor data, including filtering, noise reduction, and calibration, in preparation for feature extraction and for training the behavior intention recognition;
a feature extraction module: extracts useful features from the preprocessed data to describe the motion features and context information of the vehicle;
a behavior classification and prediction model: a classification and prediction model is constructed using deep learning techniques and trained on the extracted features;
context modeling: since the prediction of behavior intention may be affected by the surrounding environment of the vehicle, context modeling associates the behavior of the target vehicle with context information from other vehicles, traffic lights, road signs, and the like.
Further, the behavior classification and prediction model adopts a support vector machine algorithm, comprising the following steps:
S01: data preparation: features related to behavior intention, such as the speed, acceleration, and heading angle of the vehicle, are first extracted from the sensor data; these features are then paired with the corresponding behavior labels to form a training data set.
S02: feature conversion and mapping: for data that is not linearly separable, features may be mapped to a high-dimensional space by a kernel function.
S03: support vector machine training: the optimal hyperplane is determined from the training data set by maximizing the margin, so as to classify the data; the sample points closest to the classification boundary are selected as support vectors, and the hyperplane parameters are obtained by solving an optimization problem.
S04: behavior classification and prediction: new test samples are classified and predicted with the trained SVM model; each test sample is mapped into the high-dimensional feature space of the training data, and its behavior category is determined by its position relative to the hyperplane.
Further, the occlusion detection and recognition unit includes:
an object detection module: detects regions in the sensor data that may be occluding objects;
an object classification module: classifies a detected occlusion region to determine its type;
an object tracking module: tracks the occluding object in successive image frames and estimates its position, size, motion state, and other information;
an occluding-object attribute analysis module: further analyzes the attributes of detected occluding objects, for example their shape, speed, and acceleration, by motion estimation and trajectory analysis techniques.
Further, in the target detection and identification module and the object detection module, the target vehicle or occluding object is recognized using the YOLO algorithm, comprising the following steps:
S01: network input processing: the input image is divided into a grid of fixed size; each grid cell is responsible for detecting one object and predicts several bounding boxes with class confidences;
S02: feature extraction: a pre-trained convolutional neural network is used as a feature extractor, extracting rich feature representations from the original image through convolution and pooling layers;
S03: feature fusion: feature maps of different layers are fused so that the algorithm can detect objects at different scales; lower-level feature maps are connected to higher-level feature maps through cross-layer connections.
S04: bounding box prediction: for each grid cell, the bounding box position relative to the whole image is calculated from the parameters predicted by the network, including the coordinates, width, and height;
S05: class prediction: for each grid cell, the object class is determined from the class probabilities predicted by the network;
S06: non-maximum suppression: since the same object may be detected by several grid cells, the bounding boxes are filtered with the non-maximum suppression (NMS) algorithm to avoid duplicate detections; an overlap threshold is set and only the box with the highest confidence is retained.
Further, the path planning and decision unit comprises:
a behavior intention classification algorithm: analyzes the motion features and context information of the target vehicle using deep learning, and classifies and predicts its behavior intention, such as deceleration, braking, or lane changing;
a collision risk assessment algorithm: estimates the collision risk between the target vehicle and an occluding object based on their distance, relative speed, and other information;
a path generation module: re-plans the path according to the behavior intention of the target vehicle and the positions of the occluding objects;
a decision and control module: formulates a control strategy, such as adjusting acceleration or steering angle, according to the path planning result and the current vehicle state.
Further, in the collision risk assessment algorithm, the collision risk between the target vehicle and an occluding object is assessed using the least squares method, comprising the following steps:
S01: distance and time modeling: the change of the distance between the target vehicle and the occluding object over time is modeled from the relevant data about the two;
S02: collision risk assessment: a model of distance as a function of time is fitted by least squares, and the residuals between predicted and observed values are calculated, with larger residuals indicating higher collision risk;
S03: solving the optimization problem: the collision risk assessment is cast as an optimization problem, and the optimal model parameters are found by minimizing the sum of squared residuals.
The invention provides another technical solution: a method for processing a dynamic travel track, comprising the following steps:
S1: acquiring sensor data: data about the vehicle's surroundings is acquired from sensors such as radar and cameras, including the position, speed, acceleration, and other information of the target vehicle and of other occluding objects;
S2: target detection and tracking: the target vehicle and other occluding objects are detected and tracked with a target detection and tracking algorithm based on radar or camera data, and their positions and motion states are obtained at successive time steps;
S3: collision risk assessment: the collision risk between the target vehicle and the occluding objects is estimated from their relative position, speed, and other information, and a collision probability or risk index is calculated;
S4: path planning: based on the collision risk assessment and the current vehicle state, a safe path is planned that avoids areas of possible collision with the target vehicle;
S5: decision and control: according to the path planning result and the current vehicle state, a corresponding decision strategy is formulated, such as adjusting acceleration or steering angle, so as to avoid the moving target vehicle.
Compared with the prior art, the invention has the following beneficial effects:
In the dynamic travel track processing system and method, the target detection and tracking unit performs target detection and tracking using radar data and obtains the position, speed, acceleration, and other information of the target vehicle; the behavior intention recognition unit analyzes the motion features and context information of the target vehicle and classifies and predicts its behavior; the occlusion detection and recognition unit detects and identifies occluding objects on the road through the analysis and processing of sensor data; and the path planning and decision unit re-plans the travel path according to the behavior intention of the target vehicle and the positions of the occluding objects, so as to ensure safe passage.
Drawings
FIG. 1 is a schematic diagram of the present invention;
FIG. 2 is a block diagram of a processing system according to the present invention;
FIG. 3 is a block diagram of a target detection and tracking unit and a behavior intent recognition unit according to the present invention;
FIG. 4 is a block diagram of an occlusion detection and recognition unit and a path planning and decision unit of the present invention;
FIG. 5 is a logical block diagram of the present invention;
fig. 6 is a flow chart of the method of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to solve the problem that the prediction capability of existing dynamic travel track image processing technology is insufficient and still needs to be improved, the following preferred technical solution is provided, as shown in figs. 1-6:
A dynamic travel track processing system, comprising:
a target detection and tracking unit: detects and tracks the position, speed, acceleration, and other information of the vehicle from sensor data, based on a radar target detection and tracking algorithm;
a behavior intention recognition unit: analyzes the behavior intention of the vehicle, including deceleration, braking, and lane changing, and classifies and predicts the behavior of the vehicle by combining its motion features and context information;
an occlusion detection and recognition unit: detects and identifies occluding objects on the road, including static obstacles and moving objects, and identifies key information such as their position, size, and type by processing and analyzing sensor data;
a path planning and decision unit: re-plans the travel path according to the vehicle's behavior intention and the positions of the occluding objects.
The target detection and tracking unit includes:
a target detection and identification module: detects regions in the sensor data that may be vehicles, and identifies and classifies each candidate to determine whether it is a vehicle;
a target tracking module: tracks the target vehicle in successive image frames and estimates its position, size, motion state, and other information;
a motion estimation module: estimates the motion state of the vehicle, including speed and acceleration, from the continuous tracking results;
a horizontal distance measurement module: measures the horizontal distance between the target vehicle and other targets, such as occluding objects and road edges, for path planning and decision making.
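As an illustrative sketch (not part of the claimed embodiment), the motion estimation module's role can be approximated by finite differences over the tracked positions; the `(x, y)` centers and the fixed frame interval `dt` below are assumptions of this example:

```python
def estimate_motion(positions, dt):
    """Estimate per-frame speed and acceleration from tracked positions.

    positions: successive (x, y) centers reported by the target tracking module
    dt: time between frames, assumed constant
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        # displacement between consecutive frames divided by the frame interval
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist / dt)
    # acceleration as the finite difference of successive speeds
    accels = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accels
```

In practice the estimated motion state would typically be smoothed, for example with a Kalman filter, before being passed on for distance measurement and path planning.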
The behavior intention recognition unit includes:
a data preprocessing module: preprocesses the sensor data, including filtering, noise reduction, and calibration, in preparation for feature extraction and for training the behavior intention recognition;
a feature extraction module: extracts useful features from the preprocessed data to describe the motion features and context information of the vehicle;
a behavior classification and prediction model: a classification and prediction model is constructed using deep learning techniques and trained on the extracted features;
context modeling: since the prediction of behavior intention may be affected by the surrounding environment of the vehicle, context modeling associates the behavior of the target vehicle with context information from other vehicles, traffic lights, road signs, and the like.
In the behavior classification and prediction model, a support vector machine algorithm is adopted, comprising the following steps:
Step one: data preparation: features related to behavior intention, such as the speed, acceleration, and heading angle of the vehicle, are first extracted from the sensor data; these features are then paired with the corresponding behavior labels to form a training data set.
Step two: feature conversion and mapping: for data that is not linearly separable, features may be mapped to a high-dimensional space by a kernel function.
Step three: support vector machine training: the optimal hyperplane is determined from the training data set by maximizing the margin, so as to classify the data; the sample points closest to the classification boundary are selected as support vectors, and the hyperplane parameters are obtained by solving an optimization problem.
Step four: behavior classification and prediction: new test samples are classified and predicted with the trained SVM model; each test sample is mapped into the high-dimensional feature space of the training data, and its behavior category is determined by its position relative to the hyperplane.
For example, given N training samples (x_i, y_i), where x_i is an input feature vector and y_i is the corresponding behavior class label, the goal is to find a hyperplane w·x + b = 0 that maximizes the margin between positive and negative samples;
the equation of the hyperplane is expressed as: w·x + b = 0;
the optimization problem can be expressed as: minimize ||w||^2 / 2, subject to y_i (w·x_i + b) ≥ 1 for all i;
where ||w|| denotes the norm of the weight vector w, b denotes the bias term, and y_i ∈ {−1, +1} is the sample label representing the two behavior categories; the support vector machine determines the optimal hyperplane for behavior classification and prediction by solving this optimization problem.
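The constrained problem above is usually handed to a quadratic-programming solver; as a minimal sketch, the equivalent regularized hinge-loss objective can be minimized by per-sample sub-gradient descent. The one-dimensional speed feature, the ±1 labels, and the learning-rate settings below are invented for illustration and are not from the patent:

```python
def train_linear_svm(samples, labels, lr=0.01, lam=0.01, epochs=500):
    """Train a linear SVM by per-sample sub-gradient descent.

    Minimizes lam * ||w||^2 / 2 + max(0, 1 - y_i * (w . x_i + b)) per sample,
    a soft-margin form of the optimization problem in the text.
    """
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # sample violates the margin: hinge term contributes
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                # only the regularizer contributes
                w = [wi - lr * lam * wi for wi in w]
    return w, b


def predict(w, b, x):
    """Behavior category from the side of the hyperplane the sample falls on."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

With speed as the only feature and +1 standing for, say, a lane-change intention, training on slow samples [[1.0], [2.0]] (label -1) and fast samples [[8.0], [9.0]] (label +1) yields a separating threshold between the two groups.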
The occlusion detection and recognition unit includes:
an object detection module: detects regions in the sensor data that may be occluding objects;
an object classification module: classifies a detected occlusion region to determine its type;
an object tracking module: tracks the occluding object in successive image frames and estimates its position, size, motion state, and other information;
an occluding-object attribute analysis module: further analyzes the attributes of detected occluding objects, for example their shape, speed, and acceleration, by motion estimation and trajectory analysis techniques.
In the target detection and identification module and the object detection module, the target vehicle or occluding object is recognized using the YOLO algorithm. Suppose the YOLO algorithm is used to detect three kinds of objects in one image: cars, pedestrians, and bicycles. The recognition steps are:
Step one: network input processing: the input image is divided into a grid of fixed size, such as 7×7; each grid cell is responsible for detecting one object and predicts several bounding boxes with class confidences;
Step two: feature extraction: a pre-trained convolutional neural network is used as a feature extractor, extracting rich feature representations from the original image through convolution and pooling layers;
Step three: feature fusion: feature maps of different layers are fused so that the algorithm can detect objects at different scales; lower-level feature maps are connected to higher-level feature maps through cross-layer connections.
Step four: bounding box prediction: for each grid cell, the bounding box position relative to the whole image is calculated from the parameters predicted by the network, including coordinates, width, and height. Suppose a grid cell predicts two bounding boxes, A and B: the center of box A lies at (100, 200) in the image, with width 50 and height 30; the center of box B lies at (300, 150), with width 40 and height 20;
Step five: class prediction: for each grid cell, the object class is determined from the class probabilities predicted by the network. Suppose a grid cell predicts the class probabilities car: 0.9, pedestrian: 0.7, bicycle: 0.6;
Step six: non-maximum suppression: since the same object may be detected by several grid cells, the bounding boxes are filtered with the non-maximum suppression (NMS) algorithm to avoid duplicate detections. A threshold is set so that boxes whose intersection-over-union exceeds 0.5 are treated as duplicates, and only the box with the highest confidence is kept. After NMS, bounding box A is retained: the YOLO algorithm has detected a car whose box is centered at (100, 200) in the image, with width 50 and height 30, class car, and confidence 0.9. The YOLO algorithm thus detects objects of multiple classes in an image and obtains their position and class information; by processing successive image frames, real-time target detection and tracking is achieved.
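The IoU test and greedy filtering of step six can be sketched as follows; the `(center x, center y, width, height)` box format matches the bounding-box example above, and the duplicate and distant detections are invented for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two (cx, cy, w, h) boxes."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0


def nms(detections, iou_threshold=0.5):
    """Greedy non-maximum suppression over (box, confidence) pairs.

    Repeatedly keeps the highest-confidence detection and discards any
    remaining detection whose IoU with it exceeds the threshold.
    """
    remaining = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [d for d in remaining if iou(best[0], d[0]) <= iou_threshold]
    return kept
```

With bounding box A from the example (centered at (100, 200), width 50, height 30, confidence 0.9), a slightly shifted duplicate is suppressed while a distant, non-overlapping box survives.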
The path planning and decision unit includes:
a behavior intention classification algorithm: analyzes the motion features and context information of the target vehicle using deep learning, and classifies and predicts its behavior intention, such as deceleration, braking, or lane changing;
a collision risk assessment algorithm: estimates the collision risk between the target vehicle and an occluding object based on their distance, relative speed, and other information;
a path generation module: re-plans the path according to the behavior intention of the target vehicle and the positions of the occluding objects;
a decision and control module: formulates a control strategy, such as adjusting acceleration or steering angle, according to the path planning result and the current vehicle state.
In the collision risk assessment algorithm, the collision risk between the target vehicle and an occluding object is assessed using the least squares method, comprising the following steps:
Step one: distance and time modeling: the change of the distance between the target vehicle and the occluding object over time is modeled from the relevant data about the two;
Step two: collision risk assessment: a model of distance as a function of time is fitted by least squares, and the residuals between predicted and observed values are calculated, with larger residuals indicating higher collision risk;
Step three: solving the optimization problem: the collision risk assessment is cast as an optimization problem, and the optimal model parameters are found by minimizing the sum of squared residuals.
For example, given N data samples, where the actual observed value of the i-th sample is y_i and the model's prediction for that sample is f(x_i), the loss function of the least squares method is expressed as:
L(w,b)=Σ(y_i-f(x_i))^2
where Σ denotes summation over all samples, y_i is the actual observed value, f(x_i) is the model's prediction for sample x_i, and w and b are the model parameters. The parameters w and b are adjusted to minimize the loss function, so that the gap between predicted and observed values is as small as possible. The optimal model parameters found through this least squares optimization are then used to assess the collision risk between the target vehicle and the occluding object, helping to formulate a safe driving strategy such as decelerating, braking, or avoiding the occluding object.
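For a linear distance model d(t) = w·t + b, the parameters that minimize L(w, b) have a closed form; the sketch below fits the model and reports the residuals whose squared sum is minimized. The time-to-zero-distance helper is an illustrative addition, not part of the described algorithm:

```python
def fit_distance_model(times, distances):
    """Closed-form least-squares fit of d(t) = w * t + b.

    Returns the fitted slope w, intercept b, and the residuals
    y_i - f(x_i) whose squared sum the method minimizes.
    """
    n = len(times)
    mean_t = sum(times) / n
    mean_d = sum(distances) / n
    cov = sum((t - mean_t) * (d - mean_d) for t, d in zip(times, distances))
    var = sum((t - mean_t) ** 2 for t in times)
    w = cov / var
    b = mean_d - w * mean_t
    residuals = [d - (w * t + b) for t, d in zip(times, distances)]
    return w, b, residuals


def time_to_zero_distance(w, b):
    """Extrapolate when the fitted gap reaches zero; only meaningful
    while the gap is closing (w < 0)."""
    return -b / w if w < 0 else float("inf")
```

A gap shrinking from 30 m at 5 m/s, sampled at t = 0, 1, 2, 3 s, fits exactly (all residuals zero) and extrapolates to zero distance at t = 6 s.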
To further illustrate the above examples, the invention also provides an embodiment of a method for processing a dynamic travel track, comprising the following steps:
Step one: acquiring sensor data: data about the vehicle's surroundings is acquired from sensors such as radar and cameras, including the position, speed, acceleration, and other information of the target vehicle and of other occluding objects;
Step two: target detection and tracking: the target vehicle and other occluding objects are detected and tracked with a target detection and tracking algorithm based on radar or camera data, and their positions and motion states are obtained at successive time steps;
Step three: collision risk assessment: the collision risk between the target vehicle and the occluding objects is estimated from their relative position, speed, and other information, and a collision probability or risk index is calculated;
Step four: path planning: based on the collision risk assessment and the current vehicle state, a safe path is planned that avoids areas of possible collision with the target vehicle;
Step five: decision and control: according to the path planning result and the current vehicle state, a corresponding decision strategy is formulated, such as adjusting acceleration or steering angle, so as to avoid the moving target vehicle.
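A toy version of the decision step might map the risk estimate to an action as follows; the thresholds, action names, and time-to-collision rule are illustrative assumptions, not part of the claimed method:

```python
def decide(distance, closing_speed, safe_gap=10.0, horizon=3.0):
    """Map a simple collision-risk estimate to a control action.

    distance: current gap to the occluding object or target vehicle (m)
    closing_speed: rate at which the gap shrinks (m/s); <= 0 means opening
    """
    if closing_speed <= 0:
        return "keep"                  # gap steady or opening: no action needed
    ttc = distance / closing_speed     # naive time to collision
    if ttc < horizon:
        return "brake"                 # collision imminent within the horizon
    if distance < safe_gap:
        return "decelerate"            # close but not yet urgent
    return "keep"
```

A real decision and control module would of course blend such a rule with the planned path and the vehicle dynamics rather than act on a single scalar gap.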
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or modification of the technical solution and its inventive concept made by a person skilled in the art within the scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (8)

1. A dynamic travel track processing system, comprising:
a target detection and tracking unit: detects and tracks the position, speed, and acceleration of the vehicle from sensor data, based on a radar target detection and tracking algorithm;
a behavior intention recognition unit: analyzes the behavior intention of the vehicle, including deceleration, braking, and lane changing, and classifies and predicts the behavior of the vehicle by combining its motion features and context information;
an occlusion detection and recognition unit: detects and identifies occluding objects on the road, including static obstacles and moving objects, and identifies their position, size, and type through the processing and analysis of sensor data;
a path planning and decision unit: re-plans the travel path according to the vehicle's behavior intention and the positions of the occluding objects.
2. The dynamic travel track processing system as claimed in claim 1, wherein the target detection and tracking unit includes:
a target detection and identification module: detects regions in the sensor data that may be vehicles, and identifies and classifies each candidate to determine whether it is a vehicle;
a target tracking module: tracks the target vehicle in successive image frames and estimates its position, size, and motion state information;
a motion estimation module: estimates the motion state of the vehicle, including speed and acceleration, from the continuous tracking results;
a horizontal distance measurement module: measures the horizontal distance between the target vehicle and other targets, including occluding objects and road edges, for path planning and decision making.
3. The dynamic travel track processing system as claimed in claim 2, wherein the behavior intention recognition unit comprises:
a data preprocessing module: for preprocessing the sensor data, including filtering, noise reduction, and calibration, in preparation for feature extraction and for training the behavior intention recognition model;
a feature extraction module: for extracting useful features from the preprocessed data to describe the motion characteristics and context information of the vehicle;
a behavior classification and prediction model: constructed with deep learning techniques and trained on the extracted features;
a context modeling module: since the prediction of behavior intention may be affected by the vehicle's surroundings, for modeling the behavior of the target vehicle in association with context information such as other vehicles, traffic lights, and road signs.
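As a rough illustration of the preprocessing and feature extraction modules, the sketch below applies a moving-average filter and builds a small feature vector. The particular filter and the chosen features (mean speed, speed trend, heading change) are hypothetical examples, not taken from the patent:

```python
def moving_average(signal, window):
    """Simple noise-reduction filter for the data preprocessing module
    (a stand-in for the filtering step; real systems may use Kalman or
    low-pass filters instead)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)          # truncated window at the start
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def extract_features(speeds, headings):
    """Build a feature vector describing the vehicle's motion:
    mean speed, speed trend, and total heading change.
    These are illustrative features for intent recognition."""
    mean_v = sum(speeds) / len(speeds)
    trend = speeds[-1] - speeds[0]           # negative when decelerating
    heading_change = headings[-1] - headings[0]
    return [mean_v, trend, heading_change]

# A vehicle slowing down while drifting in heading (radians)
filtered = moving_average([2.0, 4.0, 6.0], window=2)
features = extract_features([10.0, 8.0, 6.0], [0.0, 0.1, 0.2])
```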
4. The dynamic travel track processing system as claimed in claim 3, wherein the behavior classification and prediction model adopts a support vector machine algorithm comprising the following steps:
S01: data preparation: features related to behavior intention, including the speed, acceleration, and heading angle of the vehicle, are extracted from the sensor data and paired with the corresponding behavior labels to form a training data set.
S02: feature conversion and mapping: for data that is not linearly separable, the features are mapped into a high-dimensional space through a kernel function.
S03: support vector machine training: the optimal hyperplane is determined from the training data set by maximizing the margin, thereby classifying the data; the sample points closest to the classification boundary are selected as support vectors, and the hyperplane parameters are obtained by solving an optimization problem.
S04: behavior classification and prediction: new test samples are classified and predicted with the trained SVM model; each test sample is mapped into the high-dimensional feature space of the training data, and its behavior category is judged from its position relative to the hyperplane.
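The steps above can be illustrated with a minimal linear SVM trained by hinge-loss subgradient descent. The kernel mapping of S02 is omitted (a linear kernel is assumed), and the feature values and labels ("braking" vs. "cruising") are invented toy data, not the patent's training set:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Minimal linear SVM: maximize the margin (step S03) by subgradient
    descent on the regularized hinge loss. Labels must be +1 or -1."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score < 1:     # sample inside the margin: hinge-loss step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                  # outside the margin: only regularization
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Step S04: classify a sample by its side of the hyperplane."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical features: [speed m/s, acceleration m/s^2]
# +1 = "braking" intent, -1 = "cruising"
X = [[5.0, -3.0], [6.0, -2.5], [4.0, -4.0],
     [20.0, 0.5], [22.0, 0.0], [18.0, 1.0]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

For the nonlinearly separable case of S02, the same training scheme is applied after replacing the dot product with a kernel function.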
5. The dynamic travel track processing system as claimed in claim 4, wherein the obstruction detection and identification unit comprises:
an object detection module: for detecting regions in the sensor data that may contain an obstruction;
an object classification module: for classifying a detected obstruction region to determine its type;
an object tracking module: for tracking the obstruction across successive image frames and estimating its position, size, and motion state;
an obstruction attribute analysis module: for performing further attribute analysis on each detected obstruction, analyzing its shape, speed, and acceleration through motion estimation and trajectory analysis techniques.
6. The dynamic travel track processing system as claimed in claim 5, wherein the target detection and identification module and the object detection module recognize the target vehicle or the obstruction with the YOLO algorithm, comprising the following steps:
S01: network input processing: the input image is divided into a grid of fixed size; each grid cell is responsible for detecting an object and predicts several bounding boxes together with class confidences;
S02: feature extraction: a pre-trained convolutional neural network serves as the feature extractor, drawing rich feature representations from the original image through its convolution and pooling layers;
S03: feature fusion: feature maps from different layers are fused so that the algorithm can detect objects at different scales, with lower-level feature maps linked to higher-level ones through cross-layer connections.
S04: bounding box prediction: for each grid cell, the bounding box position relative to the whole image is computed from the parameters predicted by the network, including its coordinates, width, and height;
S05: class prediction: for each grid cell, the object category is determined from the class probabilities predicted by the network;
S06: non-maximum suppression: since the same object may be detected by multiple grid cells, the bounding boxes are screened with the non-maximum suppression (NMS) algorithm to avoid duplicate detections; boxes are filtered against a set threshold and the bounding box with the highest confidence is retained.
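Step S06 can be sketched directly. The greedy NMS below, with boxes given as (x1, y1, x2, y2) corners and a 0.5 IoU threshold, is a generic illustration of the screening the claim describes, not the patent's specific parameters:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-
    confidence box and drop boxes overlapping it above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of one vehicle plus a distant one
kept = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)],
           [0.9, 0.8, 0.7])  # keeps boxes 0 and 2
```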
7. The dynamic travel track processing system as claimed in claim 6, wherein the path planning and decision unit comprises:
a behavior intention classification algorithm: for analyzing the motion characteristics and context information of the target vehicle with a deep learning method, and classifying and predicting its behavior intention, including deceleration, braking, and lane changing;
a collision risk assessment algorithm: for estimating the collision risk between the target vehicle and the obstruction from their distance and relative speed;
a path generation module: for re-planning the path according to the behavior intention of the target vehicle and the position of the obstruction;
a decision and control module: for formulating a control strategy, including adjustments of acceleration and steering angle, from the path planning result and the current vehicle state.
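A toy version of the collision risk assessment and decision logic, using time-to-collision derived from the distance and relative speed the claim mentions. The 3 s TTC limit and the two-action policy are assumptions for illustration only:

```python
def time_to_collision(distance, relative_speed):
    """Time-to-collision in seconds, or None when the gap is not closing.
    relative_speed > 0 means the ego vehicle is closing on the obstruction."""
    if relative_speed <= 0:
        return None
    return distance / relative_speed

def decide(distance, relative_speed, ttc_limit=3.0):
    """Toy decision rule for the decision and control module: decelerate
    when TTC falls below a hypothetical 3 s limit, otherwise keep the path."""
    ttc = time_to_collision(distance, relative_speed)
    if ttc is not None and ttc < ttc_limit:
        return "decelerate"
    return "keep_path"

# 12 m gap closing at 6 m/s -> TTC of 2 s, below the limit
action = decide(distance=12.0, relative_speed=6.0)  # "decelerate"
```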
8. The dynamic travel track processing system as claimed in claim 7, wherein the collision risk assessment algorithm evaluates the collision risk between the target vehicle and the obstruction by the least squares method, comprising the following steps:
S01: distance and time modeling: the change over time of the distance between the target vehicle and the obstruction is modeled from the measured data relating the two;
S02: collision risk assessment: the distance-versus-time model is fitted by the least squares method, and the residuals between the predicted and actually observed values are computed, where larger residuals indicate a higher collision risk;
S03: optimization: the collision risk assessment is formulated as an optimization problem, and the optimal model parameters are found by minimizing the sum of squared residuals.
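Steps S01 to S03 can be sketched with a closed-form linear least-squares fit of distance against time. The linear model d(t) = a·t + b is an assumption for illustration; the patent does not fix the model form:

```python
def fit_line(ts, ds):
    """Ordinary least squares fit d(t) = a*t + b (closed form), modelling
    how the gap to the obstruction changes with time (steps S01/S03).
    Minimizing the sum of squared residuals gives the slope and intercept."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_d = sum(ds) / n
    sxx = sum((t - mean_t) ** 2 for t in ts)
    sxy = sum((t - mean_t) * (d - mean_d) for t, d in zip(ts, ds))
    a = sxy / sxx                      # slope: closing rate in m/s
    b = mean_d - a * mean_t            # intercept: initial gap in m
    return a, b

def residuals(ts, ds, a, b):
    """Residuals between observed and fitted distances (step S02);
    large residuals flag erratic closing behaviour, hence higher risk."""
    return [d - (a * t + b) for t, d in zip(ts, ds)]

# Gap to the obstruction shrinking steadily by 2 m per second from 20 m
ts = [0.0, 1.0, 2.0, 3.0]
ds = [20.0, 18.0, 16.0, 14.0]
a, b = fit_line(ts, ds)                # a = -2.0 m/s, b = 20.0 m
```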
CN202311555834.4A 2023-11-21 2023-11-21 Dynamic travel track processing system and method Pending CN117555333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311555834.4A CN117555333A (en) 2023-11-21 2023-11-21 Dynamic travel track processing system and method

Publications (1)

Publication Number Publication Date
CN117555333A true CN117555333A (en) 2024-02-13

Family

ID=89816366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311555834.4A Pending CN117555333A (en) 2023-11-21 2023-11-21 Dynamic travel track processing system and method

Country Status (1)

Country Link
CN (1) CN117555333A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006168628A * 2004-12-17 2006-06-29 Daihatsu Motor Co Ltd Collision prevention support method and device
CN106875424A * 2017-01-16 2017-06-20 西北工业大学 Machine-vision-based behavior recognition method for vehicles driving in urban environments
CN108146503A * 2016-12-05 2018-06-12 福特全球技术公司 Vehicle collision avoidance
CN108604292A * 2015-11-26 2018-09-28 御眼视觉技术有限公司 Automatic prediction of, and altruistic response to, a vehicle cutting into a lane
CN111104969A (en) * 2019-12-04 2020-05-05 东北大学 Method for pre-judging collision possibility between unmanned vehicle and surrounding vehicle
CN111409639A (en) * 2020-04-07 2020-07-14 北京理工大学 Main vehicle network connection cruise control method and system
CN111746559A (en) * 2020-07-02 2020-10-09 湖北汽车工业学院 Method and system for predicting lane changing intention of front vehicle
CN112053589A (en) * 2020-08-18 2020-12-08 北京航空航天大学 Target vehicle lane changing behavior adaptive identification model construction method
CN112052802A (en) * 2020-09-09 2020-12-08 上海工程技术大学 Front vehicle behavior identification method based on machine vision
CN112896188A (en) * 2021-02-22 2021-06-04 浙江大学 Automatic driving decision control system considering front vehicle encounter
CN113313154A (en) * 2021-05-20 2021-08-27 四川天奥空天信息技术有限公司 Integrated multi-sensor integrated automatic driving intelligent sensing device
CN114291116A (en) * 2022-01-24 2022-04-08 广州小鹏自动驾驶科技有限公司 Method and device for predicting track of surrounding vehicle, vehicle and storage medium
CN115027497A (en) * 2022-06-20 2022-09-09 重庆长安汽车股份有限公司 Target vehicle cut-in intention prediction method and readable storage medium
CN115179959A (en) * 2022-07-18 2022-10-14 福州大学 Intelligent driving vehicle behavior prediction method based on self-adaptive updating threshold of driving road
CN115817470A (en) * 2022-12-28 2023-03-21 毫末智行科技有限公司 Method and apparatus for predicting lane-changing intention of straight lane of vehicle, and computer storage medium
CN115995163A (en) * 2023-03-23 2023-04-21 江西通慧科技集团股份有限公司 Vehicle collision early warning method and system
CN116118777A (en) * 2023-02-09 2023-05-16 同济大学 Integrated multifunctional advanced automatic driving auxiliary system
CN116811916A (en) * 2023-07-05 2023-09-29 江苏安必行无线科技有限公司 Automatic driving system based on 5G vehicle road cooperation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG SHUO; WANG YANSONG; WANG XIAOLAN: "Vehicle behavior detection method based on a hybrid CNN-LSTM model", Intelligent Computer and Applications, no. 02, 1 February 2020 (2020-02-01) *

Similar Documents

Publication Publication Date Title
Bila et al. Vehicles of the future: A survey of research on safety issues
CN112700470B (en) Target detection and track extraction method based on traffic video stream
US9767368B2 (en) Method and system for adaptive ray based scene analysis of semantic traffic spaces and vehicle equipped with such system
Kumar et al. Framework for real-time behavior interpretation from traffic video
Gandhi et al. Pedestrian protection systems: Issues, survey, and challenges
Song et al. Vehicle behavior analysis using target motion trajectories
Zhang et al. Prediction of pedestrian-vehicle conflicts at signalized intersections based on long short-term memory neural network
Chang et al. Onboard measurement and warning module for irregular vehicle behavior
Mithun et al. Video-based tracking of vehicles using multiple time-spatial images
CN112487905B (en) Method and system for predicting danger level of pedestrian around vehicle
Kim Multiple vehicle tracking and classification system with a convolutional neural network
CN107031661A (en) A kind of lane change method for early warning and system based on blind area camera input
Zhang et al. A framework for turning behavior classification at intersections using 3D LIDAR
CN113658427A (en) Road condition monitoring method, system and equipment based on vision and radar
Chavez-Garcia Multiple sensor fusion for detection, classification and tracking of moving objects in driving environments
CN114693909A (en) Microcosmic vehicle track sensing equipment based on multi-sensor machine vision fusion
He et al. Deep learning based geometric features for effective truck selection and classification from highway videos
Saravanarajan et al. Car crash detection using ensemble deep learning
Quinn et al. Traffic flow monitoring in crowded cities
Zhao et al. Improving autonomous vehicle visual perception by fusing human gaze and machine vision
da Silva Bastos et al. Vehicle speed detection and safety distance estimation using aerial images of Brazilian highways
CN117555333A (en) Dynamic travel track processing system and method
Khan et al. Multiple moving vehicle speed estimation using Blob analysis
Sathiya et al. Probabilistic collision estimation for tracked vehicles based on corner point self-activation approach
Alam et al. Deep Learning envisioned accident detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination