CN115861972A - Collision detection method and device, electronic equipment and storage medium - Google Patents

Info

Publication number: CN115861972A
Application number: CN202211667015.4A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Prior art keywords: trailer, obstacle, motion, collision, vehicle
Inventors: 李建磊, 刘羿
Current and original assignee: Beijing Sinian Zhijia Technology Co., Ltd. (the listed assignee may be inaccurate)
Application filed by Beijing Sinian Zhijia Technology Co., Ltd.
Priority: CN202211667015.4A
Original language: Chinese (zh)

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application provides a collision detection method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring the motion trajectory of at least one obstacle relative to the current vehicle; establishing a trailer motion model for simulating the motion trajectories of a tractor and a trailer; performing feature matching on the at least one obstacle and the trailer motion model to acquire at least one matched target obstacle, wherein each target obstacle represents a trailer; and performing labeling processing on each target obstacle to obtain a collision area. The method and device can predict the collision area of a trailer during motion.

Description

Collision detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of unmanned driving technologies, and in particular, to a collision detection method and apparatus, an electronic device, and a storage medium.
Background
Recognizing information such as obstacles and effectively avoiding collisions are necessary conditions for autonomous driving of a vehicle. Motion-attitude prediction and collision detection for trailers mainly present the following difficulties:
First, a trailer (articulated vehicle) generally consists of two parts, a tractor portion and a trailer portion, so predicting its motion attitude and trajectory is difficult. Second, trailers and articulated vehicles are mostly large, higher-risk vehicles, so collision detection against them as obstacles must be accurate. CN114661039A, "Logistics vehicle and trailer pose determining and driving method thereof", provides a method that obtains the trailer's pose and chassis information from the ego vehicle's sensors, feeds this information into a logistics-vehicle kinematics model to obtain the trailer pose, and establishes a drivable area of the ego vehicle from the tractor and trailer for collision detection. That method mainly addresses collision and trailer-pose determination when the ego vehicle itself is a trailer; it provides no recognition or collision-detection mechanism that treats an obstacle as a trailer or articulated vehicle. CN113329927A proposes a lidar-based trailer-tracking method that acquires trailer-attitude data with lidar; it is mainly used to detect the trailer attitude for tractor alignment or similar purposes and cannot recognize the overall motion attitude of a trailer or articulated vehicle.
Disclosure of Invention
In view of this, embodiments of the present application provide a collision detection method, an apparatus, an electronic device, and a storage medium, which can predict the collision area of a trailer during motion.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a collision detection method, including the following steps:
acquiring a motion track of at least one obstacle relative to a current vehicle;
establishing a trailer motion model, wherein the trailer motion model is used for simulating the motion tracks of a tractor and a trailer;
performing feature matching on the at least one obstacle and the trailer motion model, and acquiring at least one matched target obstacle, wherein each target obstacle in the at least one target obstacle represents a trailer;
and performing labeling processing on each target obstacle to obtain a collision area.
In one possible embodiment, the acquiring the motion trajectory of the at least one obstacle relative to the current vehicle includes:
acquiring point cloud information of the at least one obstacle through a vehicle-mounted radar sensor, wherein the point cloud information takes a current vehicle as a reference object;
calibrating the outline of the at least one obstacle through a rectangular calibration frame based on the point cloud information to obtain at least one rectangular calibration frame;
and acquiring the motion trail of each rectangular calibration frame in the at least one rectangular calibration frame relative to the current vehicle.
In a possible implementation, the obtaining of the motion trajectory of each of the at least one rectangular calibration frame with respect to the current vehicle includes:
for each rectangular calibration frame, acquiring the rectangular calibration frames at two consecutive moments, and taking the intersection point of the center lines of the calibration frames at the two moments as the centroid, wherein the center line is the center line along the motion direction;
and continuously obtaining the centroid positions of the rectangular calibration frame at n moments, and taking the centroid positions of the n moments as the motion trail, wherein n is a positive integer.
In one possible embodiment, the establishing the trailer motion model includes:
establishing a planar rectangular coordinate system, wherein the speed of the tractor is v, the included angle between the motion direction of the tractor and the horizontal axis is θ, the trailer is hinged to the tractor through a hinge point, the included angle between the motion direction of the trailer and the horizontal axis is α, the length of the trailer is L, and the included angle between the extension line of the trailer's motion direction from the hinge point and the tractor is β;
determining the movement step of the tractor as S0 = v·t0, where t0 is the motion sampling period, t2 = t1 + t0, t1 is the start time of a sampling period, and t2 is the end time of the sampling period; then:
θ' = θt2 − θt1;
α' = αt2 − αt1;
and establishing the trailer motion model:
β' = S0·sin(θt1)/L + θ';
β' = θ' − α';
where θ' represents the change in θ, α' represents the change in α, and β' represents the change in β.
In a possible embodiment, the feature matching of the at least one obstacle with the trailer motion model and acquiring at least one matched target obstacle includes:
detecting whether two intersecting obstacles exist among the at least one obstacle at the same moment;
when two intersecting obstacles exist, acquiring the center lines of the two intersecting obstacles along the track direction respectively, taking the intersection point of the two center lines as a hinge point, and treating the two intersecting obstacles as one target obstacle.
In a possible implementation, the labeling of the collision region of each target obstacle to obtain a collision region includes:
performing convex-hull processing on the target obstacle to obtain a polygon that envelops the target obstacle, wherein the polygon is a convex polygon.
In one possible embodiment, the method further comprises:
and, based on the collision area, planning the driving route of the current vehicle; or displaying the collision area on a vehicle-mounted navigation interface and issuing an alarm when the distance between the current vehicle and the collision area is less than a preset distance threshold.
In a second aspect, an embodiment of the present application further provides a collision detection apparatus, where the apparatus includes:
the acquisition module is used for acquiring the motion trail of at least one obstacle relative to the current vehicle;
the system comprises an establishing module, a judging module and a control module, wherein the establishing module is used for establishing a trailer motion model, and the trailer motion model is used for simulating the motion tracks of a tractor and a trailer;
the matching module is used for carrying out feature matching on the at least one obstacle and the trailer motion model and acquiring at least one matched target obstacle, wherein each target obstacle in the at least one target obstacle represents a trailer;
and the labeling module is used for performing labeling processing on each target obstacle to obtain a collision area.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when an electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to execute the collision detection method according to any one of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the collision detection method according to any one of the first aspect.
The embodiment of the application has the following beneficial effects:
and extracting the motion trail, contour characteristics and pose state information of each obstacle through the obstacle point cloud information acquired by the radar. Through the kinematics model of the trailer, the trailer vehicle in the barrier is extracted, the outer contour for collision detection of the trailer is obtained according to the extracted characteristics of the trailer vehicle, the self-vehicle collision contour of each track point is generated through the self-vehicle track, and then the intersection check is carried out on the trailer contour and the contour of each track point of the self-vehicle, so that whether the self-vehicle collides with the trailer or not and the position point of the self-vehicle when the self-vehicle collides with the trailer or not is judged. The towed vehicle in the obstacle detected by the automatically driven vehicle can be effectively identified; and through handling the outline with trailer and trailer in order to obtain the collision outline of trailer, promoted and collided the detection effect in trailer large vehicle blind area part.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 is a schematic flowchart of steps S101-S104 provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of steps S1011 to S1013 provided in the embodiment of the present application;
FIG. 3 is a schematic flowchart of steps S10131-S10132 provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of steps S1031 to S1032 provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a collision detection device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a centroid location provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a trailer motion model provided by an embodiment of the present application;
FIG. 9 is a schematic view of a hinge point provided by an embodiment of the present application;
fig. 10 is a schematic diagram of a collision region labeling provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and that steps without logical context may be reversed in order or performed concurrently. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order; where permitted, the specific order or sequence may be interchanged, so that the embodiments of the application described herein can be practiced in an order other than that shown or described.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application and is not intended to be limiting of the application.
Referring to fig. 1, fig. 1 is a schematic flow chart of steps S101 to S104 of a collision detection method provided in an embodiment of the present application, and will be described with reference to steps S101 to S104 shown in fig. 1.
Step S101, obtaining the motion track of at least one obstacle relative to the current vehicle;
step S102, establishing a trailer movement model, wherein the trailer movement model is used for simulating movement tracks of a tractor and a trailer;
step S103, performing feature matching on the at least one obstacle and the trailer motion model, and acquiring at least one matched target obstacle, wherein each target obstacle in the at least one target obstacle represents a trailer;
and step S104, performing labeling processing on each target obstacle to obtain a collision area.
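As a rough illustration (not part of the patent text), steps S101-S104 can be sketched as a short pipeline; the function names and obstacle fields below are hypothetical, since the patent does not specify an implementation:

```python
def detect_collision_regions(obstacles, is_trailer, label_region):
    """Sketch of S101-S104. Each obstacle dict carries a precomputed
    'track' (S101); is_trailer applies the trailer motion model to decide
    whether an obstacle pair moves like a tractor-trailer (S102/S103);
    label_region marks the collision area of a matched target (S104)."""
    targets = [ob for ob in obstacles if is_trailer(ob)]   # S103: feature matching
    return [(ob["id"], label_region(ob)) for ob in targets]  # S104: labeling
```

A caller would supply `is_trailer` built from the kinematic model of step S102 and `label_region` from the convex-envelope processing of step S104.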
According to the collision detection method, the motion trajectory, contour features, and pose information of each obstacle are extracted from the obstacle point cloud acquired by the radar. Using the trailer kinematics model, trailer vehicles among the obstacles are identified, and the outer contour used for the trailer's collision detection is obtained from the extracted trailer-vehicle features. The ego vehicle's collision contour at each trajectory point is then generated from the ego trajectory, and the trailer contour is intersection-checked against the ego contour at each trajectory point, so as to judge whether the ego vehicle collides with the trailer and, if so, at which position point. Trailers among the obstacles detected by the autonomous vehicle can thus be effectively recognized; and by processing the outer contours of the tractor and trailer into a single collision contour, the detection effect in the blind-spot region of large trailer vehicles is improved.
The above exemplary steps of the embodiments of the present application will be described below.
In step S101, a movement locus of at least one obstacle with respect to the current vehicle is acquired.
In some embodiments, referring to fig. 2, fig. 2 is a schematic flowchart of steps S1011 to S1013 provided in an embodiment of the present application, and step S101 shown in fig. 1 may be implemented by steps S1011 to S1013, which will be described in conjunction with the steps.
In step S1011, point cloud information of the at least one obstacle is obtained by the vehicle-mounted radar sensor, wherein the point cloud information uses the current vehicle as a reference.
In step S1012, the contour of the at least one obstacle is calibrated by a rectangular calibration frame based on the point cloud information, so as to obtain at least one rectangular calibration frame.
In step S1013, a motion trajectory of each of the at least one rectangular calibration frame with respect to the current vehicle is acquired.
In some embodiments, referring to fig. 3, fig. 3 is a schematic flowchart of steps S10131-S10132 provided in this application, and step S1013 shown in fig. 2 can be implemented by steps S10131-S10132, which will be described in conjunction with the steps.
In step S10131, for each rectangular calibration frame, rectangular calibration frames at two consecutive time instants are acquired, and an intersection of center lines of the rectangular calibration frames at the two time instants is taken as a centroid, wherein the center line is a center line along the movement direction.
In step S10132, the centroid positions at n times of the rectangular calibration frame are continuously obtained, and the centroid positions at n times are taken as the motion trajectory, where n is a positive integer.
Here, first, point cloud information of an obstacle, which is coordinates with respect to the own vehicle position, is acquired from the radar sensor.
Second, the contour of the obstacle is calibrated with a rectangular calibration frame, which must be the minimum rectangle enclosing the obstacle's point cloud; the contour of the rectangular calibration frame is represented by the four vertices of the rectangle, and the calibration frame is obtained by connecting the four vertices in sequence.
Finally, the centroid position of each rectangular calibration frame, that is, the centroid position of each obstacle, is obtained, referring to fig. 7, and fig. 7 is a schematic diagram of the centroid position provided by the embodiment of the present application. One of the obstacles is illustrated schematically.
1) Acquire the rectangular calibration frames at two consecutive moments and their center lines (along the motion direction); the intersection point of the center lines at the two consecutive moments is the centroid position of the rectangular calibration frame.
2) By analogy, the centroid positions at n consecutive moments, i.e., the motion trajectory of the obstacle, can be obtained.
3) The orientation and position information of the obstacle can be acquired from the orientation of the ego vehicle and the orientation of the calibration frame; the position and orientation of the ego vehicle are obtained from localization information.
4) The motion trajectory and orientation information of each detected obstacle are acquired according to steps 1)-3).
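Steps 1) and 2) amount to intersecting the center lines of a calibration frame at consecutive instants; a minimal sketch, under the hypothetical representation of each frame as a (center point, direction vector) pair:

```python
def line_intersection(p1, d1, p2, d2):
    """Intersection of two lines given as point + direction; None if parallel."""
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-9:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / cross
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def trajectory(frames):
    """Centroid at each step = intersection of the center lines of the
    calibration frame at two consecutive moments (step 1); the sequence
    of centroids over n moments is the motion trajectory (step 2)."""
    return [line_intersection(*frames[i], *frames[i + 1])
            for i in range(len(frames) - 1)]
```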
In step S102, a trailer motion model is created, wherein the trailer motion model is used to simulate the motion trajectory of the tractor and the trailer.
In some embodiments, the establishing the trailer motion model includes:
establishing a planar rectangular coordinate system, wherein the speed of the tractor is v, the included angle between the motion direction of the tractor and the horizontal axis is θ, the trailer is hinged to the tractor through a hinge point, the included angle between the motion direction of the trailer and the horizontal axis is α, the length of the trailer is L, and the included angle between the extension line of the trailer's motion direction from the hinge point and the tractor is β;
determining the motion step of the tractor as S0 = v·t0, where t0 is the motion sampling period, t2 = t1 + t0, t1 is the start time of a sampling period, and t2 is the end time of the sampling period; then:
θ' = θt2 − θt1;
α' = αt2 − αt1;
and establishing the trailer motion model:
β' = S0·sin(θt1)/L + θ';
β' = θ' − α';
where θ' represents the change in θ, α' represents the change in α, and β' represents the change in β.
Referring to fig. 8, fig. 8 is a schematic diagram of the trailer motion model provided in an embodiment of the present application. As shown in the drawing, in a planar rectangular coordinate system with horizontal axis X and vertical axis Y, the speed of the tractor is v, the included angle between the motion direction of the tractor and the horizontal axis is θ, the trailer is hinged to the tractor through a hinge point, the included angle between the motion direction of the trailer and the horizontal axis is α, the length of the trailer is L, and the included angle between the extension line of the trailer's motion direction from the hinge point and the tractor is β.
The motion step of the tractor is determined as S0 = v·t0, where t0 is the motion sampling period, t2 = t1 + t0, t1 is the start time of a sampling period, and t2 is the end time of the sampling period; then:
θ' = θt2 − θt1;
α' = αt2 − αt1;
The motion and pose information of the two obstacles satisfy the following two formulas:
β' = S0·sin(θt1)/L + θ';
β' = θ' − α';
where θ' represents the change in θ, α' represents the change in α, and β' represents the change in β.
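One way to use the two model equations above for matching is to test whether a pair of observed headings evolves consistently with them over one sampling step. In the sketch below, beta denotes the hinge angle (the third angle of the model, distinct from θ and α); the function name, the tolerance, and the sign conventions are assumptions, as the translated equations leave them ambiguous:

```python
import math

def satisfies_trailer_model(theta_t1, theta_t2, alpha_t1, alpha_t2,
                            v, t0, L, tol=1e-3):
    """Check whether headings theta (tractor) and alpha (trailer of
    length L) observed at t1 and t2 = t1 + t0 satisfy the two model
    equations: beta' = S0*sin(theta_t1)/L + theta' and beta' = theta' - alpha'."""
    s0 = v * t0                      # movement step S0 = v * t0
    d_theta = theta_t2 - theta_t1    # theta'
    d_alpha = alpha_t2 - alpha_t1    # alpha'
    d_beta_model = s0 * math.sin(theta_t1) / L + d_theta
    d_beta_observed = d_theta - d_alpha
    return abs(d_beta_model - d_beta_observed) < tol
```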
In step S103, performing feature matching on the at least one obstacle and the trailer motion model, and obtaining at least one matched target obstacle, where each target obstacle in the at least one target obstacle represents a trailer.
In some embodiments, referring to fig. 4, fig. 4 is a schematic flowchart of steps S1031 to S1032 provided in the embodiments of the present application, and step S103 shown in fig. 1 may be implemented by steps S1031 to S1032, which will be described with reference to the steps.
In step S1031, it is detected whether there are two intersecting obstacles in the at least one obstacle at the same time.
In step S1032, when there are two intersecting obstacles, the center lines of the two intersecting obstacles in the track direction are obtained, respectively, and the intersection point of the two center lines is used as a hinge point, and the two intersecting obstacles are used as a target obstacle.
Here, it is detected whether two obstacles intersect at the same moment, i.e., whether their rectangular calibration frames intersect (determined edge by edge, each edge being the line segment connecting two consecutive vertices).
If the two obstacles intersect, their hinge point is acquired as follows: the center lines of the two intersecting obstacles along the track direction are obtained, and the intersection point of the two center lines is the hinge point; see fig. 9, a schematic diagram of the hinge point provided by an embodiment of the present application, where the middle intersection point of fig. 9 is the hinge point of the two obstacles.
The obstacles are matched pairwise, and whether each pair satisfies the kinematic model of the trailer is judged; if so, the pair is marked as a trailer.
Finally, the above steps are repeated until all obstacles have been matched and traversed.
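The pairwise frame-intersection test described above can be sketched with standard orientation-based segment checks. Treating only strictly crossing edges as an intersection is an assumption; the patent does not say how touching frames are handled:

```python
def _cross(o, p, q):
    # z-component of (p - o) x (q - o)
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def segments_intersect(a1, a2, b1, b2):
    """True if segment a1-a2 properly crosses segment b1-b2."""
    d1, d2 = _cross(b1, b2, a1), _cross(b1, b2, a2)
    d3, d4 = _cross(a1, a2, b1), _cross(a1, a2, b2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def frames_intersect(rect_a, rect_b):
    """Each rect: four vertices in order; edges are the segments joining
    consecutive vertices, as described in the text above."""
    edges = lambda r: [(r[i], r[(i + 1) % 4]) for i in range(4)]
    return any(segments_intersect(p, q, u, w)
               for p, q in edges(rect_a) for u, w in edges(rect_b))
```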
In step S104, each target obstacle is subjected to labeling processing, and its collision area is obtained.
In some embodiments, the labeling of the collision region of each target obstacle to obtain a collision region includes:
performing convex-hull processing on the target obstacle to obtain a polygon that envelops the target obstacle, wherein the polygon is a convex polygon.
Referring to fig. 10, fig. 10 is a schematic diagram of collision-region labeling provided in an embodiment of the present application. As shown in fig. 10, the calibration frames of the tractor and the trailer are each labeled with four vertices: ABCD for the tractor and EFGH for the trailer. The outer contour used for the trailer's collision detection is obtained by convex-hull processing, yielding a polygon that envelops both the tractor and trailer parts, i.e., the convex polygon with consecutive vertices A, E, B, F, G, D in the figure. A rectangular calibration frame of the ego vehicle is then generated for each trajectory point according to the ego trajectory, and each frame is checked for intersection with the trailer's polygonal outer contour; if they intersect, a collision occurs at that trajectory point.
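The convex envelope of the two calibration frames can be computed with any standard convex-hull routine; the patent does not name an algorithm, so the sketch below uses Andrew's monotone chain as one possibility:

```python
def convex_hull(points):
    """Return the convex hull of 2D points in counter-clockwise order.
    Applied to the eight vertices ABCD (tractor) and EFGH (trailer), it
    yields the enveloping collision polygon, e.g. A, E, B, F, G, D."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            # pop while the last two hull points and p make a non-left turn
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return build(pts) + build(pts[::-1])  # lower hull + upper hull
```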
In some embodiments, the method further comprises:
and, based on the collision area, planning the driving route of the current vehicle; or displaying the collision area on a vehicle-mounted navigation interface and issuing an alarm when the distance between the current vehicle and the collision area is less than a preset distance threshold.
Here, for unmanned driving, the driving route of the current vehicle may be planned according to the obtained collision area to prevent collisions; alternatively, in a manual driving state, the collision area may be displayed on the vehicle-mounted navigation interface, and when the distance from the current vehicle to the collision area is less than a preset distance threshold, an alarm is issued to prompt the driver.
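The distance-threshold alarm can be sketched as follows. Measuring the distance from the ego position to the nearest edge of the collision polygon is an assumption; the patent does not define how the distance is taken:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    denom = abx * abx + aby * aby
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / denom))
    return math.hypot(p[0] - (a[0] + t * abx), p[1] - (a[1] + t * aby))

def should_alarm(ego_pos, region, threshold):
    """True when the ego vehicle is closer to the collision region's
    boundary than the preset distance threshold."""
    n = len(region)
    edges = [(region[i], region[(i + 1) % n]) for i in range(n)]
    return min(point_segment_dist(ego_pos, a, b) for a, b in edges) < threshold
```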
In summary, the embodiments of the present application have the following beneficial effects:
and extracting the motion trail, contour characteristics and pose state information of each obstacle through the obstacle point cloud information acquired by the radar. Through the kinematics model of trailer, draw the trailer in the barrier to the outer contour for the collision detection of trailer is obtained according to the trailer vehicle characteristic of drawing, and the rethread is from the car track and is generated the car collision profile of each track point from the car, will tow the profile of trailer and carry out crossing check-up from the profile of each track point from the car again, with judge whether from the car with the trailer bump and from the car position point when colliding with the collision. The towed vehicle in the obstacle detected by the automatically driven vehicle can be effectively identified; and handle in order to obtain the collision outline of trailer through the outline with trailer and trailer, promoted and collided the detection effect in trailer heavy vehicle blind area part.
Based on the same inventive concept, the embodiment of the present application further provides a collision detection apparatus corresponding to the collision detection method in the first embodiment, and since the principle of solving the problem of the apparatus in the embodiment of the present application is similar to that of the collision detection method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 5, fig. 5 is a schematic structural diagram of a collision detection apparatus 500 according to an embodiment of the present application. The collision detecting apparatus 500 includes:
an obtaining module 501, configured to obtain a motion trajectory of at least one obstacle relative to a current vehicle;
an establishing module 502, configured to establish a trailer motion model, where the trailer motion model is used to simulate motion trajectories of a tractor and a trailer;
a matching module 503, configured to perform feature matching on the at least one obstacle and the trailer motion model, and acquire at least one matched target obstacle, where each target obstacle in the at least one target obstacle represents a trailer;
and a labeling module 504, configured to perform labeling processing on the collision region of each target obstacle to obtain a collision region.
It will be understood by those skilled in the art that the functions implemented by the units in the collision detection apparatus 500 shown in fig. 5 can be understood by referring to the related description of the collision detection method described above. The functions of the units in the collision detection apparatus 500 shown in fig. 5 may be implemented by a program running on a processor, or may be implemented by specific logic circuits.
In one possible embodiment, the acquisition module acquires the motion trajectory of at least one obstacle relative to the current vehicle by:
acquiring point cloud information of the at least one obstacle through a vehicle-mounted radar sensor, wherein the point cloud information takes a current vehicle as a reference object;
calibrating the outline of the at least one obstacle through a rectangular calibration frame based on the point cloud information to obtain at least one rectangular calibration frame;
and acquiring the motion track of each rectangular calibration frame in the at least one rectangular calibration frame relative to the current vehicle.
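The point-cloud-to-calibration-frame step above can be sketched as follows. This is a minimal illustration assuming 2-D points already expressed in the ego-vehicle frame and an axis-aligned rectangle; the function name `calibration_frame` is an assumption, and a production system would typically fit an oriented bounding box to the cluster instead.

```python
import numpy as np

def calibration_frame(points: np.ndarray) -> tuple:
    """Fit an axis-aligned rectangular calibration frame to 2-D obstacle points.

    `points` is an (N, 2) array of x/y coordinates in the ego-vehicle frame.
    Returns (x_min, y_min, x_max, y_max).
    """
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)
```

One such frame would be computed per clustered obstacle in the point cloud, and the frames then tracked over time as described next.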
In a possible implementation, the obtaining module 501 obtains a motion trajectory of each of the at least one rectangular calibration frame with respect to the current vehicle, including:
for each rectangular calibration frame, acquiring the rectangular calibration frame at two consecutive moments, and taking the intersection point of the frame's centerlines at the two moments as the centroid, where each centerline is the line through the center of the frame along the direction of motion;
and continuously obtaining the centroid positions of the rectangular calibration frame at n moments, and taking the centroid positions of the n moments as the motion trail, wherein n is a positive integer.
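As a sketch of the centroid construction just described, the snippet below intersects the centerlines of a calibration frame at two consecutive moments, and collects the result over n moments to form the motion trajectory. Representing a centerline as a (point, unit direction) pair, and the helper names, are illustrative assumptions.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect two 2-D lines, each given as a point plus a direction vector.

    Solves p1 + t*d1 = p2 + s*d2 for t; returns the intersection point,
    or None if the lines are (nearly) parallel.
    """
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, b)
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def trajectory(centerlines):
    """Centroid at step k = intersection of centerlines at moments k and k+1."""
    return [line_intersection(*centerlines[k], *centerlines[k + 1])
            for k in range(len(centerlines) - 1)]
```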
In one possible implementation, the building module 502 builds a trailer motion model, including:
establishing a planar rectangular coordinate system, where the speed of the tractor is v, the angle between the tractor's direction of motion and the horizontal axis is θ, the trailer is hinged to the tractor at a hinge point, the angle between the trailer's direction of motion and the horizontal axis is α, the length of the trailer is L, and the hinge angle between the extension line of the trailer's direction of motion through the hinge point and the tractor is β;
determining the motion step of the tractor as S0 = v·t0, where t0 is the sampling interval, t2 = t1 + t0, t1 is the start time of a sampling period, and t2 is its end time; then:
θ' = θ_t2 - θ_t1;
α' = α_t2 - α_t1;
and establishing the trailer motion model:
β' = S0·sin(θ_t1)/L + θ';
β' = θ' - α';
where θ' denotes the change in θ, α' the change in α, and β' the change in β over one sampling period.
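For illustration, one discrete update step of such a model can be sketched as below. This sketch assumes the standard single-trailer kinematic relation (the trailer heading changes at rate v·sin(β)/L, with hinge angle β = θ − α); the function name and sign conventions are illustrative assumptions rather than the patent's exact formulation.

```python
import math

def trailer_step(theta, alpha, v, L, t0, theta_next):
    """One discrete step of a tractor-trailer kinematic model (a sketch).

    theta      : tractor heading at t1, measured from the horizontal axis (rad)
    alpha      : trailer heading at t1 (rad)
    v, L, t0   : tractor speed, trailer length, sampling interval
    theta_next : tractor heading at t2 = t1 + t0

    Returns the trailer heading and hinge angle at t2.
    """
    S0 = v * t0                          # motion step of the tractor
    beta = theta - alpha                 # hinge (articulation) angle at t1
    d_alpha = S0 * math.sin(beta) / L    # trailer heading change over one step
    alpha_next = alpha + d_alpha
    beta_next = theta_next - alpha_next  # beta' = theta' - alpha'
    return alpha_next, beta_next
```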
In a possible implementation, the matching module 503 performs feature matching on the at least one obstacle and the trailer motion model, and obtains at least one matched target obstacle, including:
detecting whether two intersecting obstacles exist in the at least one obstacle at the same moment;
and when two intersecting obstacles exist, obtaining the centerline of each of the two intersecting obstacles along its trajectory direction, taking the intersection point of the two centerlines as the hinge point, and treating the two intersecting obstacles as one target obstacle.
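The matching step above, detecting two intersecting obstacles and deriving the hinge point, can be sketched as follows. Axis-aligned calibration frames and centerlines given as (point, direction) pairs are illustrative assumptions.

```python
def boxes_intersect(a, b):
    """Overlap test for two axis-aligned frames (x_min, y_min, x_max, y_max)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def hinge_point(p1, d1, p2, d2):
    """Hinge point = intersection of the two obstacles' centerlines.

    Each centerline is a point plus a direction vector; returns None when the
    centerlines are (nearly) parallel, i.e. no unique hinge exists.
    """
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None
    t = ((p2[0] - p1[0]) * (-d2[1]) - (-d2[0]) * (p2[1] - p1[1])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

When both checks succeed, the two frames would be grouped as a single target obstacle (tractor plus trailer) articulated at the returned hinge point.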
In a possible implementation manner, the labeling module 504 performs a labeling process on the collision area of each target obstacle to obtain a collision area, including:
and performing convex hull processing on the target obstacle to obtain a polygon that envelops the target obstacle, wherein the polygon is a convex polygon.
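The convex enveloping polygon described above can be computed with a convex hull algorithm; a self-contained sketch using Andrew's monotone chain (the choice of algorithm is an assumption, any convex hull routine would do):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points.

    Returns the enveloping convex polygon's vertices in counter-clockwise order.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                         # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):               # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # drop duplicated endpoints
```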
In one possible implementation, the labeling module 504 is further configured to:
plan the driving route of the current vehicle based on the collision area; or display the collision area on a vehicle-mounted navigation interface based on the collision area, and issue an alarm when the distance between the current vehicle and the collision area is less than a preset distance threshold.
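A minimal sketch of the distance-threshold alarm check: the ego position is compared against the boundary of the collision polygon. The helper names and the use of boundary distance (rather than a signed distance) are illustrative assumptions.

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamping to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def should_alarm(ego, polygon, threshold):
    """True when the ego position is within `threshold` of the polygon boundary."""
    n = len(polygon)
    d = min(point_segment_dist(ego, polygon[i], polygon[(i + 1) % n])
            for i in range(n))
    return d < threshold
```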
The collision detection apparatus extracts the motion trajectory, contour features, and pose information of each obstacle from the obstacle point cloud collected by the radar. Using the trailer kinematics model, the towed vehicle among the obstacles is identified, and the outer contour used for trailer collision detection is obtained from the extracted trailer features. The ego vehicle's collision contour at each trajectory point is then generated from the ego vehicle's trajectory, and the trailer contour is checked for intersection against the ego contour at each trajectory point, so as to determine whether the ego vehicle will collide with the trailer and, if so, the ego vehicle's position at the time of collision. In this way, towed vehicles among the obstacles detected by the autonomous vehicle can be effectively identified; and by processing the contours of the tractor and the trailer to obtain the trailer's collision contour, collision detection in the blind-spot regions of large trailer vehicles is improved.
As shown in fig. 6, fig. 6 is a schematic view of a composition structure of an electronic device 600 provided in an embodiment of the present application, where the electronic device 600 includes:
the collision detection device comprises a processor 601, a storage medium 602 and a bus 603, wherein the storage medium 602 stores machine-readable instructions executable by the processor 601, when the electronic device 600 runs, the processor 601 and the storage medium 602 communicate through the bus 603, and the processor 601 executes the machine-readable instructions to perform the steps of the collision detection method according to the embodiment of the present application.
In practice, the various components of the electronic device 600 are coupled together by the bus 603. It is understood that the bus 603 is used to enable communication among these components. In addition to a data bus, the bus 603 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled as the bus 603 in FIG. 6.
The electronic device extracts the motion trajectory, contour features, and pose information of each obstacle from the obstacle point cloud collected by the radar. Using the trailer kinematics model, the towed vehicle among the obstacles is identified, and the outer contour used for trailer collision detection is obtained from the extracted trailer features. The ego vehicle's collision contour at each trajectory point is then generated from the ego vehicle's trajectory, and the trailer contour is checked for intersection against the ego contour at each trajectory point, so as to determine whether the ego vehicle will collide with the trailer and, if so, the ego vehicle's position at the time of collision. In this way, towed vehicles among the obstacles detected by the autonomous vehicle can be effectively identified; and by processing the contours of the tractor and the trailer to obtain the trailer's collision contour, collision detection in the blind-spot regions of large trailer vehicles is improved.
The embodiment of the present application further provides a computer-readable storage medium, where the storage medium stores executable instructions, and when the executable instructions are executed by at least one processor 601, the collision detection method according to the embodiment of the present application is implemented.
In some embodiments, the storage medium may be a memory such as a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); or may be any device including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts stored in a hypertext markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The computer-readable storage medium extracts the motion trajectory, contour features, and pose information of each obstacle from the obstacle point cloud collected by the radar. Using the trailer kinematics model, the towed vehicle among the obstacles is identified, and the outer contour used for trailer collision detection is obtained from the extracted trailer features. The ego vehicle's collision contour at each trajectory point is then generated from the ego vehicle's trajectory, and the trailer contour is checked for intersection against the ego contour at each trajectory point, so as to determine whether the ego vehicle will collide with the trailer and, if so, the ego vehicle's position at the time of collision. In this way, towed vehicles among the obstacles detected by the autonomous vehicle can be effectively identified; and by processing the contours of the tractor and the trailer to obtain the trailer's collision contour, collision detection in the blind-spot regions of large trailer vehicles is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and electronic device may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a platform server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disk, and various media capable of storing program codes.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A collision detection method, characterized by comprising the steps of:
acquiring a motion track of at least one obstacle relative to a current vehicle;
establishing a trailer motion model, wherein the trailer motion model is used for simulating the motion tracks of a tractor and a trailer;
performing feature matching on the at least one obstacle and the trailer motion model, and acquiring at least one matched target obstacle, wherein each target obstacle in the at least one target obstacle represents a trailer;
and performing labeling processing on the collision area of each target obstacle to obtain the collision area.
2. The method of claim 1, wherein said obtaining a trajectory of motion of at least one obstacle relative to a current vehicle comprises:
acquiring point cloud information of the at least one obstacle through a vehicle-mounted radar sensor, wherein the point cloud information takes a current vehicle as a reference object;
calibrating the outline of the at least one obstacle through a rectangular calibration frame based on the point cloud information to obtain at least one rectangular calibration frame;
and acquiring the motion track of each rectangular calibration frame in the at least one rectangular calibration frame relative to the current vehicle.
3. The method according to claim 2, wherein the obtaining of the motion trajectory of each of the at least one rectangular calibration frame with respect to the current vehicle comprises:
for each rectangular calibration frame, acquiring the rectangular calibration frame at two consecutive moments, and taking the intersection point of the frame's centerlines at the two moments as the centroid, where each centerline is the line through the center of the frame along the direction of motion;
and continuously obtaining the centroid positions of the rectangular calibration frame at n moments, and taking the centroid positions of the n moments as the motion trail, wherein n is a positive integer.
4. The method of claim 1, wherein the establishing the trailer motion model comprises:
establishing a planar rectangular coordinate system, where the speed of the tractor is v, the angle between the tractor's direction of motion and the horizontal axis is θ, the trailer is hinged to the tractor at a hinge point, the angle between the trailer's direction of motion and the horizontal axis is α, the length of the trailer is L, and the hinge angle between the extension line of the trailer's direction of motion through the hinge point and the tractor is β;
determining the motion step of the tractor as S0 = v·t0, where t0 is the sampling interval, t2 = t1 + t0, t1 is the start time of a sampling period, and t2 is its end time; then:
θ' = θ_t2 - θ_t1;
α' = α_t2 - α_t1;
and establishing the trailer motion model:
β' = S0·sin(θ_t1)/L + θ';
β' = θ' - α';
where θ' denotes the change in θ, α' the change in α, and β' the change in β over one sampling period.
5. The method of claim 1, wherein the feature matching the at least one obstacle to the towed vehicle motion model and obtaining the matched at least one target obstacle comprises:
detecting whether two intersecting obstacles exist in the at least one obstacle at the same moment;
when two intersecting obstacles exist, obtaining the centerline of each of the two intersecting obstacles along its trajectory direction, taking the intersection point of the two centerlines as the hinge point, and treating the two intersecting obstacles as one target obstacle.
6. The method of claim 1, wherein the labeling the collision region of each target obstacle to obtain the collision region comprises:
and performing convex hull processing on the target obstacle to obtain a polygon that envelops the target obstacle, wherein the polygon is a convex polygon.
7. The method of claim 1, further comprising:
and based on the collision area, formulating a driving route of the current vehicle; or based on the collision area, displaying the collision area on a vehicle-mounted navigation interface, and issuing an alarm when the distance between the current vehicle and the collision area is less than a preset distance threshold.
8. A collision detecting apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring the motion trail of at least one obstacle relative to the current vehicle;
the establishing module, configured to establish a trailer motion model, wherein the trailer motion model is used to simulate the motion trajectories of a tractor and a trailer;
the matching module, configured to perform feature matching on the at least one obstacle and the trailer motion model and obtain at least one matched target obstacle, wherein each target obstacle in the at least one target obstacle represents a trailer;
and the labeling module, configured to perform labeling processing on the collision area of each target obstacle to obtain a collision area.
9. An electronic device, comprising: a processor, a storage medium, and a bus, the storage medium storing machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate via the bus, and the processor executes the machine-readable instructions to perform the collision detection method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, performs a collision detection method according to any one of claims 1 to 7.
CN202211667015.4A 2022-12-22 2022-12-22 Collision detection method and device, electronic equipment and storage medium Pending CN115861972A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211667015.4A CN115861972A (en) 2022-12-22 2022-12-22 Collision detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115861972A true CN115861972A (en) 2023-03-28

Family

ID=85654433


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116279457A (en) * 2023-05-15 2023-06-23 北京斯年智驾科技有限公司 Anti-collision method, device, equipment and storage medium based on Lei Dadian cloud

Citations (5)

Publication number Priority date Publication date Assignee Title
US20100171828A1 (en) * 2007-09-03 2010-07-08 Sanyo Electric Co., Ltd. Driving Assistance System And Connected Vehicles
US20180154888A1 (en) * 2015-06-02 2018-06-07 Knorr-Bremse Systeme Fuer Nutzfahrzeuge Gmbh Method for stabilizing a tractor vehicle-trailer combination during travel
CN111361557A (en) * 2020-02-13 2020-07-03 江苏大学 Early warning method for collision accident during turning of heavy truck
CN114643983A (en) * 2020-12-17 2022-06-21 华为技术有限公司 Control method and device
CN114937053A (en) * 2022-06-02 2022-08-23 福建中科云杉信息技术有限公司 Edge detection method for unmanned truck trailer




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination