CN118015825A - Collision identification method, device, terminal equipment and storage medium - Google Patents


Info

Publication number: CN118015825A
Authority: CN (China)
Prior art keywords: collision, data, similarity, category, recognition
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202410074506.0A
Other languages: Chinese (zh)
Inventor: 王浩乾
Current/Original Assignee: Sanchuan Online Hangzhou Information Technology Co ltd
Application filed by Sanchuan Online Hangzhou Information Technology Co ltd
Priority to CN202410074506.0A priority Critical patent/CN118015825A/en
Publication of CN118015825A publication Critical patent/CN118015825A/en

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present application is applicable to the technical field of computers, and provides a collision recognition method, an apparatus, a terminal device, and a storage medium, wherein the method includes: first, acquiring collision data to be identified of a target vehicle; then, identifying the collision data to be identified by using a preset collision recognition feature algorithm, and determining a first similarity between the collision data to be identified and a preset collision category; inputting the collision data to be identified into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be identified and the collision category; and finally, determining a collision recognition result of the target vehicle according to the first similarity and the second similarity. Because the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or facing different collision scenarios, combining the results of both to obtain the final collision recognition result improves the accuracy and robustness of collision recognition.

Description

Collision identification method, device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of computer technology, and in particular, relates to a collision recognition method, a device, a terminal device, and a storage medium.
Background
With the continuous development of vehicle technology, vehicles have become far more widespread, which puts more vehicles on the road and steadily increases the likelihood of traffic accidents. A large number of surveys show that after a serious collision accident, the casualties caused by delayed rescue are far greater than those caused directly by the accident itself.
In the prior art, collision recognition is performed on a vehicle: various data collected while the vehicle is driving are analyzed, and a distress signal is sent to a rescue center as soon as the data become abnormal, thereby mitigating the consequences of a collision accident and allowing emergency and medical treatment to reach the injured more quickly.
However, collision recognition results inevitably contain errors. If a vehicle is wrongly judged to have collided, a distress signal is sent to the rescue center by mistake, which wastes rescue resources. Therefore, how to improve the accuracy of collision recognition is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiments of the present application provide a collision recognition method, an apparatus, a terminal device, and a storage medium, which can improve the accuracy of collision recognition.
A first aspect of an embodiment of the present application provides a collision recognition method, including: acquiring collision data to be identified of a target vehicle; identifying collision data to be identified by utilizing a preset collision identification feature algorithm, and determining a first similarity between the collision data to be identified and a preset collision category; inputting the collision data to be identified into a preset collision identification model to obtain a second similarity between the collision data to be identified output by the collision identification model and the collision category; and determining a collision recognition result of the target vehicle according to the first similarity and the second similarity.
Optionally, in a possible implementation manner of the first aspect, there are a plurality of collision categories, the collision recognition result includes a target category and a target similarity corresponding to the target category, and determining the collision recognition result of the target vehicle according to the first similarity and the second similarity includes:
fusing the first similarity and the second similarity corresponding to each collision category to determine a reference similarity corresponding to each collision category;
determining the target category according to the reference similarity corresponding to each collision category;
and determining the reference similarity corresponding to the target category as the target similarity.
Optionally, in another possible implementation manner of the first aspect, the fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category includes:
determining the reference similarity corresponding to each collision category according to a preset algorithm weight corresponding to the collision recognition feature algorithm, a preset model weight corresponding to the collision recognition model, and the first similarity and the second similarity corresponding to each collision category.
Optionally, in still another possible implementation manner of the first aspect, the collision data to be identified is obtained by using a plurality of sensors, each sensor corresponds to a preset sensor weight, and the fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category includes:
determining the reference similarity corresponding to each collision category according to the sensor corresponding to each collision category, the preset sensor weight corresponding to each sensor, and the first similarity and the second similarity corresponding to each collision category.
Optionally, in still another possible implementation manner of the first aspect, determining the target class according to the reference similarity corresponding to each collision class includes:
when the reference similarities corresponding to a plurality of collision categories are the same, determining the target category according to a preset priority order corresponding to each collision category.
Optionally, in another possible implementation manner of the first aspect, the acquiring collision data to be identified of the target vehicle includes:
acquiring vehicle operation data of the target vehicle;
and performing abnormal-data slicing processing on the vehicle operation data to extract the collision data to be identified from the vehicle operation data.
Optionally, in a further possible implementation manner of the first aspect, the step of generating the collision recognition feature algorithm includes:
acquiring a plurality of groups of reference collision data and a collision category label corresponding to each group of reference collision data;
determining data features corresponding to each collision category according to the plurality of groups of reference collision data and the collision category label corresponding to each group of reference collision data;
and performing iterative parameter adjustment on an initial feature algorithm by using the data features corresponding to each collision category until the initial feature algorithm converges, so as to obtain the collision recognition feature algorithm.
Optionally, in a further possible implementation manner of the first aspect, the step of generating the collision recognition model includes:
acquiring a plurality of groups of reference collision data and a collision category label corresponding to each group of reference collision data;
and training an initial convolutional neural network model by using the plurality of groups of reference collision data and the collision category labels corresponding to each group of reference collision data, so as to generate the collision recognition model.
Optionally, in another possible implementation manner of the first aspect, the method further includes:
acquiring a manual verification result corresponding to the collision data to be identified;
and optimizing the collision recognition feature algorithm and the collision recognition model according to the collision recognition result of the target vehicle and the manual verification result.
Optionally, in still another possible implementation manner of the first aspect, the collision categories include real collision categories and suspected collision categories, wherein the real collision categories include any one or more of rear-end collision, frontal collision, side collision, and rollover, and the suspected collision categories include any one or more of sudden braking, passing over a deceleration strip, door slamming, car washing, rain, hail, abnormal tapping, and windshield-wiper operation.
A second aspect of an embodiment of the present application provides a collision recognition apparatus, including:
a data acquisition module, configured to acquire collision data to be identified of a target vehicle;
a first processing module, configured to identify the collision data to be identified by using a preset collision recognition feature algorithm, and determine a first similarity between the collision data to be identified and a preset collision category;
a second processing module, configured to input the collision data to be identified into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be identified and the collision category;
and a collision recognition module, configured to determine a collision recognition result of the target vehicle according to the first similarity and the second similarity.
A third aspect of an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the collision recognition method of the first aspect described above when executing the computer program.
A fourth aspect of an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the collision recognition method of the first aspect described above.
A fifth aspect of an embodiment of the present application provides a computer program product for causing a terminal device to execute the collision recognition method of the first aspect described above when the computer program product is run on the terminal device.
Compared with the prior art, the embodiments of the present application have the following beneficial effects. The embodiments of the present application provide a collision recognition method, an apparatus, a terminal device, and a storage medium, wherein collision data to be identified of a target vehicle are first acquired; then, the collision data to be identified are identified by using a preset collision recognition feature algorithm, and a first similarity between the collision data to be identified and a preset collision category is determined; the collision data to be identified are input into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be identified and the collision category; and finally, a collision recognition result of the target vehicle is determined according to the first similarity and the second similarity. Because the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or facing different collision scenarios, combining the results of both to obtain the final collision recognition result improves the accuracy and robustness of collision recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a collision recognition method according to a first embodiment of the present application;
Fig. 2 is a schematic view of a scenario of a collision recognition method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of a collision recognition device according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
It should be understood that, the sequence number of each step in this embodiment does not mean the execution sequence, and the execution sequence of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiment of the present application in any way.
In the related art, collision recognition is performed on a vehicle: various data collected while the vehicle is running are analyzed, and a distress signal is sent to a rescue center as soon as the data become abnormal, thereby mitigating the consequences of a collision accident and allowing emergency and medical treatment to reach the injured more quickly. However, collision recognition results inevitably contain errors; if a vehicle is wrongly judged to have collided, a distress signal is sent to the rescue center by mistake, which wastes rescue resources. Therefore, how to improve the accuracy of collision recognition is a problem that urgently needs to be solved.
In view of this, the embodiments of the present application provide a collision recognition method, an apparatus, a terminal device, and a storage medium. Because the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or facing different collision scenarios, the final collision recognition result is obtained by combining the results of both, thereby improving the accuracy and robustness of collision recognition.
In order to illustrate the technical scheme of the application, the following description is given by specific examples.
Referring to fig. 1, a schematic flow chart of a collision recognition method according to an embodiment of the present application is shown. As shown in fig. 1, the collision recognition method can be applied to a data platform, and specifically includes the following steps:
step 101, acquiring collision data to be identified of a target vehicle.
To obtain the collision data to be identified of the target vehicle, vehicle operation data corresponding to the target vehicle must be acquired in real time; the vehicle operation data are then analyzed, abnormal data therein are extracted, and the abnormal data are determined to be the collision data to be identified.
Various sensors may be mounted on the target vehicle, such as an acceleration sensor (G-sensor), a positioning system (e.g., the BeiDou Navigation Satellite System, the Global Positioning System, etc.), a gyroscope, a vibration sensor, a camera, and an audio sensor. In addition, the target vehicle may be provided with a vehicle-mounted terminal, which acquires in real time the vehicle operation data collected by the sensors, such as speed, position, acceleration, multi-axis angular velocity, vibration data, video, and audio, and finally uploads the acquired vehicle operation data to the data platform.
It should be noted that the vehicle-mounted terminal may integrate the various vehicle operation data and generate continuous real-time data stream packets according to a preset duration and a preset data acquisition frequency. It should be appreciated that the preset duration should not be too long (e.g., less than or equal to 15 s) and the data acquisition frequency should not be too low (e.g., greater than or equal to 100 Hz), so as to ensure the timeliness and accuracy of collision recognition. Each time the vehicle-mounted terminal generates a data stream packet, it may upload the packet to the data platform through the mobile network.
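As an illustrative sketch only (not part of the patent; the function name `make_stream_packets` and the flat-list representation of samples are assumptions), the packetization described above — grouping samples collected at a fixed frequency into packets that each cover the preset duration — might look like this:

```python
# Hypothetical sketch of the on-board terminal's packetization step:
# samples collected at a fixed frequency are grouped into stream
# packets that each cover at most `packet_seconds` of driving data.

def make_stream_packets(samples, sample_rate_hz=100, packet_seconds=15):
    """Group a flat list of sensor samples into real-time stream packets."""
    per_packet = sample_rate_hz * packet_seconds  # samples per packet
    return [samples[i:i + per_packet] for i in range(0, len(samples), per_packet)]

# One hour of 100 Hz samples -> 240 packets of 15 s each.
samples = list(range(100 * 60 * 60))
packets = make_stream_packets(samples)
print(len(packets), len(packets[0]))  # prints: 240 1500
```

Each packet would then be uploaded (or buffered locally when the network is unavailable, as described below) as one unit.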
In one possible implementation manner, if the network signal is poor, or the target vehicle is in an environment without network signal, the vehicle-mounted terminal may temporarily store the data stream packets in a local database and upload them to the data platform after the network signal recovers, thereby ensuring data integrity and avoiding data loss caused by network problems. In addition, once the network recovers, the vehicle-mounted terminal can upload the data stream packets immediately, and the data platform analyzes them as soon as they are received, ensuring the timeliness of collision recognition.
In the embodiment of the present application, the continuous data stream packets uploaded by the vehicle-mounted terminal are integrated and spliced to obtain the vehicle operation data of the target vehicle, and the vehicle operation data can then be subjected to abnormal-data slicing processing to extract the collision data to be identified from the vehicle operation data.
The collision data to be identified may be a range of data extracted around a certain singular point (i.e., abnormal data) in the vehicle operation data.
As an example, assume that the acceleration sensor collects multi-axis acceleration data. During normal driving of the target vehicle, whether at high speed or low speed, the acceleration on each axis should approach 0; if, however, the acceleration data of a certain axis exceeds a set acceleration threshold at some moment, for example exceeds 20 m/s², the acceleration data at that moment are regarded as a singular point, and the vehicle operation data before and after that moment must be sliced to obtain the collision data to be identified.
As another example, assume that the target vehicle has been traveling at a speed of around 60 km/h, its speed then changes sharply within 1 s, and it finally drops to 0. The vehicle operation data covering the process of the speed going from 60 km/h to 0 must then be sliced to obtain the collision data to be identified.
For another example, if an abnormal sound (a sudden increase in pitch) is detected in the audio data, or a rapid change in angular velocity is detected in the angular velocity data over a certain period, abnormal data are considered to have occurred, and these abnormal data must be sliced to obtain the collision data to be identified.
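The slicing in the acceleration example above can be sketched as follows (an illustrative sketch only; the window size and the function name `slice_around_singular_points` are assumptions, and only a single acceleration axis is shown):

```python
# Hypothetical sketch of abnormal-data slicing: find samples where an
# axis acceleration exceeds the threshold (a "singular point") and keep
# a window of surrounding data as collision data to be identified.

ACC_THRESHOLD = 20.0  # m/s^2, the threshold used in the example above

def slice_around_singular_points(acc, window=50):
    """Return (start, end) index ranges around threshold-exceeding samples."""
    slices = []
    for i, a in enumerate(acc):
        if abs(a) > ACC_THRESHOLD:
            start, end = max(0, i - window), min(len(acc), i + window)
            if slices and start <= slices[-1][1]:  # merge overlapping windows
                slices[-1] = (slices[-1][0], end)
            else:
                slices.append((start, end))
    return slices

acc = [0.1] * 200
acc[100] = 25.0  # simulated collision spike
print(slice_around_singular_points(acc))  # prints: [(50, 150)]
```

The same windowing idea would apply to the speed, audio, and angular-velocity examples, with per-signal abnormality conditions.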
As a possible embodiment, after the collision data to be identified are acquired, they may be subjected to data classification and data cleaning. Specifically, the collision data to be identified are first split by data type into speed, acceleration, angular velocity, displacement, vibration, video, audio, and other data. Each type of data is then preprocessed, including normalization, smoothing, and feature extraction. Finally, meaningless data are cleaned out, yielding structured and semi-structured data with consistent, well-defined dimensions and standards.
Normalization converts the data values to the range of 0 to 1, or to another small range, eliminating the influence of differing dimensions and magnitudes between data and making the data more concentrated and comparable. Smoothing is a method of eliminating data noise and reducing data fluctuation; for example, various filters or algorithms may be used to reduce noise and interference in the data. Feature extraction extracts useful features from the data to better describe the data and capture the patterns within it.
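A minimal sketch of the normalization and smoothing steps described above (illustrative only; min-max scaling and a moving average are common choices, but the patent does not specify which particular methods are used):

```python
# Hypothetical sketch of preprocessing: min-max normalization to [0, 1]
# and a simple moving-average smoother to reduce noise.

def normalize(values):
    """Min-max scale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def smooth(values, k=3):
    """Moving average over a window of k samples (k assumed odd)."""
    half = k // 2
    out = []
    for i in range(len(values)):
        window = values[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

print(normalize([10, 20, 30]))  # prints: [0.0, 0.5, 1.0]
print(smooth([0, 0, 9, 0, 0]))  # prints: [0.0, 3.0, 3.0, 3.0, 0.0]
```

Feature extraction would follow on the cleaned, normalized signals and is specific to each data type.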
Note that the meaningless data removed in the data cleaning step include isolated instantaneous abnormal jumps, data with no fluctuation, discontinuous data, incomplete data, and the like. Isolated instantaneous abnormal jumps occur because a sensor occasionally produces a false reading. For example, if the acceleration at a certain moment in some collision data to be identified exceeds the threshold, but no abnormality appears in other acceleration-related data such as the speed and the picture in the video, an isolated instantaneous abnormal jump is determined to have occurred. Likewise, if the speed of the target vehicle suddenly becomes 0 at some moment and recovers the next moment while other speed-related data remain normal, an isolated instantaneous abnormal jump is determined to have occurred, and that collision data to be identified is not subjected to subsequent analysis.
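The cross-check that identifies an isolated instantaneous abnormal jump — an over-threshold reading in one signal with no corroborating change in a related signal — can be sketched as follows (illustrative only; the window size, the 5 km/h change threshold, and the function name `is_isolated_jump` are assumptions):

```python
# Hypothetical sketch of filtering isolated instantaneous abnormal jumps:
# an over-threshold acceleration sample is kept only if a related signal
# (here, a speed change) is also abnormal around the same moment.

def is_isolated_jump(acc_abnormal_at, speed, window=5, speed_change=5.0):
    """True if no meaningful speed change accompanies the acceleration spike."""
    i = acc_abnormal_at
    nearby = speed[max(0, i - window): i + window + 1]
    return (max(nearby) - min(nearby)) < speed_change

speed = [60.0] * 100  # speed stays flat: the spike is a false alarm
print(is_isolated_jump(50, speed))  # prints: True -> discard, no further analysis
```

In practice, each data type would be checked against several related signals (speed against acceleration, video, and so on) before discarding the candidate collision data.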
Step 102, recognizing the collision data to be recognized by using a preset collision recognition feature algorithm, and determining a first similarity between the collision data to be recognized and a preset collision category.
The collision type can be obtained by analyzing a large amount of reference collision data, wherein the reference collision data can be data similar to the collision data to be identified and obtained from a historical database, and can also be simulated suspected collision data. The number of collision categories may be one or a plurality.
As a possible implementation, because the data change patterns corresponding to some suspected collision categories are similar to, or only slightly different from, those corresponding to real collision categories, the algorithm has a high probability of misjudgment if only real collision categories are identified.
For example, at the moment a vehicle collides, its acceleration changes, and the vehicle then stops moving, with its speed dropping to 0. Now suppose the vehicle passes over a deceleration strip with an intersection ahead of it: the data change pattern corresponding to the action "passing over a deceleration strip" is likewise an acceleration change (driving over the strip) followed by the speed dropping to 0 (stopping at the intersection to wait). The data change patterns of passing over a deceleration strip and of an actual collision are therefore similar. Consequently, in practical application, both real collision categories and suspected collision categories must be identified, which avoids, to the greatest extent, vehicle-safety problems caused by collision recognition errors.
Accordingly, the collision categories in step 102 may include real collision categories and suspected collision categories. The real collision categories may include any one or more of rear-end collision, frontal collision, side collision, rollover, and driving onto a road shoulder, and the suspected collision categories may include any one or more of sudden braking, passing over a deceleration strip, door slamming, car washing, rain, hail, abnormal tapping, and windshield-wiper operation.
The features corresponding to different collision categories may differ. Taking acceleration as an example, for the collision category of passing over a deceleration strip, the vehicle simultaneously exhibits a front-rear acceleration change and a vertical acceleration change; for a frontal collision or a side collision, the vehicle may exhibit a left-right acceleration change in addition to a front-rear acceleration change.
It should be noted that the collision recognition feature algorithm is a machine learning method based primarily on manually designed and selected features. It is implemented mainly by manually analyzing a large amount of reference collision data to extract the data features specific to each collision category, where each group of reference collision data corresponds to a collision category label. By analyzing the large amount of reference collision data and the data features of each collision category, general rules are defined, and the feature algorithm undergoes cyclic, iterative parameter adjustment to finally generate the collision recognition feature algorithm. That is, as a possible implementation manner of the embodiment of the present application, the step of generating the collision recognition feature algorithm may include: acquiring a plurality of groups of reference collision data and a collision category label corresponding to each group of reference collision data; determining the data features corresponding to each collision category according to the plurality of groups of reference collision data and the collision category labels; and performing iterative parameter adjustment on an initial feature algorithm by using the data features corresponding to each collision category until the initial feature algorithm converges, so as to obtain the collision recognition feature algorithm.
Further, after the collision recognition feature algorithm is obtained, the data features can be captured in MATLAB and engineered into a toolkit that can be integrated from Python or Java. Once the collision data to be identified are obtained, the toolkit is called directly to identify them, yielding the similarity between the collision data to be identified and each collision category, i.e., the first similarity.
As an example, the collision recognition feature algorithm may output the collision categories whose similarity reaches a threshold, such as rear-end collision 70%, sudden braking 45%, and so on.
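That thresholded output can be sketched as follows (illustrative only; the threshold value, the dictionary representation, and the function name `categories_over_threshold` are assumptions):

```python
# Hypothetical sketch: keep only collision categories whose similarity
# reaches the reporting threshold, as in the example output above.

def categories_over_threshold(similarities, threshold=0.4):
    """similarities: {category: value in [0, 1]} -> sorted filtered list."""
    kept = [(c, s) for c, s in similarities.items() if s >= threshold]
    return sorted(kept, key=lambda cs: cs[1], reverse=True)

first_similarity = {"rear-end collision": 0.70, "sudden braking": 0.45,
                    "car washing": 0.05}
print(categories_over_threshold(first_similarity))
# prints: [('rear-end collision', 0.7), ('sudden braking', 0.45)]
```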
Step 103, inputting the collision data to be identified into a preset collision identification model to obtain a second similarity between the collision data to be identified and the collision category output by the collision identification model.
It should be noted that the collision recognition model is a deep learning model based on the neuron model. Without manual intervention, it can, as far as possible, automatically simulate collision behavior under various conditions according to actual scenarios, the reference collision data, and the collision category labels, and the collision recognition model is then generated automatically. That is, as a possible implementation manner of the embodiment of the present application, the step of generating the collision recognition model may include: acquiring a plurality of groups of reference collision data and a collision category label corresponding to each group of reference collision data; and training an initial convolutional neural network model by using the plurality of groups of reference collision data and the collision category labels, so as to generate the collision recognition model.
In one possible implementation manner, the multiple groups of reference collision data and their collision category labels may be split into two parts: a training data set and a test data set. The initial convolutional neural network model is then trained on the training data set, and the accuracy of the model is evaluated on the test data set. During model training, the weights and biases can be iteratively optimized through a back-propagation algorithm and an optimizer, so that collision behavior can be predicted accurately; during model evaluation, metrics such as the accuracy, precision, and recall of the model can be assessed, and the model can finally be optimized according to the evaluation results. For example, one may try adjusting the model's hyperparameters, changing the network structure, or adding regularization to improve the model's performance.
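The split-and-evaluate procedure described above can be sketched as follows (illustrative only; the convolutional network itself is elided, `model` stands in for any trained classifier, and the 80/20 split ratio and function names are assumptions):

```python
# Hypothetical sketch of the train/test split and the evaluation metrics
# (accuracy, precision, recall) described above. The network itself is
# elided: `model` is any callable that maps a sample to a predicted label.

def train_test_split(data, labels, test_ratio=0.2):
    """Split data/labels into (train, test) pairs; assumes pre-shuffled data."""
    n_test = int(len(data) * test_ratio)
    train = (data[n_test:], labels[n_test:])
    test = (data[:n_test], labels[:n_test])
    return train, test

def evaluate(model, data, labels, positive):
    """Accuracy over all classes; precision/recall for the `positive` class."""
    tp = fp = fn = correct = 0
    for x, y in zip(data, labels):
        pred = model(x)
        correct += pred == y
        tp += pred == positive and y == positive
        fp += pred == positive and y != positive
        fn += pred != positive and y == positive
    accuracy = correct / len(data)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy usage: a threshold "classifier" standing in for the trained CNN.
data = list(range(10))
labels = ["collision" if x >= 5 else "normal" for x in data]
model = lambda x: "collision" if x >= 4 else "normal"
acc, prec, rec = evaluate(model, data, labels, positive="collision")
print(acc, round(prec, 2), rec)  # prints: 0.9 0.83 1.0
```

In a real pipeline the same `evaluate` step would run on the held-out test set after each training round to guide hyperparameter adjustment.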
In the embodiment of the application, after the collision data to be identified is obtained, the collision data to be identified can be identified by calling the collision identification model, so that the similarity between the collision data to be identified and each collision category, namely the second similarity, is obtained. Similar to the collision recognition feature algorithm, the collision recognition model may output a collision class with a similarity reaching a threshold.
Step 104, determining a collision recognition result of the target vehicle according to the first similarity and the second similarity.
It should be noted that the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or facing different collision scenes. Therefore, by combining the results of the collision recognition feature algorithm and the collision recognition model, the final collision recognition result is obtained comprehensively, which improves the accuracy and robustness of collision recognition.
In a possible implementation manner, the collision recognition result includes a target category and a target similarity corresponding to the target category, and the step 104 may include: fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category; determining a target class according to the reference similarity corresponding to each collision class; and determining the reference similarity corresponding to the target category as the target similarity.
Because the results of the collision recognition feature algorithm and the collision recognition model may differ in form, the two sets of results may be normalized before the first similarity and the second similarity corresponding to each collision category are fused. For example, the results output by the collision recognition feature algorithm are sudden braking 050, rear-end collision 001, etc., while the results of the collision recognition model are rear-end collision 0.5 and sudden braking 0.1; both can then be converted into a percentage form, so that the results of the collision recognition feature algorithm become sudden braking 50% and rear-end collision 1%, and the results of the collision recognition model become rear-end collision 50% and sudden braking 10%.
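A minimal sketch of such normalization, assuming the feature algorithm emits three-digit codes on a 0–100 scale and the model emits probabilities on a 0–1 scale (both scales are assumptions for illustration):

```python
def normalize_scores(raw, full_scale):
    """Convert raw scores to percentages; full_scale is the value that
    corresponds to 100% in the raw output format."""
    return {category: 100.0 * value / full_scale for category, value in raw.items()}

feature_out = {"sudden braking": 50, "rear-end collision": 1}    # codes 050, 001
model_out = {"rear-end collision": 0.5, "sudden braking": 0.1}   # probabilities

print(normalize_scores(feature_out, 100))   # {'sudden braking': 50.0, 'rear-end collision': 1.0}
print(normalize_scores(model_out, 1.0))     # {'rear-end collision': 50.0, 'sudden braking': 10.0}
```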
In the embodiment of the application, the first similarity and the second similarity corresponding to each collision category may be fused by averaging. For example, if the results of the collision recognition feature algorithm are sudden braking 50% and rear-end collision 20%, and the results of the collision recognition model are rear-end collision 50% and sudden braking 10%, the fused results are rear-end collision 35% and sudden braking 30%, so that rear-end collision can be determined as the target category and 35% as the target similarity.
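The averaging fusion in this example can be sketched as follows; the category names and scores are taken from the example above, and treating a category missing from one source as contributing 0 is an assumed convention:

```python
def fuse_average(first, second):
    """Average the two similarity sets; a category absent from one
    source contributes 0 from that source (an assumed convention)."""
    categories = set(first) | set(second)
    return {c: (first.get(c, 0.0) + second.get(c, 0.0)) / 2.0 for c in categories}

first = {"sudden braking": 50.0, "rear-end collision": 20.0}   # feature algorithm
second = {"rear-end collision": 50.0, "sudden braking": 10.0}  # recognition model

fused = fuse_average(first, second)
target = max(fused, key=fused.get)
print(target, fused[target])   # rear-end collision 35.0
```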
The results may alternatively be fused in a weighted-average manner. For example, different weights may be set in advance for the collision recognition feature algorithm and the collision recognition model according to their respective historical recognition accuracy. That is, fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category may include: determining the reference similarity corresponding to each collision category according to the preset algorithm weight corresponding to the collision recognition feature algorithm, the preset model weight corresponding to the collision recognition model, and the first similarity and the second similarity corresponding to each collision category.
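A sketch of the weighted-average variant, with assumed weights of 0.6 for the feature algorithm and 0.4 for the model standing in for weights derived from historical accuracy:

```python
def fuse_weighted(first, second, w_algo=0.6, w_model=0.4):
    """Weighted fusion; w_algo and w_model are assumed weights standing in
    for weights set from each recognizer's historical accuracy."""
    categories = set(first) | set(second)
    return {c: w_algo * first.get(c, 0.0) + w_model * second.get(c, 0.0)
            for c in categories}

first = {"sudden braking": 50.0, "rear-end collision": 20.0}
second = {"rear-end collision": 50.0, "sudden braking": 10.0}

fused = fuse_weighted(first, second)
print(round(fused["sudden braking"], 2), round(fused["rear-end collision"], 2))  # 34.0 32.0
```

Note that with these assumed weights the winner flips from rear-end collision (plain averaging) to sudden braking, which is exactly why weighting by historical accuracy matters.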
It should be noted that different collision categories are obtained by analyzing and identifying data collected by at least one sensor. For example, a rear-end collision can be identified by analyzing acceleration changes and speed changes; that is, the rear-end collision category is derived from an acceleration sensor and a speed sensor. As another example, door closing is a suspected collision category that is mainly identified from vibration data acquired by a vibration sensor; that is, the door-closing collision category is derived from the vibration sensor.
However, the degree to which the data acquired by different sensors influences the collision recognition result varies, so each sensor may be assigned a different weight according to historical data analysis. Assuming that target categories derived from acceleration analysis end up with a high accuracy, a high weight may be set for the acceleration sensor. Therefore, as a possible implementation manner of the embodiment of the present application, fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category may include: determining the reference similarity corresponding to each collision category according to the sensor corresponding to each collision category, the preset sensor weight corresponding to each sensor, and the first similarity and the second similarity corresponding to each collision category.
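One plausible reading of sensor-weighted fusion in code; the category-to-sensor mapping and the per-sensor weights below are illustrative assumptions, not values from the application:

```python
# Illustrative assumptions: which sensor each category is mainly derived
# from, and a per-sensor weight obtained from historical analysis.
CATEGORY_SENSOR = {"rear-end collision": "acceleration", "door closing": "vibration"}
SENSOR_WEIGHT = {"acceleration": 1.2, "vibration": 0.8, "speed": 1.0}

def fuse_with_sensor_weight(first, second):
    """Average the two similarities per category, then scale by the weight
    of the sensor that category is derived from."""
    categories = set(first) | set(second)
    return {c: SENSOR_WEIGHT[CATEGORY_SENSOR.get(c, "speed")]
               * (first.get(c, 0.0) + second.get(c, 0.0)) / 2.0
            for c in categories}

fused = fuse_with_sensor_weight({"rear-end collision": 20.0},
                                {"rear-end collision": 50.0})
print(round(fused["rear-end collision"], 2))  # 42.0
```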
In the embodiment of the application, the collision category with the highest reference similarity can be directly determined as the target category. However, two or more collision categories may have identical reference similarities, in which case a plurality of collision categories would be determined as target categories simultaneously. To avoid this, historical data can be analyzed to find which collision categories occur more often, a priority order can be preset for all the collision categories accordingly, and when the reference similarities corresponding to a plurality of collision categories are the same, the target category is determined according to the preset priority order corresponding to each collision category.
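A sketch of tie-breaking by preset priority; the priority list is an assumed ordering derived from hypothetical historical frequency, and every scored category is assumed to appear in it:

```python
# Assumed priority order: categories that occur more often in historical
# data come first and win ties.
PRIORITY = ["rear-end collision", "sudden braking", "door closing"]

def pick_target(reference):
    """Highest reference similarity wins; ties are resolved by the preset
    priority order (every scored category is assumed to be listed)."""
    best = max(reference.values())
    tied = [c for c, v in reference.items() if v == best]
    return min(tied, key=PRIORITY.index)

print(pick_target({"rear-end collision": 35.0, "sudden braking": 35.0}))  # rear-end collision
```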
As a possible implementation manner, after the collision recognition result of the target vehicle is determined, a manual verification result corresponding to the collision data to be recognized may be obtained, and the collision recognition feature algorithm and the collision recognition model may then be optimized according to the collision recognition result of the target vehicle and the manual verification result, for example, by parameter tuning, feature selection, or model improvement.
For example, assume that the final collision recognition result is passing over a deceleration strip, but manual verification (e.g., manually reviewing the video) finds that the target vehicle was actually driving onto a road shoulder. For the collision recognition feature algorithm, the following adjustment can be adopted: when a vehicle passes over a deceleration strip, the acceleration swings upwards and then downwards, whereas when the vehicle drives onto a road shoulder, the acceleration rises and then stabilizes. In addition, since a vehicle usually drives onto a shoulder at an angle, the target vehicle often shows a yaw in its angular velocity, a variation pattern that does not occur when passing over a deceleration strip. Through such analysis, the data features of the collision recognition feature algorithm corresponding to the two collision categories of passing over a deceleration strip and driving onto a road shoulder can be corrected.
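The corrected features could translate into a toy rule like the following; the thresholds and the tail-window length are illustrative assumptions, not values from the application:

```python
def classify_vertical_event(accel_z):
    """Toy rule from the corrected features: a deceleration strip produces
    an up-then-down swing that dies out, while driving onto a shoulder
    produces a rise that then stays level. Thresholds are assumptions."""
    peak = max(accel_z)
    tail = sum(accel_z[-3:]) / 3.0          # mean of the last samples
    if peak > 1.0 and tail > 0.5 * peak:
        return "road shoulder"
    if peak > 1.0 and tail < 0.2 * peak:
        return "deceleration strip"
    return "unknown"

print(classify_vertical_event([0.0, 1.5, -1.2, 0.1, 0.0, 0.0]))   # deceleration strip
print(classify_vertical_event([0.0, 1.5, 1.4, 1.3, 1.3, 1.3]))    # road shoulder
```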
In addition, for the collision recognition model, based on the data features corresponding to driving onto a road shoulder, the neural network can search during training for the parameters that may influence the result, and those parameters are then adjusted to reduce the error.
According to the collision recognition method disclosed in the embodiment of the application, collision data to be recognized of a target vehicle is first obtained; then, the collision data to be recognized is recognized by using a preset collision recognition feature algorithm, and a first similarity between the collision data to be recognized and a preset collision category is determined; the collision data to be recognized is input into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be recognized and the collision category; and finally, a collision recognition result of the target vehicle is determined according to the first similarity and the second similarity. Because the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or in different collision scenes, the final collision recognition result is obtained comprehensively by combining the results of the two, which improves the accuracy and robustness of collision recognition.
Referring to fig. 2, a schematic view of a scenario of a collision recognition method according to an embodiment of the present application is shown. As shown in fig. 2, the vehicle-mounted terminal collects vehicle operation data and uploads it to the data platform through the mobile network; the data platform then performs data slicing and data cleaning on the vehicle operation data to obtain the collision data to be recognized, inputs the collision data to be recognized into the collision recognition feature algorithm and the collision recognition model respectively to obtain two recognition results, and fuses the two recognition results to obtain the final collision recognition result. In addition, after the collision recognition result is obtained, the collision recognition feature algorithm and the collision recognition model can be optimized and tuned in combination with manually classified verification data.
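The end-to-end flow of fig. 2 can be sketched as follows; the slicing threshold and the two recognizer stubs returning fixed scores are placeholders for the real components:

```python
def slice_and_clean(run_data):
    """Keep samples whose magnitude exceeds a threshold, standing in for
    the abnormal-data slicing and cleaning on the data platform."""
    return [x for x in run_data if abs(x) > 0.5]

def feature_algorithm(data):        # placeholder first-similarity scores
    return {"sudden braking": 50.0, "rear-end collision": 20.0}

def recognition_model(data):        # placeholder second-similarity scores
    return {"rear-end collision": 50.0, "sudden braking": 10.0}

def recognize(run_data):
    """Slice/clean, run both recognizers, fuse by averaging, pick the max."""
    data = slice_and_clean(run_data)
    s1, s2 = feature_algorithm(data), recognition_model(data)
    fused = {c: (s1.get(c, 0.0) + s2.get(c, 0.0)) / 2.0
             for c in set(s1) | set(s2)}
    return max(fused, key=fused.get)

print(recognize([0.1, 2.4, -1.7, 0.2]))   # rear-end collision
```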
Referring to fig. 3, a schematic structural diagram of a collision recognition device according to a second embodiment of the present application is shown, and for convenience of explanation, only a portion related to the embodiment of the present application is shown.
The collision recognition device may specifically include the following modules:
A data acquisition module 301, configured to acquire collision data to be identified of a target vehicle;
The first processing module 302 is configured to identify collision data to be identified using a preset collision identification feature algorithm, and determine a first similarity between the collision data to be identified and a preset collision category;
The second processing module 303 is configured to input the collision data to be identified into a preset collision identification model, so as to obtain a second similarity between the collision data to be identified and the collision category output by the collision identification model;
the collision recognition module 304 is configured to determine a collision recognition result of the target vehicle according to the first similarity and the second similarity.
The collision recognition device disclosed in the above embodiment of the present application first obtains collision data to be recognized of a target vehicle; then recognizes the collision data to be recognized by using a preset collision recognition feature algorithm, and determines a first similarity between the collision data to be recognized and a preset collision category; inputs the collision data to be recognized into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be recognized and the collision category; and finally determines a collision recognition result of the target vehicle according to the first similarity and the second similarity. Because the collision recognition feature algorithm and the collision recognition model may perform differently when processing different data or in different collision scenes, the final collision recognition result is obtained comprehensively by combining the results of the two, which improves the accuracy and robustness of collision recognition.
In a possible implementation manner of the second embodiment of the present application, the collision recognition module 304 may specifically include the following sub-modules:
and the first processing sub-module is used for fusing the first similarity and the second similarity corresponding to each collision category so as to determine the reference similarity corresponding to each collision category.
And the first determining submodule is used for determining the target category according to the reference similarity corresponding to each collision category.
And the second determining submodule is used for determining the reference similarity corresponding to the target category as the target similarity.
In another possible implementation manner of the second embodiment of the present application, the first processing submodule may specifically include the following units:
The first determining unit is used for determining the reference similarity corresponding to each collision category according to the preset algorithm weight corresponding to the collision recognition feature algorithm, the preset model weight corresponding to the collision recognition model, the first similarity and the second similarity corresponding to each collision category.
In still another possible implementation manner of the second embodiment of the present application, the first processing submodule may specifically further include the following units:
And the second determining unit is used for determining the reference similarity corresponding to each collision category according to the sensor corresponding to each collision category, the preset sensor weight corresponding to each sensor, the first similarity corresponding to each collision category and the second similarity.
In still another possible implementation manner of the second embodiment of the present application, the first determining submodule may specifically further include the following units:
and the third determining unit is used for determining the target category according to the preset priority sequence corresponding to each collision category when the reference similarity corresponding to the plurality of collision categories is the same.
In another possible implementation manner of the second embodiment of the present application, the data obtaining module 301 may specifically include the following sub-modules:
and the first acquisition sub-module is used for acquiring vehicle operation data of the target vehicle.
And the second processing sub-module is used for carrying out abnormal data slicing processing on the vehicle operation data so as to extract collision data to be identified in the vehicle operation data.
In still another possible implementation manner of the second embodiment of the present application, the collision identifying device may further include the following modules:
the first acquisition module is used for acquiring a plurality of groups of reference collision data and collision type labels corresponding to each group of reference collision data.
The first determining module is used for determining the data characteristics corresponding to each collision type according to the plurality of groups of reference collision data and the collision type labels corresponding to each group of reference collision data.
And the third processing module is used for carrying out iterative parameter adjustment on the initial characteristic algorithm by utilizing the data characteristics corresponding to each collision category until the initial characteristic algorithm converges to obtain a collision recognition characteristic algorithm.
In still another possible implementation manner of the second embodiment of the present application, the collision identifying device may further include the following modules:
The second acquisition module is used for acquiring a plurality of groups of reference collision data and collision type labels corresponding to each group of reference collision data.
And the fourth processing module is used for training the initial convolutional neural network model by utilizing a plurality of groups of reference collision data and collision type labels corresponding to each group of reference collision data so as to generate a collision recognition model.
In another possible implementation manner of the second embodiment of the present application, the collision identifying device may further include the following modules:
And the third acquisition module is used for acquiring a manual verification result corresponding to the collision data to be identified.
And the fifth processing module is used for optimizing the collision recognition characteristic algorithm and the collision recognition model according to the collision recognition result and the manual verification result of the target vehicle.
In still another possible implementation manner of the second embodiment of the present application, the collision categories include a real collision category and a suspected collision category, wherein the real collision category includes any one or more of rear-end collision, front collision, side collision, and rollover, and the suspected collision category includes any one or more of sudden braking, passing through a deceleration strip, closing a vehicle door, washing a vehicle, raining, hail dropping, abnormal flapping, and a wiper.
The collision recognition device provided by the embodiment of the present application may be applied to the foregoing method embodiment, and details of the description of the foregoing method embodiment are referred to in the foregoing description, and are not repeated herein.
Fig. 4 is a schematic structural diagram of a terminal device according to a third embodiment of the present application. As shown in fig. 4, the terminal device 400 of this embodiment includes: at least one processor 410 (only one processor is shown in fig. 4), a memory 420, and a computer program 421 stored in the memory 420 and executable on the at least one processor 410, the steps in the above-described collision recognition method embodiments being implemented when the processor 410 executes the computer program 421.
The terminal device 400 may be a computing device such as a desktop computer, a notebook computer, a palm computer, and a cloud server. The terminal device may include, but is not limited to, a processor 410, a memory 420. It will be appreciated by those skilled in the art that fig. 4 is merely an example of a terminal device 400 and is not limiting of the terminal device 400, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 410 may be a Central Processing Unit (CPU); the processor 410 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 420 may in some embodiments be an internal storage unit of the terminal device 400, such as a hard disk or a memory of the terminal device 400. The memory 420 may also be an external storage device of the terminal device 400 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card provided on the terminal device 400. Further, the memory 420 may also include both an internal storage unit and an external storage device of the terminal device 400. The memory 420 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 420 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment by instructing related hardware through a computer program, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of each of the method embodiments described above may be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The present application may also be implemented by a computer program product for implementing all or part of the steps of the above embodiments of the method, when the computer program product is run on a terminal device, for enabling the terminal device to execute the steps of the above embodiments of the method.
The above embodiments are only for illustrating the technical solution of the present application, and are not limited thereto. Although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (12)

1. A collision recognition method, comprising:
acquiring collision data to be identified of a target vehicle;
Identifying the collision data to be identified by using a preset collision identification characteristic algorithm, and determining a first similarity between the collision data to be identified and a preset collision category;
Inputting the collision data to be identified into a preset collision identification model to obtain a second similarity between the collision data to be identified and the collision category output by the collision identification model;
and determining a collision recognition result of the target vehicle according to the first similarity and the second similarity.
2. The method of claim 1, wherein the number of collision categories is a plurality, the collision recognition result includes a target category and a target similarity corresponding to the target category, and the determining the collision recognition result of the target vehicle according to the first similarity and the second similarity includes:
Fusing the first similarity and the second similarity corresponding to each collision category to determine a reference similarity corresponding to each collision category;
Determining the target category according to the reference similarity corresponding to each collision category;
And determining the reference similarity corresponding to the target category as the target similarity.
3. The method of claim 2, wherein said fusing the first and second similarities for each of the collision categories to determine a reference similarity for each of the collision categories comprises:
And determining the reference similarity corresponding to each collision category according to the preset algorithm weight corresponding to the collision recognition feature algorithm, the preset model weight corresponding to the collision recognition model, the first similarity and the second similarity corresponding to each collision category.
4. The method of claim 2, wherein the collision data to be identified is obtained by using a plurality of sensors, each sensor corresponds to a preset sensor weight, and the fusing the first similarity and the second similarity corresponding to each collision category to determine the reference similarity corresponding to each collision category includes:
And determining the reference similarity corresponding to each collision category according to the sensor corresponding to each collision category, the preset sensor weight corresponding to each sensor, the first similarity corresponding to each collision category and the second similarity corresponding to each collision category.
5. The collision recognition method as claimed in claim 2, wherein said determining the target class based on the reference similarity corresponding to each of the collision classes comprises:
And when the reference similarity corresponding to the collision categories is the same, determining the target category according to the preset priority sequence corresponding to each collision category.
6. The collision recognition method according to claim 1, wherein the acquiring collision data to be recognized of the target vehicle includes:
Acquiring vehicle operation data of a target vehicle;
And carrying out abnormal data slicing processing on the vehicle operation data to extract the collision data to be identified in the vehicle operation data.
7. The collision recognition method of claim 1, wherein the step of generating the collision recognition feature algorithm comprises:
Acquiring a plurality of groups of reference collision data and collision type labels corresponding to each group of reference collision data;
Determining data characteristics corresponding to each collision type according to a plurality of groups of the reference collision data and collision type labels corresponding to each group of the reference collision data;
And carrying out iterative parameter adjustment on the initial feature algorithm by utilizing the data features corresponding to each collision category until the initial feature algorithm converges to obtain the collision recognition feature algorithm.
8. The collision recognition method of claim 1, wherein the step of generating the collision recognition model comprises:
Acquiring a plurality of groups of reference collision data and collision type labels corresponding to each group of reference collision data;
and training an initial convolutional neural network model by utilizing a plurality of groups of the reference collision data and collision type labels corresponding to each group of the reference collision data so as to generate the collision recognition model.
9. The collision recognition method according to any one of claims 1 to 8, wherein the method further comprises:
acquiring a manual verification result corresponding to the collision data to be identified;
and optimizing the collision recognition feature algorithm and the collision recognition model according to the collision recognition result of the target vehicle and the manual verification result.
10. A collision recognition apparatus, characterized by comprising:
a data acquisition module configured to acquire collision data to be recognized of a target vehicle;
a first processing module configured to recognize the collision data to be recognized using a preset collision recognition feature algorithm and determine a first similarity between the collision data to be recognized and a preset collision category;
a second processing module configured to input the collision data to be recognized into a preset collision recognition model to obtain a second similarity, output by the collision recognition model, between the collision data to be recognized and the collision category; and
a collision recognition module configured to determine a collision recognition result of the target vehicle according to the first similarity and the second similarity.
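The final module fuses the two similarity scores. The patent leaves the fusion rule open; a simple and common choice is a per-category weighted sum followed by an argmax, sketched below (weights and names are assumptions).

```python
def fuse_similarities(sim_algo, sim_model, w_algo=0.5, w_model=0.5):
    """Combine per-category similarities from the feature algorithm (first
    similarity) and the model (second similarity); the category with the
    highest fused score is taken as the collision recognition result."""
    fused = {c: w_algo * sim_algo[c] + w_model * sim_model[c] for c in sim_algo}
    return max(fused, key=fused.get), fused

result, fused = fuse_similarities(
    sim_algo={"rear": 0.7, "side": 0.2, "none": 0.1},
    sim_model={"rear": 0.6, "side": 0.3, "none": 0.1})
```

Because the two recognizers tend to fail in different scenes, even this simple linear fusion can outperform either score alone, which is the robustness argument made in the abstract.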
11. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 9.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 9.
CN202410074506.0A 2024-01-18 2024-01-18 Collision identification method, device, terminal equipment and storage medium Pending CN118015825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410074506.0A CN118015825A (en) 2024-01-18 2024-01-18 Collision identification method, device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410074506.0A CN118015825A (en) 2024-01-18 2024-01-18 Collision identification method, device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118015825A 2024-05-10

Family

ID=90942086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410074506.0A Pending CN118015825A (en) 2024-01-18 2024-01-18 Collision identification method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118015825A (en)

Similar Documents

Publication Publication Date Title
CN111062240B (en) Monitoring method and device for automobile driving safety, computer equipment and storage medium
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
US11599563B2 (en) Programmatically identifying a personality of an autonomous vehicle
CN111452799B (en) Driving behavior evaluation method and system
CN107564280B (en) Driving behavior data acquisition and analysis system and method based on environmental perception
Matousek et al. Detecting anomalous driving behavior using neural networks
CN111383362B (en) Safety monitoring method and device
CN111951560B (en) Service anomaly detection method, method for training service anomaly detection model and method for training acoustic model
Martinelli et al. Cluster analysis for driver aggressiveness identification.
CN114694449A (en) Method and device for generating vehicle traffic scene, training method and device
CN110620760A (en) FlexRay bus fusion intrusion detection method and detection device for SVM (support vector machine) and Bayesian network
CN118015825A (en) Collision identification method, device, terminal equipment and storage medium
Peng et al. A Method for Vehicle Collision Risk Assessment through Inferring Driver's Braking Actions in Near-Crash Situations
Kubin et al. Deep crash detection from vehicular sensor data with multimodal self-supervision
CN110660217B (en) Method and device for detecting information security
CN114119256A (en) UBI dangerous chemical vehicle driving behavior acquisition and analysis system and premium discount method
CN111325869B (en) Vehicle fatigue driving accurate judgment method, terminal device and storage medium
CN113658426A (en) Vehicle accident identification method and device
CN116894225B (en) Driving behavior abnormality analysis method, device, equipment and medium thereof
Mijic et al. Autonomous driving solution based on traffic sign detection
KR20200075918A (en) Vehicle and control method thereof
CN114954489A (en) Method and device for identifying driving behaviors and styles of automobile
Sarker et al. DeepDMC: A traffic context independent deep driving maneuver classification framework
CN114155476B (en) AEB (automatic Emergency bank) accident scene identification method, device, equipment and medium
Milardo et al. An unsupervised approach for driving behavior analysis of professional truck drivers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination