CN111311010B - Vehicle risk prediction method, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN111311010B
CN111311010B (application number CN202010110995.2A)
Authority
CN
China
Prior art keywords
vehicle
feature
optical flow
calculating
flow information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010110995.2A
Other languages
Chinese (zh)
Other versions
CN111311010A (en
Inventor
张炯文
汪海祥
陈真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202010110995.2A priority Critical patent/CN111311010B/en
Publication of CN111311010A publication Critical patent/CN111311010A/en
Application granted granted Critical
Publication of CN111311010B publication Critical patent/CN111311010B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40Business processes related to the transportation industry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses a vehicle risk prediction method comprising the following steps: extracting vehicle feature points from a vehicle driving picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set; calculating optical flow information of the vehicle feature set to obtain an optical flow information set; extracting feature trajectories of the vehicle from the optical flow information set and collecting them into a feature two-dimensional trajectory set; performing spatial projection on the feature two-dimensional trajectory set to obtain a feature three-dimensional trajectory set; constructing a clustering matrix from the feature three-dimensional trajectory set; performing inter-class merging on the clustering matrix to obtain a combined matrix; calculating the values in the combined matrix to obtain a running speed; and calculating a vehicle risk coefficient from the running speed. The invention also provides a vehicle risk prediction device, an electronic device and a computer-readable storage medium. The invention can meet the requirement of real-time vehicle detection and improve the judgment of vehicle risk.

Description

Vehicle risk prediction method, device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a vehicle risk prediction method, a device, an electronic apparatus, and a readable storage medium.
Background
In the past, motor vehicles, as the main object of traffic management, were supervised chiefly by traffic managers. However, with the rapid growth in the number of motor vehicles, the limited police resources of traffic departments cannot provide strict, all-round supervision, so traffic violations of all kinds emerge endlessly, bringing great hidden dangers to the safety of people's travel.
Most existing vehicles are equipped with positioning devices having a driving-recording function. Driving behaviour is calculated from vehicle GPS (Global Positioning System) data combined with monitoring-camera data, for example to evaluate the running speed and acceleration of the vehicle; alternatively, driving risk is evaluated subjectively from historical insurance data alone. In the process of calculating driving behaviour by matching vehicle GPS data with monitoring cameras, high-definition, high-precision vehicle detection and tracking technology is lacking: detection and tracking are accurate only when the video is high-definition, the lighting is good and the vehicle is not occluded, so the requirements for judging vehicle risk are difficult to meet.
Disclosure of Invention
The invention provides a vehicle risk prediction method, a device, electronic equipment and a computer readable storage medium, and mainly aims to provide a scheme for meeting the requirement of real-time detection of a vehicle and improving the judgment of vehicle risk.
In order to achieve the above object, the present invention provides a vehicle risk prediction method, including:
extracting vehicle feature points from a vehicle driving picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set;
calculating optical flow information of the vehicle feature set to obtain an optical flow information set, extracting a feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set;
performing space projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set;
constructing a feature similarity matrix according to the feature three-dimensional track set, carrying out sparsification treatment on the feature similarity matrix to obtain a sparse block matrix, and carrying out sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix;
performing inter-class combination on the clustering matrix to obtain a combination matrix, and calculating the numerical value in the combination matrix to obtain the running speed;
And calculating according to the running speed to obtain a vehicle risk coefficient.
Optionally, the extracting vehicle feature points from the vehicle driving picture set based on the pre-constructed feature extraction method to obtain the vehicle feature set includes:
detecting vehicle feature points in the vehicle running picture set through a corner detection algorithm to obtain an original vehicle feature set;
calculating descriptors of each original vehicle feature in the original vehicle feature set through a feature point description algorithm;
and calculating the similarity between the descriptors of each original vehicle feature, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set.
Optionally, the calculating the optical flow information of the vehicle feature set to obtain an optical flow information set includes:
performing up-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture;
performing optical flow calculation on the top layer image of the image pyramid to obtain first optical flow information;
the first optical flow information is used as an initial value and is transmitted to a lower layer image adjacent to the top layer image, and optical flow calculation is carried out on the lower layer image to obtain second optical flow information;
and collecting all optical flow information until the optical flow information is transmitted to the bottommost image of the image pyramid to obtain an optical flow information set.
Optionally, the feature two-dimensional trajectory set may be expressed as T = { t_i | i = 1, 2, …, N }, with t_i = { (x_i^n, y_i^n) | n = 1, 2, …, n_i },
where N is the total number of trajectories in the feature two-dimensional trajectory set, i is the index of a trajectory of vehicle feature points in the feature two-dimensional trajectory set, n_i is the number of vehicle feature points on the i-th trajectory, and (x_i^n, y_i^n) are the pixel coordinates of the n-th vehicle feature point of the i-th trajectory.
Optionally, the spatially projecting the feature two-dimensional trajectory set to obtain a feature three-dimensional trajectory set includes:
a coordinate system is pre-built, each vehicle characteristic point in the vehicle characteristic set is projected to the coordinate system to obtain a characteristic coordinate set, and the back projection speed of the characteristic coordinate set is calculated according to the pre-built vector addition inverse operation;
calculating the relative height of the characteristic coordinate set to obtain a relative height set;
according to a pre-constructed perspective projection matrix transformation equation, projecting the pixel coordinates into a region parallel to the transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height between the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set;
and collecting the three-dimensional coordinate information set, the relative altitude set and the back projection speed into the characteristic three-dimensional track set.
In order to solve the above-mentioned problems, the present invention also provides a vehicle risk prediction apparatus, the apparatus comprising:
the feature extraction module is used for extracting vehicle feature points from the vehicle running picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set;
the two-dimensional track calculation module is used for calculating the optical flow information of the vehicle feature set to obtain an optical flow information set, extracting the feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set;
the three-dimensional track calculation module is used for carrying out space projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set;
the vehicle risk factor calculation module is used for constructing a feature similarity matrix according to the feature three-dimensional track set, carrying out sparsification treatment on the feature similarity matrix to obtain a sparse block matrix, carrying out sparse spectrum clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a cluster matrix, carrying out inter-class combination on the cluster matrix to obtain a combined matrix, calculating the numerical value in the combined matrix to obtain a running speed, and calculating the vehicle risk factor according to the running speed.
Optionally, the extracting vehicle feature points from the vehicle driving picture set based on the pre-constructed feature extraction method to obtain the vehicle feature set includes:
detecting vehicle feature points in the vehicle running picture set through a corner detection algorithm to obtain an original vehicle feature set;
calculating descriptors of each original vehicle feature in the original vehicle feature set through a feature point description algorithm;
and calculating the similarity between the descriptors of each original vehicle feature, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set.
Optionally, the calculating the optical flow information of the vehicle feature set to obtain an optical flow information set includes:
performing up-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture;
performing optical flow calculation on the top layer image of the image pyramid to obtain first optical flow information;
the first optical flow information is used as an initial value and is transmitted to a lower layer image adjacent to the top layer image, and optical flow calculation is carried out on the lower layer image to obtain second optical flow information;
and collecting all optical flow information until the optical flow information is transmitted to the bottommost image of the image pyramid to obtain an optical flow information set.
Optionally, the feature two-dimensional trajectory set may be expressed as T = { t_i | i = 1, 2, …, N }, with t_i = { (x_i^n, y_i^n) | n = 1, 2, …, n_i },
where N is the total number of trajectories in the feature two-dimensional trajectory set, i is the index of a trajectory of vehicle feature points in the feature two-dimensional trajectory set, n_i is the number of vehicle feature points on the i-th trajectory, and (x_i^n, y_i^n) are the pixel coordinates of the n-th vehicle feature point of the i-th trajectory.
Optionally, the spatially projecting the feature two-dimensional trajectory set to obtain a feature three-dimensional trajectory set includes:
a coordinate system is pre-built, each vehicle characteristic point in the vehicle characteristic set is projected to the coordinate system to obtain a characteristic coordinate set, and the back projection speed of the characteristic coordinate set is calculated according to the pre-built vector addition inverse operation;
calculating the relative height of the characteristic coordinate set to obtain a relative height set;
according to a pre-constructed perspective projection matrix transformation equation, projecting the pixel coordinates into a region parallel to the transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height between the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set;
collecting the three-dimensional coordinate information set, the relative height set and the back projection speed into the characteristic three-dimensional track set;
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one instruction; and
And a processor executing instructions stored in the memory to implement the vehicle risk prediction method of any one of the above.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one instruction that is executed by a processor in an electronic device to implement the vehicle risk prediction method of any one of the above.
According to the embodiments of the invention, the trajectories of vehicle feature points are acquired, feature three-dimensional trajectories are constructed from the feature two-dimensional trajectories, and the trajectory clusters are merged between classes, thereby providing real-time, effective and high-precision traffic parameters; the vehicle is tracked, an accurate running speed is obtained, and the vehicle risk coefficient is then calculated. In the trajectory-feature construction stage, the three-dimensional parameters of the trajectories are reconstructed, which, compared with other detection methods, reduces the influence of illumination jitter and vehicle occlusion on the clustering result. In the trajectory-clustering stage, the trajectory similarity matrix is sparsified, which reduces the calculation time and meets the real-time detection requirement, improving the judgment of vehicle risk. By combining artificial intelligence technology to process the imaging data rapidly, the efficiency of vehicle risk prediction is greatly improved.
Drawings
FIG. 1 is a flowchart of a vehicle risk prediction method according to an embodiment of the present invention;
FIG. 2 is a detailed implementation flow chart of the step S1 in FIG. 1;
FIG. 3 is a detailed implementation flowchart of the step S2 in FIG. 1;
FIG. 4 is a detailed implementation flowchart of the step S4 in FIG. 1;
FIG. 5 is a detailed implementation flowchart of the step S5 in FIG. 1;
FIG. 6 is a detailed implementation flowchart of the step S6 in FIG. 1;
FIG. 7 is a detailed implementation flowchart of the step S7 in FIG. 1;
FIG. 8 is a schematic block diagram of a risk prediction apparatus for a vehicle according to an embodiment of the present invention;
fig. 9 is a schematic diagram of an internal structure of an electronic device of a vehicle risk prediction method according to an embodiment of the present invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a vehicle risk prediction method. Referring to fig. 1, a flow chart of a vehicle risk prediction method according to an embodiment of the invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In the present embodiment, the vehicle risk prediction method includes:
s1, receiving an original vehicle running video, and preprocessing the original vehicle running video to obtain a vehicle running picture set.
In detail, referring to fig. 2, the S1 includes:
s11, sequentially performing low-pass filtering processing and noise removing processing on the original vehicle driving video to obtain a standard vehicle driving video;
s12, decomposing the standard vehicle driving video to obtain continuous predefined number of frame images of the standard vehicle driving video;
and S13, integrating the continuous predefined number of frame images to obtain the vehicle running picture set.
In the preferred embodiment of the present invention, the noise removal, video decomposition and integration can all be performed using currently disclosed techniques; for example, the noise removal can be performed using median filtering.
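For illustration only, the S11–S13 pipeline above (median-filter denoising, then taking a predefined number of consecutive frames) might be sketched as follows; the function names are hypothetical and a pure-NumPy median filter stands in for any disclosed denoising technique:

```python
import numpy as np

def median_filter(img, k=3):
    """Simple median filter as a stand-in for the noise-removal step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def video_to_picture_set(frames, num_frames):
    """Decompose a 'video' (a sequence of frames) into the first
    `num_frames` consecutive denoised frames (S12-S13)."""
    return [median_filter(f) for f in frames[:num_frames]]
```

A single bright impulse (salt noise) in an otherwise flat frame is removed by the 3x3 median window, which is why median filtering is a common choice for this kind of preprocessing.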
For example, if Xiao Li is a truck driver, surveillance video of the truck driven by Xiao Li in a certain city is acquired as the original vehicle driving video; after processing through the above steps, the vehicle driving picture set of the truck driven by Xiao Li is obtained.
S2, extracting vehicle feature points from the vehicle driving picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set.
In detail, referring to fig. 3, the S2 includes:
s21, detecting vehicle feature points in the vehicle running picture set through a corner detection algorithm to obtain an original vehicle feature set;
s22, calculating descriptors of each original vehicle feature in the original vehicle feature set through a feature point description algorithm;
s23, calculating the similarity between descriptors of each original vehicle feature, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set.
In detail, the corner detection algorithm is the FAST algorithm. Its main principle is that if a sufficient number of pixels around a point differ from that point's pixel value, the point is considered a corner; in this scheme, the corners are the vehicle feature points.
The feature point description algorithm is the BRIEF algorithm. Its main idea is to randomly select a number of point pairs near the feature point and combine the comparisons of the gray values of each pair into a binary string, which serves as the feature descriptor of the feature point. This discards the traditional method of describing feature points with a regional gray histogram, greatly speeding up descriptor construction and greatly reducing feature-matching time, so it is a very fast algorithm that can be applied in systems with real-time requirements.
Further, the similarity between the descriptors of each pair of original vehicle features may be calculated using a squared-difference formula.
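The descriptor and cleaning steps (S22–S23) can be sketched as below. This is not the patent's implementation: the FAST detection step is omitted, the random sampling pattern is a made-up stand-in for BRIEF's, and the squared-difference cleaning threshold is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical BRIEF-style pattern: 32 random point pairs in a 9x9 patch.
PAIRS = rng.integers(-4, 5, size=(32, 2, 2))

def brief_descriptor(img, pt):
    """Binary descriptor: compare gray values of random point pairs near pt."""
    y, x = pt
    bits = [1 if img[y + dy1, x + dx1] < img[y + dy2, x + dx2] else 0
            for (dy1, dx1), (dy2, dx2) in PAIRS]
    return np.array(bits, dtype=np.uint8)

def squared_diff(d1, d2):
    """Squared-difference similarity between two binary descriptors."""
    return int(np.sum((d1.astype(int) - d2.astype(int)) ** 2))

def clean_feature_set(points, descs, min_dist=4):
    """Keep a point only if its descriptor differs enough from the
    descriptors already kept (near-duplicate removal, step S23)."""
    kept_pts, kept_descs = [], []
    for p, d in zip(points, descs):
        if all(squared_diff(d, kd) >= min_dist for kd in kept_descs):
            kept_pts.append(p)
            kept_descs.append(d)
    return kept_pts, kept_descs
```

Two detections of the same image location produce identical descriptors, so the cleaning step collapses them into one feature.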
S3, calculating optical flow information of the vehicle feature set to obtain an optical flow information set, extracting a feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set.
Preferably, the calculating the optical flow information of the vehicle feature set to obtain an optical flow information set includes: performing up-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture; performing optical flow calculation on the top layer image of the image pyramid to obtain first optical flow information; the first optical flow information is used as an initial value and is transmitted to a lower layer image adjacent to the top layer image, and optical flow calculation is carried out on the lower layer image to obtain second optical flow information; and collecting all optical flow information until the optical flow information is transmitted to the bottommost image of the image pyramid to obtain an optical flow information set.
In detail, the up-sampling process is a currently disclosed method: new elements are inserted between pixel points using a suitable interpolation algorithm on the basis of the original image pixels, and the image pyramid can be obtained accordingly.
The optical flow calculation can be performed using an optical flow method disclosed in the prior art.
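A hedged, pure-NumPy sketch of the coarse-to-fine pyramidal scheme described above follows. A single global Lucas–Kanade translation per level stands in for the dense per-point optical flow of the patent, and all function names are mine:

```python
import numpy as np

def downsample(img):
    """Halve the image by 2x2 mean pooling (one pyramid step)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def lk_flow(i0, i1):
    """One Lucas-Kanade step estimating a single global translation
    (the real method computes flow per feature point)."""
    ix = np.gradient(i0, axis=1)   # horizontal gradient
    iy = np.gradient(i0, axis=0)   # vertical gradient
    it = i1 - i0                   # temporal difference
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(A, b)   # (dx, dy)

def pyramid_flow(i0, i1, levels=3):
    """Coarse-to-fine: the flow estimated at the top (coarsest) level is
    passed down as the initial value for the next level, as in the text."""
    pyrs = [(i0, i1)]
    for _ in range(levels - 1):
        a, b = pyrs[-1]
        pyrs.append((downsample(a), downsample(b)))
    flow = np.zeros(2)
    for a, b in reversed(pyrs):               # coarsest -> finest
        base = np.rint(2 * flow).astype(int)  # upscale previous estimate
        # warp the second image back by the integer part of the estimate
        b_w = np.roll(b, (-base[1], -base[0]), axis=(0, 1))
        flow = base + lk_flow(a, b_w)
    return flow
```

On a smooth pattern translated by a few pixels, the coarse level captures the large motion and each finer level only refines a sub-pixel residual, which is the point of propagating the first optical flow information downward as an initial value.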
Further, the feature two-dimensional trajectory set may be expressed as T = { t_i | i = 1, 2, …, N }, with t_i = { (x_i^n, y_i^n) | n = 1, 2, …, n_i },
where N is the total number of trajectories in the feature two-dimensional trajectory set, i is the index of a trajectory of vehicle feature points in the feature two-dimensional trajectory set, n_i is the number of vehicle feature points on the i-th trajectory, and (x_i^n, y_i^n) are the pixel coordinates of the n-th vehicle feature point of the i-th trajectory.
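As a purely illustrative rendering of this nested structure (the class and field names are my own, not from the patent), the trajectory set can be held as:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Trajectory:
    """The i-th feature trajectory: the ordered pixel coordinates
    (x_i^n, y_i^n) of one tracked vehicle feature point."""
    points: List[Tuple[float, float]] = field(default_factory=list)

    @property
    def n_points(self) -> int:
        # n_i: the number of vehicle feature points on this trajectory
        return len(self.points)

# The feature two-dimensional trajectory set (here N = 2 trajectories).
trajectory_set: List[Trajectory] = [
    Trajectory([(10.0, 20.0), (12.5, 20.8), (15.1, 21.5)]),
    Trajectory([(40.0, 55.0), (41.9, 55.2)]),
]
```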
And S4, performing space projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set.
Preferably, referring to fig. 4, the step S4 includes:
s41, a coordinate system is pre-built, each vehicle characteristic point in the vehicle characteristic set is projected to the coordinate system to obtain a characteristic coordinate set, and the back projection speed of the characteristic coordinate set is calculated according to the pre-built vector addition inverse operation;
s42, calculating the relative height of the characteristic coordinate set to obtain a relative height set;
s43, according to a pre-constructed perspective projection matrix transformation equation, projecting the pixel coordinates into a region parallel to the transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height between the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set;
S44, collecting the three-dimensional coordinate information set, the relative altitude set and the back projection speed into the characteristic three-dimensional track set.
In detail, the pixel coordinates (x_i^n, y_i^n) of the n-th vehicle feature point of the i-th trajectory are reprojected, through the perspective projection matrix transformation equation, onto a plane of height h_r parallel to the ground, and the three-dimensional coordinate information of the vehicle feature point is updated accordingly.
Further, the formula for calculating the relative height of the vehicle feature points is as follows:
where v is the average running speed of the vehicle, which can be obtained approximately from the speeds of all vehicles on the current road section, H_c is the height of the monitoring camera, and H_i and v_i(2D) are respectively the relative height and the pixel speed of the i-th vehicle feature point trajectory.
S5, constructing a feature similarity matrix according to the feature three-dimensional track set, carrying out sparsification treatment on the feature similarity matrix to obtain a sparse block matrix, and carrying out sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix.
In detail, referring to fig. 5, the constructing a feature similarity matrix according to the feature three-dimensional trajectory set includes:
S51, calculating the speed V = {v_1, v_2, …, v_N} of each trajectory on the image from the trajectory points of the feature two-dimensional trajectory set, and selecting the trajectory corresponding to the minimum trajectory speed v_p of the feature two-dimensional trajectory set as the reference trajectory t_p;
S52, calculating the relative height h_i between each trajectory t_i and the reference trajectory t_p using the formula for the relative height of vehicle feature points;
S53, constructing the attribute feature vector F_i = (v_i(3D), h_r) corresponding to each trajectory;
S54, calculating the similarity W_ij between different trajectories and constructing the feature similarity matrix W = [W_ij]_(N×N),
where i = 1, 2, …, N; j = 1, 2, …, N; and d(F_i, F_j)^2 is the Euclidean distance between the attribute feature vectors of any two trajectories in the feature two-dimensional trajectory set.
Preferably, the step of performing sparsification processing on the feature similarity matrix W to obtain a sparse block matrix includes:
S55, if the relative height h_i between any trajectory t_i and the reference trajectory t_p is greater than a predefined value, setting the similarity W_ij between the two trajectories to 0;
S56, applying elementary row and column permutations to the feature similarity matrix W and then performing local adjustment to obtain the block diagonal matrix W.
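The construction and sparsification above can be sketched as follows. Both the Gaussian-kernel similarity and the height-difference test are assumptions of this sketch (one plausible reading of S55), since the patent does not reproduce the exact similarity formula:

```python
import numpy as np

def similarity_matrix(features, sigma=1.0):
    """W_ij = exp(-d(F_i, F_j)^2 / (2 sigma^2)): a common Gaussian-kernel
    choice over the Euclidean distance between attribute feature vectors."""
    F = np.asarray(features, dtype=float)
    d2 = np.sum((F[:, None, :] - F[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def sparsify(W, rel_heights, max_height):
    """Zero W_ij when the relative heights of the two trajectories differ
    by more than a predefined value (an assumed reading of step S55)."""
    W = W.copy()
    n = len(W)
    for i in range(n):
        for j in range(n):
            if abs(rel_heights[i] - rel_heights[j]) > max_height:
                W[i, j] = W[j, i] = 0.0
    return W
```

After zeroing, trajectories at very different heights contribute no similarity, so a row/column permutation can gather the surviving entries into diagonal blocks.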
Further, the performing sparse spectral clustering on the feature three-dimensional trajectory set according to the sparse block matrix to obtain a clustering matrix includes:
S57, calculating the normalized Laplacian matrix from the sparse block matrix W;
S58, calculating the eigenvalues {λ_i, i = 1, 2, …, N} of the Laplacian matrix and their corresponding eigenvectors {H_i, i = 1, 2, …, N};
S59, calculating the indication feature vector Q_i corresponding to each H_i, and performing K-means clustering on the eigenvectors corresponding to the first K minimum eigenvalues to obtain the clustering matrix.
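S57–S59 can be illustrated with a minimal normalized-spectral-clustering sketch in pure NumPy; the indication-vector step is simplified here to k-means on the rows of the smallest-eigenvalue eigenvectors, and the farthest-point initialization is my own choice, not the patent's:

```python
import numpy as np

def spectral_cluster(W, k, iters=20):
    """Sketch of S57-S59: normalized Laplacian L = I - D^-1/2 W D^-1/2,
    eigenvectors of the k smallest eigenvalues, then k-means on the rows."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)          # ascending eigenvalues
    X = vecs[:, :k]                         # spectral embedding (S58-S59)
    # farthest-point seeding, then plain k-means on the embedded rows
    centers = [X[0]]
    for _ in range(1, k):
        dists = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(dists)])
    centers = np.array(centers)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

On a block-diagonal similarity matrix, which is exactly what the sparsification step produces, the embedding collapses each block to one point, so the clusters fall out immediately.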
In order to better solve the problem of vehicle detection in a complex traffic scene, the characteristic two-dimensional track set is projected to a world coordinate system through space projection, and through analysis of the three-dimensional track of the vehicle, the height, speed and three-dimensional position information are selected as the three-dimensional characteristics of the track of the vehicle characteristic points and extracted, so that the characteristic three-dimensional track set is obtained.
S6, carrying out inter-class combination on the clustering matrix to obtain a combination matrix, and calculating the numerical value in the combination matrix to obtain the running speed.
In detail, referring to fig. 6, the step S6 includes:
S61, letting categories C_i and C_j be two categories in the clustering result of the feature three-dimensional track set, containing n_i and n_j tracks respectively; calculating the back projection speed of each track, finding the minimum back projection speeds v_i and v_j of the two categories as the representative speeds of C_i and C_j, and taking the track of the vehicle feature point with the minimum back projection speed as the reference track of its category;
S62, comparing the representative speeds v_i and v_j of the two categories, and taking the smaller one as the true movement speed of the vehicle, namely the reference speed v_T = min(v_i, v_j);
S63, setting the track point height in the reference category to zero and restoring it to the world coordinate system to obtain the three-dimensional coordinate information (X_i, Y_i, 0); in combination with the reference speed v_T, calculating the relative height h_r of the tracks in the category to be merged and restoring them to the world coordinate system to obtain the three-dimensional coordinate information (X_m, Y_m, h_r);
S64, calculating the absolute distance between the track of the vehicle feature point in the category to be merged and the reference feature track, namely the absolute distance between the three-dimensional position information of the representative points of the two tracks, denoted DIS(ΔX, ΔY, ΔZ), where:
ΔX = |X_m - X_i|
ΔY = |Y_m - Y_i|
ΔZ = |Z_m - 0|;
S65, pre-judging, according to ΔZ, the type of three-dimensional vehicle model the track may belong to, and judging whether ΔX and ΔY both fall within that three-dimensional model; if ΔX and ΔY satisfy the three-dimensional model given by ΔZ, adding 1 to the parameter n_c recording the number of feature three-dimensional tracks in the category to be merged that pass the check, and otherwise skipping the track; after all n_j feature point tracks in the category to be merged have been processed, if the ratio of n_c to n_j satisfies the predefined merging condition, merging the two categories to obtain the merged matrix; otherwise, outputting the clustering matrix directly without merging;
And S66, calculating the numerical value in the merging matrix to obtain the running speed.
Through the initial clustering, vehicles can be accurately segmented in most cases. However, due to errors in the constructed feature similarity matrix, the tracks of the same vehicle may be divided into several categories, particularly the feature three-dimensional tracks from large vehicles. The clustering results of the sparse spectral clustering therefore need to be merged using prior knowledge such as rigid motion rules and vehicle models.
The running speed is then obtained by calculating the values in the merged matrix, for example by calculating the diagonal product, the rank value, and the like of the merged matrix.
And S7, calculating a vehicle risk coefficient according to the running speed.
In detail, referring to fig. 7, the step S7 includes:
S71, outputting the vehicle risk coefficient "high" when the running speed is less than a predefined minimum speed or greater than a predefined maximum speed;
S72, outputting the vehicle risk coefficient "normal" when the running speed is greater than the predefined minimum speed and less than the predefined maximum speed.
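Steps S71 and S72 amount to a simple threshold check; behaviour exactly at the boundary speeds is not specified in the text and is treated as "high" in this sketch:

```python
def vehicle_risk_factor(speed, v_min, v_max):
    """Map a running speed to the vehicle risk coefficient.

    'high' outside the predefined (v_min, v_max) band, 'normal' strictly
    inside it; the boundary speeds themselves are an assumed edge case."""
    if speed < v_min or speed > v_max:
        return "high"
    if v_min < speed < v_max:
        return "normal"
    return "high"  # speed exactly at a boundary: assumption, not specified
```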
As shown in fig. 8, a functional block diagram of the vehicle risk prediction apparatus according to the present invention is shown.
The vehicle risk prediction apparatus 100 according to the present invention may be mounted in an electronic device. Depending on the functions implemented, the vehicle risk prediction apparatus may include a feature extraction module 101, a two-dimensional trajectory calculation module 102, a three-dimensional trajectory calculation module 103, and a vehicle risk factor calculation module 104. The module of the present invention may also be referred to as a unit, meaning a series of computer program segments capable of being executed by the processor of the electronic device and of performing fixed functions, stored in the memory of the electronic device.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the feature extraction module 101 is configured to extract a vehicle feature point from a vehicle running picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set;
the two-dimensional track calculation module 102 is configured to calculate optical flow information of the vehicle feature set to obtain an optical flow information set, extract a feature track of the vehicle from the optical flow information set, and collect the feature track to obtain a feature two-dimensional track set;
the three-dimensional track calculation module 103 is configured to spatially project the feature two-dimensional track set to obtain a feature three-dimensional track set;
the vehicle risk coefficient calculation module 104 is configured to construct a feature similarity matrix according to the feature three-dimensional track set, perform sparsification processing on the feature similarity matrix to obtain a sparse block matrix, perform sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a cluster matrix, perform inter-class combination on the cluster matrix to obtain a combination matrix, calculate a value in the combination matrix to obtain a running speed, and calculate a vehicle risk coefficient according to the running speed.
In detail, the specific implementation steps of each module of the vehicle risk prediction device are as follows:
The feature extraction module 101 receives an original vehicle driving video, performs preprocessing on the original vehicle driving video to obtain a vehicle driving image set, and extracts vehicle feature points from the vehicle driving image set based on a pre-constructed feature extraction method to obtain a vehicle feature set.
In detail, the receiving the original vehicle driving video, and preprocessing the original vehicle driving video to obtain a vehicle driving picture set includes:
sequentially performing low-pass filtering processing and noise removing processing on the original vehicle running video to obtain a standard vehicle running video;
decomposing the standard vehicle running video to obtain continuous predefined number of frame images of the standard vehicle running video;
and integrating the continuous predefined number of frame images to obtain the vehicle running picture set.
In the preferred embodiment of the present invention, the noise removal, video decomposition and integration can all be performed using the presently disclosed techniques, e.g., the noise removal can be performed using median filtering.
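As noted, median filtering is one disclosed choice for the noise removal; a minimal pure-numpy sketch follows (in a real pipeline an existing routine such as cv2.medianBlur would typically be used instead):

```python
import numpy as np

def median_filter(frame, k=3):
    """Remove salt-and-pepper style noise from a single grayscale frame by
    replacing each pixel with the median of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")   # replicate border pixels
    out = np.empty_like(frame)
    H, W = frame.shape
    for y in range(H):
        for x in range(W):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

A single outlier pixel surrounded by normal values is fully suppressed, since the window median ignores it.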
For example, if Xiao Li is a truck driver, the surveillance video of the truck driven by Xiao Li in a certain city is acquired as the original vehicle running video, and the original vehicle running video is processed through the above steps to obtain the vehicle running picture set of the truck driven by Xiao Li.
In detail, the extracting of vehicle feature points from the vehicle running picture set based on the pre-constructed feature extraction method to obtain the vehicle feature set comprises the following steps:
detecting vehicle feature points in the vehicle running picture set through a corner detection algorithm to obtain an original vehicle feature set;
calculating descriptors of each original vehicle feature in the original vehicle feature set through a feature point description algorithm;
and calculating the similarity between the descriptors of each original vehicle feature, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set.
In detail, the corner detection algorithm is the FAST algorithm for short. Its main principle is that if a certain number of pixels around a pixel differ from the pixel value of that point, the point is considered a corner; the corners in this scheme are the vehicle feature points.
The feature point description algorithm is also called the BRIEF algorithm. Its main idea is to randomly select a number of point pairs near the feature point, combine the gray-value comparisons of these point pairs into a binary string, and use the binary string as the feature descriptor of the feature point. This method discards the traditional approach of describing feature points with a regional gray histogram, which greatly speeds up the construction of feature descriptors and greatly reduces the time for feature matching, so it is a very fast algorithm and can be applied to systems with real-time requirements.
Further, the similarity between the descriptors of each original vehicle feature may be calculated using a squared-difference formula.
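The descriptor-similarity cleaning step can be illustrated as follows; the squared-difference measure follows the text, while the keep-first deduplication policy and the threshold value are assumptions made for the sketch:

```python
import numpy as np

def clean_features(descriptors, dist_threshold):
    """Drop near-duplicate feature points: a descriptor is discarded when its
    squared difference to an already-kept descriptor falls below the
    threshold. Returns the indices of the kept feature points."""
    kept = []
    for i, d in enumerate(descriptors):
        dup = any(
            np.sum((np.asarray(d) - np.asarray(descriptors[j])) ** 2)
            < dist_threshold
            for j in kept
        )
        if not dup:
            kept.append(i)
    return kept
```

With binary BRIEF strings a Hamming distance would be the more usual measure; squared difference is used here to match the wording above.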
The two-dimensional track calculation module 102 calculates optical flow information of the vehicle feature set to obtain an optical flow information set, extracts a feature track of the vehicle from the optical flow information set, and gathers the feature track to obtain a feature two-dimensional track set.
Preferably, the calculating the optical flow information of the vehicle feature set to obtain an optical flow information set includes:
performing up-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture;
performing optical flow calculation on the top layer image of the image pyramid to obtain first optical flow information;
the first optical flow information is used as an initial value and is transmitted to a lower layer image adjacent to the top layer image, and optical flow calculation is carried out on the lower layer image to obtain second optical flow information;
and collecting all optical flow information until the optical flow information is transmitted to the bottommost image of the image pyramid to obtain an optical flow information set.
In detail, the up-sampling process is a currently disclosed method, namely inserting new elements between the pixel points of the original image using a suitable interpolation algorithm; the image pyramid can be obtained according to this method of inserting new elements.
The optical flow calculation can be calculated by adopting the optical flow method disclosed in the prior art.
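The coarse-to-fine propagation described above can be sketched as follows; the per-level flow estimator is left abstract (a real pipeline would use a pyramidal Lucas-Kanade routine such as cv2.calcOpticalFlowPyrLK), and the 2x2 subsampling and the doubling of the initial value when moving down a level are standard pyramid assumptions, not details from the text:

```python
import numpy as np

def downsample(img):
    """Halve the resolution (stand-in for one pyramid construction step)."""
    return img[::2, ::2]

def pyramid_flow(img0, img1, levels, single_level_flow):
    """Coarse-to-fine optical flow: estimate flow at the top (coarsest)
    level, then pass it down as the initial value for each finer level.
    `single_level_flow(i0, i1, init)` is an assumed per-level estimator."""
    pyr0, pyr1 = [img0], [img1]
    for _ in range(levels - 1):
        pyr0.append(downsample(pyr0[-1]))
        pyr1.append(downsample(pyr1[-1]))
    flow = np.zeros(2)                                 # zero init at the top
    for i0, i1 in zip(reversed(pyr0), reversed(pyr1)):
        flow = single_level_flow(i0, i1, flow * 2.0)   # rescale initial value
    return flow
```

Because each level refines a prediction inherited from the coarser level, large displacements can be recovered from small per-level searches.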
Further, the feature two-dimensional trajectory set includes:
wherein N is the total number of tracks in the feature two-dimensional track set, i is the index of a vehicle feature point track in the feature two-dimensional track set, n_i is the number of vehicle feature points of the i-th track, and (x_n^i, y_n^i) is the pixel coordinate of the n-th vehicle feature point of the i-th track.
The three-dimensional track calculation module 103 performs space projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set.
Preferably, the performing spatial projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set includes:
a coordinate system is pre-built, each vehicle characteristic point in the vehicle characteristic set is projected to the coordinate system to obtain a characteristic coordinate set, and the back projection speed of the characteristic coordinate set is calculated according to the pre-built vector addition inverse operation;
calculating the relative height of the characteristic coordinate set to obtain a relative height set;
according to a pre-constructed perspective projection matrix transformation equation, projecting the pixel coordinates into a region parallel to the transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height between the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set;
And collecting the three-dimensional coordinate information set, the relative altitude set and the back projection speed into the characteristic three-dimensional track set.
In detail, for the pixel coordinate (x_n^i, y_n^i) of the n-th vehicle feature point of the i-th track, it is reprojected through the perspective projection matrix transformation equation onto a plane at height h_r parallel to the ground, and the three-dimensional coordinate information of the vehicle feature point is updated to (X_n^i, Y_n^i, h_r).
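A much-simplified sketch of this re-projection onto a plane at height h_r follows. It assumes a pinhole camera looking straight down from height H_c with intrinsics (fx, fy, cx, cy); the patent itself uses a general perspective projection matrix transformation equation, so this geometry is an illustrative special case:

```python
def backproject_to_plane(px, K, cam_height, h_r):
    """Back-project a pixel onto the horizontal plane Z = h_r, assuming a
    downward-looking pinhole camera at height cam_height.

    px         : (x, y) pixel coordinate
    K          : (fx, fy, cx, cy) assumed camera intrinsics
    cam_height : monitoring camera height H_c (world units)
    h_r        : target plane height above the ground
    """
    x, y = px
    fx, fy, cx, cy = K
    depth = cam_height - h_r        # distance from camera to the plane
    X = (x - cx) * depth / fx       # pinhole model: X = (x - cx) * Z / fx
    Y = (y - cy) * depth / fy
    return (X, Y, h_r)
```

The principal point maps to the point directly beneath the camera, and offsets in pixels scale linearly with the camera-to-plane distance.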
Further, the formula for calculating the relative height of the vehicle feature points is as follows:
wherein v is the average running speed of the vehicles, which can be approximated from the speed calculation of all vehicles on the current road section, H_c is the height of the monitoring camera, and h_i and v_i(2D) are respectively the relative height and the pixel speed of the track of the i-th vehicle feature point.
The vehicle risk factor calculation module 104 constructs a feature similarity matrix according to the feature three-dimensional track set, performs sparsification treatment on the feature similarity matrix to obtain a sparse block matrix, performs sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a cluster matrix, performs inter-class combination on the cluster matrix to obtain a combination matrix, calculates values in the combination matrix to obtain a running speed, and calculates to obtain a vehicle risk factor according to the running speed.
In detail, the constructing a feature similarity matrix according to the feature three-dimensional track set includes:
calculating the speed v = {v_1, v_2, …, v_N} of each track on the image according to the track points of the feature two-dimensional track set, and selecting the track corresponding to the minimum track speed v_p of the feature two-dimensional track set as the reference track t_p;
calculating the relative height h_i between each track t_i and the reference track t_p using the formula for calculating the relative height of the vehicle feature points;
constructing the attribute feature vector F_i = (v_i(3D), h_r) corresponding to each track;
calculating the similarity W_ij between different tracks and constructing the feature similarity matrix W, wherein W = [W_ij]_{N×N};
wherein i = 1, 2, …, N; j = 1, 2, …, N; and d(F_i, F_j)^2 is the squared Euclidean distance between the attribute feature vectors of any two tracks in the feature two-dimensional track set.
Preferably, the step of performing sparsification processing on the feature similarity matrix W to obtain a sparse block matrix includes:
if the relative height h_i between any track t_i and the reference track t_p is greater than a predefined value, setting the similarity W_ij between the two tracks to 0;
and performing elementary row and column reordering transformations on the feature similarity matrix W, and then obtaining the block diagonal matrix W through local adjustment.
Further, the performing sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix includes:
calculating normalized common according to the sparse block matrix wLas matrix
calculating the eigenvalues {λ_i, i = 1, 2, …, N} of the normalized Laplacian matrix and their corresponding eigenvectors {H_i, i = 1, 2, …, N};
calculating the indication feature vector Q_i corresponding to each H_i, and performing K-means clustering with the feature vectors corresponding to the first K smallest eigenvalues to obtain the clustering matrix.
In order to better solve the problem of vehicle detection in complex traffic scenes, the feature two-dimensional track set is projected into the world coordinate system through spatial projection. By analyzing the three-dimensional track of the vehicle, the height, speed and three-dimensional position information are selected and extracted as the three-dimensional features of the vehicle feature point tracks, thereby obtaining the feature three-dimensional track set.
In detail, the performing inter-class combination on the cluster matrix to obtain a combined matrix, calculating the numerical value in the combined matrix, and obtaining the driving speed includes:
letting categories C_i and C_j be two categories in the clustering result of the feature three-dimensional track set, containing n_i and n_j tracks respectively; calculating the back projection speed of each track, finding the minimum back projection speeds v_i and v_j of the two categories as the representative speeds of C_i and C_j, and taking the track of the vehicle feature point with the minimum back projection speed as the reference track of its category;
comparing the representative speeds v_i and v_j of the two categories, and taking the smaller one as the true movement speed of the vehicle, namely the reference speed v_T = min(v_i, v_j);
setting the track point height in the reference category to zero and restoring it to the world coordinate system to obtain the three-dimensional coordinate information (X_i, Y_i, 0); in combination with the reference speed v_T, calculating the relative height h_r of the tracks in the category to be merged and restoring them to the world coordinate system to obtain the three-dimensional coordinate information (X_m, Y_m, h_r);
calculating the absolute distance between the track of the vehicle feature point in the category to be merged and the reference feature track, namely the absolute distance between the three-dimensional position information of the representative points of the two tracks, denoted DIS(ΔX, ΔY, ΔZ), where:
ΔX = |X_m - X_i|
ΔY = |Y_m - Y_i|
ΔZ = |Z_m - 0|;
pre-judging, according to ΔZ, the type of three-dimensional vehicle model the track may belong to, and judging whether ΔX and ΔY both fall within that three-dimensional model; if ΔX and ΔY satisfy the three-dimensional model given by ΔZ, adding 1 to the parameter n_c recording the number of feature three-dimensional tracks in the category to be merged that pass the check, and otherwise skipping the track; after all n_j feature point tracks in the category to be merged have been processed, if the ratio of n_c to n_j satisfies the predefined merging condition, merging the two categories to obtain the merged matrix; otherwise, outputting the clustering matrix directly without merging;
and calculating the numerical value in the merging matrix to obtain the running speed.
Through the initial clustering, vehicles can be accurately segmented in most cases. However, due to errors in the constructed feature similarity matrix, the tracks of the same vehicle may be divided into several categories, particularly the feature three-dimensional tracks from large vehicles. The clustering results of the sparse spectral clustering therefore need to be merged using prior knowledge such as rigid motion rules and vehicle models.
The running speed is then obtained by calculating the values in the merged matrix, for example by calculating the diagonal product, the rank value, and the like of the merged matrix.
In detail, the calculating the vehicle risk coefficient according to the running speed includes:
outputting the vehicle risk factor "normal" when the travel speed is greater than a predefined minimum speed and less than a predefined maximum speed;
and outputting the vehicle risk factor high when the running speed is smaller than a predefined minimum speed or larger than a predefined maximum speed.
As shown in fig. 9, a schematic structural diagram of an electronic device for implementing the vehicle risk prediction method according to the present invention is shown.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a vehicle risk prediction program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes for vehicle risk prediction, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the entire electronic device using various interfaces and lines, executes programs or modules stored in the memory 11 (for example, performs vehicle risk prediction, etc.), and invokes data stored in the memory 11 to perform various functions of the electronic device 1 and process data.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 9 shows only an electronic device with some of the components, and it will be understood by those skilled in the art that the structure shown in fig. 9 does not constitute a limitation of the electronic device 1, which may include fewer or more components than shown, may combine certain components, or may arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
The vehicle risk prediction program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed by the processor 10, may implement:
extracting vehicle feature points from a vehicle driving picture set based on a pre-constructed feature extraction method to obtain a vehicle feature set;
calculating optical flow information of the vehicle feature set to obtain an optical flow information set, extracting a feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set;
Performing space projection on the characteristic two-dimensional track set to obtain a characteristic three-dimensional track set;
constructing a feature similarity matrix according to the feature three-dimensional track set, carrying out sparsification treatment on the feature similarity matrix to obtain a sparse block matrix, and carrying out sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix;
performing inter-class combination on the clustering matrix to obtain a combination matrix, and calculating the numerical value in the combination matrix to obtain the running speed;
and calculating according to the running speed to obtain a vehicle risk coefficient.
Specifically, the specific implementation method of the above instruction by the processor 10 may refer to descriptions of related steps in the corresponding embodiments of fig. 1 to 7, which are not repeated herein.
Further, the integrated modules/units of the electronic device 1 may be stored in a non-volatile computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM).
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (7)

1. A vehicle risk prediction method, the method comprising:
Detecting vehicle feature points in a vehicle running picture set through a corner detection algorithm to obtain an original vehicle feature set;
calculating descriptors of each original vehicle feature in the original vehicle feature set through a feature point description algorithm;
calculating the similarity between descriptors of each original vehicle feature by adopting a square variance formula method, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set;
calculating optical flow information of the vehicle feature set to obtain an optical flow information set, extracting a feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set;
a coordinate system is pre-built, each vehicle characteristic point in the vehicle characteristic set is projected to the coordinate system to obtain a characteristic coordinate set, and the back projection speed of the characteristic coordinate set is calculated according to the pre-built vector addition inverse operation;
calculating the relative height of the characteristic coordinate set to obtain a relative height set;
according to a pre-constructed perspective projection matrix transformation equation, projecting pixel coordinates into a region parallel to a transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height of the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set;
Collecting the three-dimensional coordinate information set, the relative height set and the back projection speed as a characteristic three-dimensional track set;
constructing a feature similarity matrix according to the feature three-dimensional track set, performing sparsification processing on the feature similarity matrix to obtain a sparse block matrix, and performing sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix;
performing inter-class merging on the clustering matrix to obtain a combined matrix, and calculating the values in the combined matrix to obtain a running speed;
and calculating a vehicle risk coefficient according to the running speed.
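The descriptor-similarity cleaning recited in claim 1 can be illustrated with a minimal sketch. This is an illustrative reading only: the "square variance formula" is interpreted here as a sum of squared differences over descriptor vectors, and the `min_dissimilarity` threshold, function names, and toy data are assumptions, not part of the claimed method.

```python
def ssd(d1, d2):
    """Sum of squared differences between two equal-length descriptors."""
    return sum((a - b) ** 2 for a, b in zip(d1, d2))

def clean_features(features, descriptors, min_dissimilarity=1.0):
    """Drop features whose descriptor is nearly identical to one already
    kept (i.e. likely duplicate detections of the same corner)."""
    kept_features, kept_descriptors = [], []
    for feat, desc in zip(features, descriptors):
        if all(ssd(desc, kd) >= min_dissimilarity for kd in kept_descriptors):
            kept_features.append(feat)
            kept_descriptors.append(desc)
    return kept_features

# Toy example: two near-duplicate corners and one distinct corner.
feats = [(10, 20), (10, 21), (200, 50)]
descs = [[1.0, 2.0], [1.0, 2.1], [9.0, 0.0]]
print(clean_features(feats, descs))  # → [(10, 20), (200, 50)]
```

Keeping only mutually dissimilar descriptors thins out redundant detections before the (more expensive) optical flow and clustering stages.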
2. The vehicle risk prediction method according to claim 1, wherein calculating the optical flow information of the vehicle feature set to obtain an optical flow information set comprises:
performing down-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture;
performing optical flow calculation on the top-layer image of the image pyramid to obtain first optical flow information;
transmitting the first optical flow information as an initial value to the lower-layer image adjacent to the top-layer image, and performing optical flow calculation on the lower-layer image to obtain second optical flow information;
and continuing in this manner until the optical flow information is transmitted to the bottommost image of the image pyramid, and collecting all the optical flow information to obtain the optical flow information set.
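The coarse-to-fine scheme of claim 2 can be illustrated with a toy model: an estimate computed at a coarse pyramid level is doubled when handed down to the next (twice-as-fine) level and then corrected there. The per-level residuals stand in for an actual per-level optical flow solve (e.g. Lucas–Kanade) and are assumptions for illustration.

```python
def coarse_to_fine_flow(level_residuals):
    """Propagate a 1-D optical-flow estimate down an image pyramid.

    level_residuals lists the flow correction computed at each level,
    ordered from the top (coarsest) image to the bottom (finest) one.
    Each step down doubles the image resolution, so the estimate
    inherited from the level above is scaled by 2 before refinement.
    """
    flow = 0.0
    for residual in level_residuals:
        flow = 2.0 * flow + residual  # upscale initial value, then refine
    return flow

# Three-level pyramid: per-level corrections of 1.0, 0.5 and 0.25 pixels.
print(coarse_to_fine_flow([1.0, 0.5, 0.25]))  # → 5.25
```

The benefit of the pyramid is that large displacements shrink to small, tractable ones at the coarse levels, and only small residual corrections remain at full resolution.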
3. The vehicle risk prediction method of claim 1, wherein the feature two-dimensional track set is defined as T = {T_i}, i = 1, …, N, with T_i = {x_n^i}, n = 1, …, N_i,
wherein N is the total number of tracks in the feature two-dimensional track set, i is the track index of the vehicle feature points in the feature two-dimensional track set, N_i is the number of vehicle feature points of the i-th track, and x_n^i is the pixel coordinate of the n-th vehicle feature point of the i-th track.
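The feature two-dimensional track set of claim 3, a set of tracks each holding per-frame pixel coordinates, maps naturally onto a nested-list structure. The variable names below are illustrative only and are not taken from the patent.

```python
# Each track is a list of (x, y) pixel coordinates; the track set is a
# list of tracks, which may have different lengths.
track_set = [
    [(12.0, 40.0), (13.1, 40.2), (14.3, 40.5)],  # track i = 0, 3 points
    [(88.0, 10.0), (88.9, 10.4)],                # track i = 1, 2 points
]

N = len(track_set)                          # total number of tracks
points_per_track = [len(t) for t in track_set]   # N_i for each track
third_point_of_first_track = track_set[0][2]     # n = 2, i = 0

print(N, points_per_track, third_point_of_first_track)
```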
4. A vehicle risk prediction apparatus, characterized in that the apparatus comprises:
the feature extraction module is used for detecting vehicle feature points in the vehicle driving picture set through a corner detection algorithm to obtain an original vehicle feature set; calculating a descriptor for each original vehicle feature in the original vehicle feature set through a feature point description algorithm; and calculating the similarity between the descriptors of the original vehicle features by a sum-of-squared-differences method, and cleaning the original vehicle feature set according to the similarity to obtain the vehicle feature set;
the two-dimensional track calculation module is used for calculating the optical flow information of the vehicle feature set to obtain an optical flow information set, extracting the feature track of the vehicle from the optical flow information set, and collecting the feature track to obtain a feature two-dimensional track set;
the three-dimensional track calculation module is used for pre-constructing a coordinate system, projecting each vehicle feature point in the vehicle feature set into the coordinate system to obtain a feature coordinate set, and calculating a back-projection speed of the feature coordinate set according to a pre-constructed inverse vector addition operation; calculating the relative height of the feature coordinate set to obtain a relative height set; according to a pre-constructed perspective projection matrix transformation equation, projecting the pixel coordinates into a region parallel to the transverse axis of the coordinate system to obtain projected pixel coordinates, calculating the parallel height between the projected pixel coordinates and the transverse axis of the coordinate system, and summarizing the projected pixel coordinates and the parallel height to obtain a three-dimensional coordinate information set; and collecting the three-dimensional coordinate information set, the relative height set and the back-projection speed as a feature three-dimensional track set;
the vehicle risk coefficient calculation module is used for constructing a feature similarity matrix according to the feature three-dimensional track set, performing sparsification processing on the feature similarity matrix to obtain a sparse block matrix, performing sparse spectral clustering on the feature three-dimensional track set according to the sparse block matrix to obtain a clustering matrix, performing inter-class merging on the clustering matrix to obtain a combined matrix, calculating the values in the combined matrix to obtain a running speed, and calculating the vehicle risk coefficient according to the running speed.
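The sparse spectral clustering stage recited in claims 1 and 4 can be illustrated with a minimal two-way sketch. It assumes equal-length trajectories, a Gaussian similarity, thresholding as the sparsification step, and a sign split on the Fiedler eigenvector in place of a full k-means assignment; none of these specifics are taken from the patent.

```python
import numpy as np

def sparse_spectral_bipartition(trajectories, sigma=2.0, eps=1e-8):
    """Two-way spectral clustering of equal-length feature trajectories.

    Builds a Gaussian similarity matrix, sparsifies it by zeroing
    near-zero entries, forms the graph Laplacian, and splits the
    trajectories by the sign of the Fiedler (second) eigenvector.
    """
    trajs = [np.asarray(t, dtype=float) for t in trajectories]
    n = len(trajs)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = np.mean(np.linalg.norm(trajs[i] - trajs[j], axis=1))
            W[i, j] = np.exp(-d * d / (2.0 * sigma * sigma))
    W[W < eps] = 0.0                      # sparsification step
    L = np.diag(W.sum(axis=1)) - W        # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)           # eigenvectors, ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)   # sign of the Fiedler vector

# Two tight groups of trajectories, far apart in the image plane.
labels = sparse_spectral_bipartition([
    [(0.0, 0.0), (1.0, 0.0)], [(0.1, 0.0), (1.1, 0.0)],
    [(10.0, 0.0), (11.0, 0.0)], [(10.1, 0.0), (11.1, 0.0)],
])
print(labels)  # the first two and the last two trajectories share a label
```

Zeroing near-zero affinities before the eigendecomposition keeps the similarity matrix sparse, which is what makes spectral clustering tractable when the number of feature trajectories is large.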
5. The vehicle risk prediction apparatus according to claim 4, wherein calculating the optical flow information of the vehicle feature set to obtain an optical flow information set comprises:
performing down-sampling processing on each picture of the vehicle driving picture set to obtain an image pyramid of each picture;
performing optical flow calculation on the top-layer image of the image pyramid to obtain first optical flow information;
transmitting the first optical flow information as an initial value to the lower-layer image adjacent to the top-layer image, and performing optical flow calculation on the lower-layer image to obtain second optical flow information;
and continuing in this manner until the optical flow information is transmitted to the bottommost image of the image pyramid, and collecting all the optical flow information to obtain the optical flow information set.
6. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle risk prediction method of any one of claims 1 to 3.
7. A non-transitory computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the vehicle risk prediction method according to any one of claims 1 to 3.
CN202010110995.2A 2020-02-22 2020-02-22 Vehicle risk prediction method, device, electronic equipment and readable storage medium Active CN111311010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010110995.2A CN111311010B (en) 2020-02-22 2020-02-22 Vehicle risk prediction method, device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN111311010A CN111311010A (en) 2020-06-19
CN111311010B (en) 2023-07-28

Family

ID=71148028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010110995.2A Active CN111311010B (en) 2020-02-22 2020-02-22 Vehicle risk prediction method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111311010B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860383B (en) * 2020-07-27 2023-11-10 苏州市职业大学 Group abnormal behavior identification method, device, equipment and storage medium
CN112258547B (en) * 2020-10-19 2022-09-20 郑州轻工业大学 Vehicle three-dimensional track optimization method based on inverse perspective projection transformation and vehicle following model
CN112232581B (en) * 2020-10-26 2024-04-12 腾讯科技(深圳)有限公司 Driving risk prediction method and device, electronic equipment and storage medium
CN112927117B (en) * 2021-03-22 2022-08-23 上海京知信息科技有限公司 Block chain-based vehicle management communication method, management system, device and medium
CN115209037A (en) * 2021-06-30 2022-10-18 惠州华阳通用电子有限公司 Vehicle bottom perspective method and device
CN115859129B (en) * 2023-02-27 2023-07-14 南昌工程学院 Vehicle driving track similarity measurement method and system based on sparse satellite positioning
CN116308763B (en) * 2023-05-19 2023-09-12 北京泛钛客科技有限公司 Vehicle lending post-lending risk prediction method and system based on convolution self-encoder

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109840660A (en) * 2017-11-29 2019-06-04 北京四维图新科技股份有限公司 Vehicle feature data processing method and vehicle risk prediction model training method
CN110588648A (en) * 2019-10-25 2019-12-20 北京行易道科技有限公司 Method and device for identifying collision danger during vehicle running, vehicle and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107315994B (en) * 2017-05-12 2020-08-18 长安大学 Clustering method based on Spectral Clustering space trajectory



Similar Documents

Publication Publication Date Title
CN111311010B (en) Vehicle risk prediction method, device, electronic equipment and readable storage medium
Wu et al. Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud
Feris et al. Large-scale vehicle detection, indexing, and search in urban surveillance videos
US8620026B2 (en) Video-based detection of multiple object types under varying poses
CN111598030A (en) Method and system for detecting and segmenting vehicle in aerial image
US11508157B2 (en) Device and method of objective identification and driving assistance device
US10289884B2 (en) Image analyzer, image analysis method, computer program product, and image analysis system
CN113450579B (en) Method, device, equipment and medium for acquiring speed information
CN112200131A (en) Vision-based vehicle collision detection method, intelligent terminal and storage medium
CN115512251A (en) Unmanned aerial vehicle low-illumination target tracking method based on double-branch progressive feature enhancement
Tang et al. Multiple-kernel based vehicle tracking using 3D deformable model and camera self-calibration
CN112651881A (en) Image synthesis method, apparatus, device, storage medium, and program product
WO2023155903A1 (en) Systems and methods for generating road surface semantic segmentation map from sequence of point clouds
Park et al. Drivable dirt road region identification using image and point cloud semantic segmentation fusion
CN113177432A (en) Head pose estimation method, system, device and medium based on multi-scale lightweight network
CN117197388A (en) Live-action three-dimensional virtual reality scene construction method and system based on generation of antagonistic neural network and oblique photography
Ouyang et al. PV-EncoNet: Fast object detection based on colored point cloud
CN117197227A (en) Method, device, equipment and medium for calculating yaw angle of target vehicle
Choe et al. Segment2Regress: Monocular 3D Vehicle Localization in Two Stages.
Li et al. Tfnet: Exploiting temporal cues for fast and accurate lidar semantic segmentation
CN116012609A (en) Multi-target tracking method, device, electronic equipment and medium for looking around fish eyes
CN116468796A (en) Method for generating representation from bird's eye view, vehicle object recognition system, and storage medium
CN112434601B (en) Vehicle illegal detection method, device, equipment and medium based on driving video
Schennings Deep convolutional neural networks for real-time single frame monocular depth estimation
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant