CN113484530A - Vehicle speed detection method, system and computer readable storage medium


Info

Publication number
CN113484530A
CN113484530A (application CN202110577554.8A)
Authority
CN
China
Prior art keywords
speed
determining
effective
parameter
acceleration
Prior art date
Legal status
Pending
Application number
CN202110577554.8A
Other languages
Chinese (zh)
Inventor
杨世航
杜磊
项振佳
马伟涛
Current Assignee
Shenzhen Erlangshen Vision Technology Co ltd
Original Assignee
Shenzhen Erlangshen Vision Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Erlangshen Vision Technology Co ltd filed Critical Shenzhen Erlangshen Vision Technology Co ltd
Priority to CN202110577554.8A
Publication of CN113484530A
Legal status: Pending

Classifications

    • G01P 3/68 - Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
    • G06T 7/248 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G08G 1/052 - Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G06T 2207/10016 - Video; image sequence
    • G06T 2207/30252 - Vehicle exterior; vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a vehicle speed detection method, a system and a computer-readable storage medium. The method includes acquiring an image sequence based on a preset time parameter; acquiring effective feature points in the image sequence and determining effective feature combinations, wherein an effective feature combination comprises effective feature points corresponding to different images; determining the displacement parameter of an effective feature combination according to the positions of the effective feature points in the same effective feature combination; and determining a synchronous speed according to the displacement parameter and the time parameter. The displacement of the feature points reflects the displacement of the vehicle and yields the displacement parameter, from which, together with the time parameter, the synchronous speed can be determined. Because the actual traveling speed of the vehicle is calculated by continuously capturing images and analyzing the displacement of the feature points, obstruction of the vehicle's travel path during speed measurement is reduced, which is more convenient.

Description

Vehicle speed detection method, system and computer readable storage medium
Technical Field
The present application relates to the field of vehicle parameter detection, and in particular, to a vehicle speed detection method, system, and computer-readable storage medium.
Background
With the development of science and technology and the improvement of people's economic level, more and more people travel by car, the automobile industry keeps growing, and people pay increasing attention to the quality and performance of automobiles. Vehicle speed detection is a basic detection item that measures the actual traveling speed of a vehicle; it is often used to detect traffic safety violations or for quality inspection of outgoing vehicles. Vehicle speed detection can also assist other detection items, such as vehicle chassis photographing.
In order to obtain a complete and sufficiently clear picture of the vehicle chassis, a user usually uses a camera to continuously photograph the chassis of a moving vehicle in segments, obtaining a plurality of segmented photos which are then spliced into a complete vehicle chassis picture, on a principle similar to that of panoramic photography. However, the vehicle speed may change while the vehicle is moving, and such changes make the length proportions of the segmented photos inconsistent. The actual traveling speed of the vehicle therefore needs to be introduced: the photographing frequency is adjusted during shooting according to the actual traveling speed, so that segmented photos meeting the standard requirements are obtained and can then be spliced.
In the related art, the Chinese invention application with publication number CN102905079A discloses a panoramic photographing method, device and mobile terminal. The method comprises determining the current time interval for collecting two adjacent frames of images according to the initial speed and acceleration of the camera over the current time interval and the moving distance between two adjacent frames that satisfies the panorama splicing condition; taking the reciprocal of the current time interval to obtain the image acquisition frequency; and sending the image acquisition frequency to the camera, instructing the camera to collect images for generating panoramic photos at that frequency.
In view of the above technical solutions, the inventor believes that, in order to obtain an actual traveling speed of a vehicle in a vehicle chassis photographing process, distance measuring instruments need to be respectively placed in front of and behind the vehicle, and distances between the two distance measuring instruments and the vehicle are measured without interruption, so as to calculate the actual traveling speed.
Disclosure of Invention
The first object of the present application is to provide a vehicle speed detection method that has the advantage of detecting the vehicle speed conveniently.
The above object of the present invention is achieved by the following technical solutions:
a vehicle speed detection method characterized by:
acquiring an image sequence based on a preset time parameter;
obtaining effective characteristic points in an image sequence, and determining an effective characteristic combination, wherein the effective characteristic combination comprises effective characteristic points corresponding to different images;
determining the displacement parameters of the effective feature combinations according to the positions of the effective feature points in the same effective feature combination;
and determining a synchronous speed according to the displacement parameter and the time parameter, wherein the synchronous speed can reflect the running speed of the vehicle.
By adopting this technical scheme, feature points can be extracted from the image sequence once it is obtained; the displacement of the feature points reflects the displacement of the vehicle and yields the displacement parameter, from which, together with the time parameter, the synchronous speed can be determined. Because the actual traveling speed of the vehicle is calculated by continuously capturing images and analyzing the displacement of the feature points, obstruction of the vehicle's travel path during speed measurement is reduced, which is more convenient.
When feature points are acquired, the system may acquire wrong feature points, so that the two points of an acquired pair do not actually correspond to the same physical point; all feature points therefore divide into effective feature points and invalid feature points. To make the final synchronous speed more accurate, effective feature points are selected to form effective feature combinations before the next stage of analysis, which improves the accuracy of speed detection.
Optionally, in a specific method for acquiring effective feature points in an image sequence and determining an effective feature combination, the method includes:
acquiring feature points in an image sequence, and determining a feature combination, wherein the feature combination comprises feature points corresponding to different images; presetting effective parameters, determining the motion parameters of the feature points in the same feature combination, and determining the effective feature combination according to the motion parameters and the effective parameters.
By adopting the technical scheme, the motion parameter can reflect the displacement of two feature points in the same feature combination, the effective parameter is used for comparing with the motion parameter, so that whether the displacement of the feature points is normal or not is analyzed, if not, the feature combination is judged to be an invalid feature combination, otherwise, the feature combination is determined to be an effective feature combination.
Optionally, in a specific method for presetting effective parameters, determining motion parameters of feature points in the same feature combination, and determining an effective feature combination according to the motion parameters and the effective parameters, the method includes:
presetting a reference coordinate system and effective parameters;
determining a starting coordinate and a terminal coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system;
determining motion parameters according to the starting coordinates and the end coordinates, wherein the motion parameters comprise deviation motion parameters deviating from the preset driving direction of the vehicle;
and determining the effective characteristic combination according to the deviation motion parameter and the effective parameter.
By adopting the technical scheme, the deviation motion parameter is used for reflecting the deviation degree between the motion direction of the characteristic point and the preset driving direction of the vehicle, whether the displacement of the characteristic point deviates from the normal driving direction of the vehicle seriously can be judged by comparing the deviation motion parameter with the effective parameter, namely whether the offset of the characteristic point is too large is evaluated, and therefore the characteristic combination with small offset can be obtained as the effective characteristic combination.
Optionally, in a specific method for presetting effective parameters, determining motion parameters of feature points in the same feature combination, and determining an effective feature combination according to the motion parameters and the effective parameters, the method includes:
presetting a reference coordinate system and effective parameters;
determining a starting coordinate and a terminal coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system;
determining motion parameters according to the starting coordinates and the end coordinates, wherein the motion parameters comprise a traveling motion parameter corresponding to a preset traveling direction of the vehicle and a deviation motion parameter deviating from the preset traveling direction of the vehicle;
and determining the effective characteristic combination according to the effective parameters, the advancing motion parameters and the deviation motion parameters.
By adopting the technical scheme, the advancing motion parameter is used for reflecting the corresponding degree between the motion direction of the characteristic point and the preset driving direction of the vehicle, obvious difference exists between the advancing motion parameter and the deviation motion parameter in the normal driving process of the vehicle, and the characteristic combination of the advancing motion parameter which is real can be determined to be effective characteristic combination through the comparison between the advancing motion parameter and the deviation motion parameter.
Optionally, in the specific method for determining the synchronization speed according to the displacement parameter and the time parameter, the method includes:
determining a time difference value according to the position of the image in the image sequence and a preset time parameter, wherein the time difference value refers to the time difference between the acquisition times of different images;
determining a height parameter, and determining a basic speed according to the displacement parameter, the time difference value and a preset focal length parameter, wherein the height parameter is the distance between the image acquisition module and the vehicle;
the synchronous speed is determined according to the basic speed.
By adopting the technical scheme, the corresponding acquisition time of the two selected images is different, the difference value between the two acquisition time is a time difference value, the actual driving distance of the vehicle can be calculated by utilizing the distance between the camera lens and the vehicle, the focal length of the camera lens and the displacement parameters acquired by analyzing the pixel points, and the basic speed of the vehicle can be calculated by utilizing the actual driving distance and the time difference value of the vehicle.
Optionally, in a specific method for determining a synchronization speed according to a base speed, the method includes:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are respectively base speeds corresponding to different time nodes;
determining a first acceleration from the first base velocity and the second base velocity;
and the momentum weighting method is used for determining the target speed according to the first basic speed, the first acceleration and the time difference value.
By adopting the technical scheme, after the acceleration is determined by utilizing the known basic speed, the change of the vehicle speed in the future can be deduced by utilizing the acceleration, and further the future target speed can be deduced. The momentum weighting method is a method for deducing the target speed by combining constant acceleration and known basic speed, the method for deducing the speed does not need to acquire images again, only uses preset time difference to participate in deduction, reduces the time consumed by a system during receiving and transmitting electric signals or performing photographing, does not have the problem that the difference exists between the actual time difference and the time difference, and improves the accuracy of the vehicle speed.
Optionally, in a specific method for determining a synchronization speed according to a base speed, the method includes:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are base speeds corresponding to different time nodes;
determining a first acceleration according to the first basic speed, the second basic speed and the time difference value;
determining a third basic speed according to the first acceleration, the second basic speed and the time difference value;
determining a second acceleration according to the second basic speed, the third basic speed and the time difference value;
determining an acceleration change value according to the first acceleration and the second acceleration;
and the fluctuation weighting method is used for determining the target speed according to the third basic speed, the time difference value, the acceleration change value and a preset fluctuation coefficient.
By adopting the technical scheme, after the acceleration is determined by utilizing the known basic speed, the change value of the acceleration can be determined by utilizing different accelerations, the change of the acceleration which is likely to occur in the future is deduced, the future acceleration is deduced, the change of the vehicle speed in the future is deduced by utilizing the obtained acceleration, and the future target speed can be deduced.
The fluctuation amount weighting method is actually a method of deducing the target speed by combining the changing acceleration with the known base speeds. It has the same effect as the momentum weighting method: no images need to be acquired again, only the preset time difference value participates in the deduction, the time consumed by the system in transceiving electric signals or photographing is reduced, the discrepancy between the actual time difference and the preset time difference value does not arise, and the accuracy of the vehicle speed is improved.
Optionally, the fluctuation coefficient varies according to the time node.
By adopting this technical scheme: the fluctuation coefficient is an adjusting coefficient preset in the system and used for determining the change amplitude of the acceleration; its magnitude affects the magnitude of the acceleration change. Since the change amplitude of the acceleration is not constant over different time periods, in order to make the predicted acceleration closer to the actual value, the fluctuation coefficient should change with the actual speed, that is, it takes different values at different time nodes, so that different acceleration change values correspond to different fluctuation coefficients.
Optionally, in a specific method for determining a synchronization speed according to a base speed, the method includes:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are base speeds corresponding to different time nodes;
determining a first acceleration from the first base velocity and the second base velocity;
determining a third basic speed according to the first acceleration, the second basic speed and the time difference value;
determining a second acceleration according to the second basic speed, the third basic speed and the time difference value;
determining an acceleration change value according to the first acceleration and the second acceleration, and selecting a speed deduction method based on the acceleration change value, wherein the speed deduction method comprises a momentum weighting method and a fluctuation weighting method;
and determining the target speed according to the selected speed deduction method.
By adopting the technical scheme, the acceleration change value refers to the difference value between the acceleration before and after the acceleration changes when the acceleration of the vehicle in running changes. With different base velocities, the acceleration can be determined; with different accelerations, an acceleration change value may be determined. If the acceleration change value does not exist, the acceleration is not changed, and a momentum weighting method can be selected to deduce the target speed; if the acceleration change value exists, the acceleration is changed, and the target speed can be deduced by a fluctuation amount weighting method.
The second purpose of the application is to provide a vehicle speed detection system which has the characteristic of conveniently detecting the running speed of a vehicle.
The second objective of the present invention is achieved by the following technical solutions:
A vehicle speed detection system, comprising an image acquisition module for acquiring an image sequence based on a preset time parameter;
the image acquisition module comprises a camera sub-module, a lens sub-module used for increasing the shooting visual field of the camera sub-module, an illumination sub-module used for improving the ambient brightness of the shooting visual field, and a height measurement sub-module used for detecting the distance between the lens sub-module and a vehicle;
the effective characteristic point acquisition module is used for acquiring effective characteristic points in the image sequence and determining effective characteristic combinations, wherein the effective characteristic combinations comprise effective characteristic points corresponding to different images;
the displacement parameter acquisition module is used for determining the displacement parameter of an effective feature combination according to the positions of the effective feature points in the same effective feature combination; and
and the speed prediction module is used for determining the synchronous speed according to the displacement parameter and the time parameter.
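For illustration only, a minimal Python skeleton of the four modules listed above is sketched below; the class and method names are assumptions made for exposition and are not defined in the application.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

Point = Tuple[float, float]                      # (x, y) position of a feature point in an image

@dataclass
class ImageAcquisitionModule:
    # wraps the camera, lens, illumination and height-measurement sub-modules
    time_parameter: float                        # interval between two adjacent frames, in seconds
    height_parameter: float                      # lens-to-chassis distance from the height sub-module

    def capture_sequence(self, n_frames: int) -> List[object]:
        raise NotImplementedError                # hardware-specific

@dataclass
class EffectiveFeaturePointModule:
    def effective_combinations(self, img_a, img_b) -> List[Tuple[Point, Point]]:
        raise NotImplementedError                # see the feature-matching sketch in the description

@dataclass
class DisplacementParameterModule:
    pixel_length_m: float                        # physical length of one pixel

    def displacement(self, combos: Sequence[Tuple[Point, Point]]) -> float:
        raise NotImplementedError

@dataclass
class SpeedPredictionModule:
    def synchronous_speed(self, displacement: float, time_parameter: float) -> float:
        return displacement / time_parameter     # base relation; the deduction methods refine this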
The third purpose of the present application is to provide a computer storage medium, which can store corresponding programs and has the characteristic of conveniently detecting the vehicle running speed.
The third object of the invention is achieved by the following technical scheme:
a computer readable storage medium storing a computer program that can be loaded by a processor and executes any of the above-described vehicle speed detection methods.
Drawings
Fig. 1 is a schematic diagram of a plurality of image sequences at different speeds, showing image sequence (1), image sequence (2) and image sequence (3), respectively.
Fig. 2 is a flowchart illustrating a vehicle speed detection method according to the present application.
FIG. 3 is a schematic view of the position between the image acquisition module and the vehicle.
Fig. 4 is a sub-flowchart illustrating the determination of the valid feature combinations in the feature total set in the vehicle speed detection method of the present application.
Fig. 5 is a schematic diagram of the positions of the start coordinate and the end coordinate in the reference coordinate system.
Fig. 6 is a sub-flowchart for determining the synchronous speed in the vehicle speed detection method of the present application.
Fig. 7 is a schematic view of an operating state of a lens of the image capturing module.
Fig. 8 is a sub-flowchart illustrating selection of a speed deduction method and deduction of a target speed according to the determined speed deduction method in the vehicle speed detection method of the present application.
Fig. 9 is a relationship diagram of the first base velocity, the second base velocity, the third base velocity, the first acceleration, the second acceleration, and the acceleration change value.
Fig. 10 is a sub-flowchart illustrating the derivation of the target speed according to the determined speed derivation method in the vehicle speed detection method of the present application.
FIG. 11 is a block schematic diagram of a vehicle speed detection system of the present application.
Fig. 12 is a schematic diagram of a computer-readable storage medium storing a computer program that can be loaded by a processor and executes a vehicle speed detection method.
In the figures: 1. image acquisition module; 11. camera sub-module; 12. lens sub-module; 13. illumination sub-module; 14. height measurement sub-module; 2. effective feature point acquisition module; 3. displacement parameter acquisition module; 4. speed prediction module.
Detailed Description
In the related art, during the process of shooting the chassis of the vehicle, a user usually uses a camera to continuously shoot the chassis of the vehicle in motion in segments to obtain a plurality of segmented pictures, and then the segmented pictures are spliced to form a complete picture of the chassis of the vehicle. However, since the vehicle speed may change during the driving process, the length ratio of the front and rear of the taken segmented picture is not consistent due to the change of the vehicle speed.
Referring to fig. 1, the image sequence (1), the image sequence (2) and the image sequence (3) are all image sequences continuously obtained at the same specific frame rate, i.e. the time intervals between two adjacent segmented photographs coincide, wherein V01, V02 and V03 are all the actual driving speeds of the vehicle, wherein V01 < V02 < V03.
For each segmented photo obtained by the image sequence (1), the system can smoothly splice and form a complete vehicle chassis picture. However, for each segmented photograph obtained from the image sequence (2) or the image sequence (3), the system needs to perform some repairing process on the segmented photograph to try to splice the segmented photographs into a complete vehicle chassis picture.
Therefore, in order to splice a complete vehicle chassis picture smoothly, the frame rate at which the camera takes pictures needs to be adjusted synchronously according to the real-time traveling speed whenever that speed changes. The real-time traveling speed of the vehicle acts as an auxiliary parameter for adjusting the frame rate: by changing the time interval at which two adjacent segmented photos are subsequently obtained, the effect of the vehicle speed change is compensated, so that photos meeting the basic standard requirements can be obtained in subsequent shooting.
For the real-time running speed of the vehicle, a user usually places distance measuring instruments in front of and behind the vehicle respectively, and calculates the actual running speed by continuously measuring the distances between the two distance measuring instruments and the vehicle, but the arrangement of the distance measuring instruments causes obstacles to the running of the vehicle, which is inconvenient.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
Embodiments of the present application are described in further detail below with reference to figures 2-12 of the specification.
Example one
The embodiment of the application provides a vehicle speed detection method, and the main flow of the method is described as follows.
Referring to fig. 2, S01, an image sequence is acquired based on a preset time parameter.
The image sequence comprises a plurality of images, and all the images are obtained by continuously shooting a running vehicle at the same frame rate. In this embodiment, the image sequence includes at least three images. The time parameter refers to a time interval when two adjacent images are acquired, and is associated with the frame rate. It can be understood that, in the process of sequentially and continuously acquiring each image at a specific frame rate, each image corresponds to a time node during acquisition, and a time interval between adjacent time nodes is a time parameter.
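As a small illustration of the relationship just described, the time parameter is simply the fixed interval implied by the capture frame rate, and each image in the sequence maps to a time node; the frame rate below is an assumed value, not one given in the application.

```python
# Assumed frame rate for illustration; the application only requires a fixed, preset interval.
frame_rate_hz = 20.0
time_parameter = 1.0 / frame_rate_hz                      # seconds between adjacent images

# With at least three images in the sequence, the corresponding time nodes are:
time_nodes = [i * time_parameter for i in range(3)]
print(time_nodes)                                         # [0.0, 0.05, 0.1]
```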
Referring to fig. 3, in order to reduce the obstruction to the traveling path of the vehicle, the detection system using the vehicle speed detection method in the present embodiment includes an image acquisition module 1, the image acquisition module 1 is disposed at the bottom of the vehicle, and the image sequence is obtained by the detection system shooting the chassis of the vehicle during traveling.
Referring to fig. 2, S02, effective feature points in the image sequence are acquired, and effective feature combinations are determined.
Feature points are pixel points, obtained after processing an image, that reflect the essence of the image. By collecting the same feature point on two images (the same feature point means two feature points corresponding to the same position on the actual vehicle, although their positions in the two images differ) and analyzing the path of its position change, the moving distance of the vehicle can be calculated, and combined with the time difference between the time nodes of the two images, the moving speed of the vehicle within the preset time can be determined. However, when feature points are acquired, the system may acquire wrong feature points, so that the two points of an acquired pair do not actually correspond to the same physical point; such feature points cannot reflect the moving distance of the vehicle during normal driving and are invalid feature points. All feature points can therefore be classified into effective feature points and invalid feature points.
In two adjacent images or two selected images, the same two effective feature points can form an effective feature combination, so that the effective feature combination comprises effective feature points corresponding to different images.
Referring to fig. 4, the specific method of S02 includes:
s021, selecting two images in the image sequence as a basic image group, acquiring the feature points of the selected basic image group, determining feature combinations, and establishing a feature total set containing all the feature combinations.
The basic image group comprises two different images. All feature points are obtained by processing the images in the image sequence with a feature point detection algorithm; such algorithms are common in image processing, and this application does not improve the feature point detection algorithm itself. In this embodiment, two feature points in two adjacent images are combined into one feature combination, and the time parameter is the time interval at which two adjacent images are acquired, which is equivalent to the time difference between two adjacent images.
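Since the application does not prescribe a particular feature point detection algorithm, the following is only one plausible realization of S021, using OpenCV ORB features and brute-force matching; the function name and parameters are assumptions.

```python
import cv2

def feature_combinations(img_a, img_b, max_matches=200):
    """Pair feature points of the two images of a basic image group into feature combinations."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    # each combination pairs a point in the earlier image (start coordinate)
    # with its matched point in the later image (end coordinate)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]
```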
S022, presetting effective parameters, determining motion parameters of feature points in the same feature combination, and determining effective feature combinations in feature total sets according to the motion parameters and the effective parameters.
The motion parameters can reflect the displacement of two feature points in the same feature combination; the effective parameters are used for comparing with the motion parameters, and then whether the displacement generated by the two characteristic points in the characteristic combination meets a preset standard or not is analyzed, and if yes, the characteristic combination can be determined as the effective characteristic combination.
The feature collection includes all feature combinations, the feature collection is equivalent to a displacement list, the feature combinations are equivalent to columns in the displacement list, and each column includes motion parameters. The feature combinations include valid feature combinations and invalid feature combinations, wherein the valid feature combinations include two valid feature points, and the invalid feature combinations include two invalid feature points.
Referring to fig. 4 and 5, in the specific method of S022, the method includes:
s0221, presetting a reference coordinate system and effective parameters.
The reference coordinate system is a plane rectangular coordinate system, the reference coordinate system is established based on the images, and the positions of the origin of the reference coordinate system in different images are consistent. In the present embodiment, the x-axis of the reference coordinate system extends in a direction parallel to the preset traveling direction during normal traveling of the vehicle.
S0222, determining a starting coordinate and an end coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system.
All the feature points can determine corresponding coordinates in the reference coordinate system, and it can be understood that, if the vehicle is in a driving state, the positions of the two feature points in the feature combination in the reference coordinate system are different. Because each feature combination comprises two feature points, each feature combination corresponds to two coordinates which are respectively a start coordinate and an end coordinate, wherein a time node acquired by the feature point corresponding to the start coordinate is positioned before a time node acquired by the feature point corresponding to the end coordinate.
And S0223, determining the motion parameters according to the initial coordinates and the end coordinates.
The motion parameters include a travel motion parameter and a deviation motion parameter. The travel motion parameter is used to reflect the degree of correspondence between the motion direction of the feature point and the preset driving direction of the vehicle. In this embodiment, the direction corresponding to the travel motion parameter is the x-axis direction of the reference coordinate system, and the travel motion parameter is the absolute value of the difference between the x-coordinate of the start coordinate and the x-coordinate of the end coordinate.
The deviation motion parameter is used for reflecting the deviation degree between the motion direction of the characteristic point and the preset driving direction of the vehicle, and is equivalent to the deviation amount reflecting the displacement of the characteristic point; in this embodiment, the direction corresponding to the deviation motion parameter is the y-axis direction of the reference coordinate system, and the deviation motion parameter is the absolute value of the difference between the y coordinate of the start coordinate and the y coordinate of the end coordinate.
If the feature point (X, Y) and the feature point (X', Y') are two feature points in the same feature combination, the value of the travel motion parameter of that feature combination is the absolute value of the difference between X' and X, and the value of the deviation motion parameter is the absolute value of the difference between Y' and Y.
S0224, determining effective characteristic combinations in the characteristic total set according to the effective parameters, the advancing motion parameters and the deviation motion parameters.
The effective parameters comprise a first effective parameter and a second effective parameter, wherein the first effective parameter corresponds to the y-axis direction of the reference coordinate system. By analyzing all feature combinations through comparison among the first effective parameter, the second effective parameter, the advancing motion parameter and the deviation motion parameter, the effective feature combination can be determined.
Referring to fig. 4 and 5, in the specific method of S0224, the method includes:
s02241, comparing the first effective parameter with the deviation motion parameter, and determining a basic feature combination in the feature total set.
If the deviation motion parameter is greater than or equal to the first effective parameter, the motion of the feature combination corresponding to the deviation motion parameter is seriously deviated from the normal driving direction of the vehicle, so that the feature combination corresponding to the deviation motion parameter is judged to be a failure feature combination; otherwise, judging that the deviation degree of the feature combination corresponding to the deviation motion parameter is within a preset threshold value, and acquiring the feature combination as a basic feature combination.
S02242, determining effective characteristic combinations in the characteristic total set according to the ratio of the advancing motion parameter and the deviation motion parameter and a preset second effective parameter.
The travel motion parameter may reflect a distance traveled by the vehicle in a normal traveling direction, and in a normal traveling state, a movement of the feature point on the x axis should be clearly different from a movement of the feature point on the y axis.
If the ratio of the travel motion parameter to the deviation motion parameter is smaller than the second effective parameter, the travel motion parameter and the deviation motion parameter are too close to each other, the acquisition of the two feature points is judged to be inaccurate, and the basic feature combination corresponding to that travel motion parameter is determined to be a failure feature combination. If the ratio of the travel motion parameter to the deviation motion parameter is greater than or equal to the second effective parameter, the travel motion parameter is within the normal range, and the basic feature combination corresponding to that travel motion parameter is determined to be an effective feature combination.
Through screening with the first effective parameter, a plurality of basic feature combinations can be selected from the feature total set, and through comparison with the second effective parameter, a plurality of effective feature combinations can be further selected; by calculating with the effective feature combinations, the displacement of the vehicle in the normal driving direction can be determined more accurately.
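A sketch of the screening in S02241 and S02242, assuming the x-axis is aligned with the preset driving direction as described above; the two threshold values are illustrative assumptions, not values disclosed in the application.

```python
FIRST_EFFECTIVE_PARAMETER = 8.0    # maximum tolerated deviation along y, in pixels (assumed value)
SECOND_EFFECTIVE_PARAMETER = 3.0   # minimum ratio of travel motion to deviation motion (assumed value)

def effective_combinations(combinations):
    """Keep only the effective feature combinations from the feature total set."""
    effective = []
    for (x0, y0), (x1, y1) in combinations:              # start coordinate, end coordinate
        travel = abs(x1 - x0)                            # travel motion parameter
        deviation = abs(y1 - y0)                         # deviation motion parameter
        if deviation >= FIRST_EFFECTIVE_PARAMETER:       # deviates too far from the driving direction
            continue                                     # failure feature combination
        ratio = travel / deviation if deviation > 0 else float("inf")
        if ratio >= SECOND_EFFECTIVE_PARAMETER:          # travel clearly dominates deviation
            effective.append((travel, deviation))
    return effective
```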
Referring to fig. 4 and 5, S03 determines effective motion parameters according to the start coordinates and the end coordinates of the effective feature combinations, and determines displacement parameters according to the respective effective motion parameters.
The effective motion parameter refers to the displacement of a feature point of an effective feature combination in the normal driving direction of the vehicle, and is related to the positions of the feature points in the effective feature combination. Since the travel motion parameter is the displacement of the feature point on the x-axis, the travel motion parameter of an effective feature combination can be taken as its effective motion parameter.
The displacement parameter refers to the displacement of the vehicle in the normal driving direction, and the displacement of the characteristic point can reflect the real displacement of the vehicle, so that the displacement parameter can be determined through the effective action parameter. In order to make the displacement parameter more accurate and convincing, the displacement parameter is the average of all effective motion parameters.
In this embodiment, the distance between adjacent units on the x-axis of the reference coordinate system is the length of the pixel of the image, and the movement reflected by the effective motion parameter is equivalent to that the feature point moves by several pixel points on the x-axis, so that in the process of calculating the effective motion parameter, the length of a single pixel is defined as the pixel length m, and the number of pixels between the start coordinate and the end coordinate is defined as the moving pixel N, and the effective motion parameter can be obtained by the formula (1).
effective motion parameter = pixel length m * moving pixels N    (1)
Similarly, defining the average of the moving pixels N over all effective feature combinations as the moving pixels Nx, the displacement parameter can be obtained through formula (2):
displacement parameter = pixel length m * moving pixels Nx    (2)
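Formulas (1) and (2) translate directly into code; the pixel counts and pixel length below are illustrative assumptions, not values from the application.

```python
def displacement_parameter(moving_pixels, pixel_length_m):
    """Formula (2): displacement parameter = pixel length m * mean moving pixels Nx."""
    n_x = sum(moving_pixels) / len(moving_pixels)        # moving pixels Nx, averaged over combinations
    return pixel_length_m * n_x

# Three effective feature combinations moved 141, 139 and 140 pixels along x,
# each pixel spanning 4 micrometers on the sensor (illustrative values):
print(displacement_parameter([141, 139, 140], 4e-6))     # ≈ 0.00056 (meters, on the image plane)
```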
Referring to fig. 6, S04, a synchronization speed is determined according to the displacement parameter and the time parameter.
The displacement of the vehicle along the normal driving direction in a specific time can be calculated through the displacement parameter and the time parameter, the basic speed of the vehicle is calculated, and the synchronous speed of the vehicle at a future preset time node can be deduced by introducing a preset acceleration into the basic speed for calculation.
Referring to fig. 6, the specific method of S04 includes:
and S041, determining the time difference value according to the position of the image in the image sequence and a preset time parameter.
The two selected images have different acquisition times, and the difference between the two acquisition times is the time difference value. In this embodiment, the two selected images are two adjacent frames of the image sequence, so the time difference value equals the time parameter; in other embodiments, if two images of the image sequence separated by e frames are selected, the time difference value is e times the time parameter.
And S042, determining a height parameter, and determining a basic speed according to the height parameter, the displacement parameter, the time difference value and a preset focal length parameter.
The height parameter refers to a distance between a lens of the image acquisition module 1 (refer to fig. 3) and a chassis of the vehicle, and the image in this embodiment is acquired by the image acquisition module 1 by shooting the vehicle in a spaced manner, so that an actual driving distance of the vehicle can be calculated by using the acquired height parameter, a focal length of the lens of the image acquisition module 1, and a displacement parameter acquired by analyzing a pixel point, and a basic speed of the vehicle can be calculated by using the actual driving distance and a time difference value of the vehicle.
Defining the focal length of the camera lens as the focal length f, the distance between the camera lens and the vehicle as the distance h, and the actual driving distance of the vehicle as the distance L, the distance L can be obtained by the formula (3).
distance L = displacement parameter * distance h / focal length f    (3)
If the time difference is defined as Δ t, the base speed can be obtained by the formula (4).
base speed = distance L / Δt    (4)
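Formulas (3) and (4) in code, continuing the illustrative numbers above; the height, focal length and time difference are assumed values.

```python
def base_speed(displacement_param, height_h, focal_length_f, delta_t):
    distance_l = displacement_param * height_h / focal_length_f   # formula (3): actual driving distance
    return distance_l / delta_t                                   # formula (4): base speed

# 0.00056 m of image-plane displacement, lens 0.4 m below the chassis, 8 mm focal length,
# frames 0.02 s apart (all assumed values):
print(base_speed(0.00056, 0.4, 0.008, 0.02))                      # ≈ 1.4 m/s, roughly 5 km/h
```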
And S043, determining a target speed according to the basic speed, storing the target speed into a preset synchronous database, and acquiring the target speed meeting the time node condition from the synchronous database as the synchronous speed.
The synchronous speed refers to a speed capable of reflecting the running speed of the vehicle, and can participate in other technologies which need to introduce the running speed of the vehicle. The synchronous database contains all the target speeds sequentially obtained according to the time sequence. In practical applications such as vehicle chassis shooting technology, after the synchronous speed is determined, the system can adjust the shooting frequency of the camera for shooting the segmented pictures in real time, namely adjust the time interval when two adjacent segmented pictures are obtained, so as to compensate the defect of vehicle speed change in real time.
However, when the speed of the vehicle is measured by such image calculation, the system consumes time transceiving electric signals and taking pictures. This consumed time makes the actual time difference deviate from the preset time difference value by tens to hundreds of milliseconds; even at a speed of 5 km/h, the vehicle travels up to tens of centimeters in that interval, so the accuracy is not high enough and a speed synchronization difference exists.
Therefore, if the synchronization speed is determined by combining the image calculation, the risk is high, and in order to alleviate the problem of speed synchronization difference, the scheme also introduces a speed deduction method to predict the target speed. The principle of speed deduction is to predict future changes of the basic speed and the frequency and amplitude of the changes based on the obtained basic speed, thereby deducing target speeds after a certain time, storing the target speeds in a synchronous database, and arranging the target speeds according to time nodes.
When the synchronous speed of a given time node is needed, the target speed at the selected time node is read out as the synchronous speed, which is equivalent to calculating the synchronous speed in advance. This pre-calculation replaces the image-calculation method: no images need to be acquired again when the synchronous speed is subsequently obtained, only the preset time difference value participates in the deduction, so the discrepancy between the actual time difference and the preset time difference value no longer arises and the accuracy of the vehicle speed is improved.
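A minimal sketch of this pre-computation idea, assuming the synchronous database is a simple mapping from time node to target speed; the storage layout is an assumption, since the application only requires time-ordered storage.

```python
synchronous_database = {}                      # time node -> target speed, filled in time order

def store_target_speed(time_node, target_speed):
    synchronous_database[time_node] = target_speed

def synchronous_speed(time_node):
    # a plain lookup: no new image acquisition and no extra signal round-trip is needed here
    return synchronous_database[time_node]
```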
Referring to fig. 8, in the specific method of S043, the method includes:
s0431, storing the basic speed as a target speed into a preset synchronous database, storing the basic speed into a preset basic database, judging whether the basic speed in the basic database meets deduction conditions or not, and executing S0432 if the basic speed in the basic database meets the deduction conditions; and if not, returning to S021, and selecting the next group of basic image groups according to the time sequence.
The synchronous database refers to a database containing all target speeds, and the obtained basic speed can be stored in the synchronous database as an initial target speed. The deduction condition refers to whether there are two basic velocities corresponding to different time periods and having the time difference value coincident in the basic database, and thus the number of basic velocities in the basic database should be 2.
If the number of base speeds in the base database is less than 2, the necessary condition for speed deduction is not met, and it is necessary to return to step S021 and reselect a base image group. It should be noted that the reselected base image group should advance by one image: for example, if image a1, image a2 and image a3 are sequentially arranged in time order in the acquired image sequence and the current basic image group includes image a1 and image a2, the reselected basic image group should include image a2 and image a3.
If the number of base speeds in the base database is equal to 2, the speed deduction satisfies the necessary condition, and the speed deduction can be performed.
It can be understood that, since at most two base speeds exist at any time, only two of the target speeds in the synchronization database are obtained directly by taking a base speed as the target speed; all other target speeds in the synchronization database are obtained by deduction from the two base speeds.
S0432, determining a speed deduction method according to different basic speeds in the basic database, deducting the target speed according to the determined speed deduction method, and sequentially storing the determined target speed into the synchronous database according to a time sequence.
During actual driving, a vehicle can hardly move in a straight line at a completely constant speed along the preset driving direction; it still has a certain acceleration, and that acceleration may itself change. Therefore, to obtain a more accurate target speed, speed deduction methods corresponding to different speed-change patterns are needed on the basis of the two existing base speeds. In this embodiment, the speed deduction methods include a momentum weighting method, in which the acceleration is assumed not to change, and a fluctuation amount weighting method, in which the acceleration itself changes during the deduction.
Referring to fig. 8 and 9, in the specific method of S0432, the method includes:
s04321, determining a first basic speed and a second basic speed according to different basic speeds in the basic database, and determining a first acceleration according to the first basic speed and the second basic speed.
The base speed stored earlier in time in the base database is the first base speed, and the base speed stored later is the second base speed; the first base speed corresponds to the first basic image group, and the second base speed corresponds to the next basic image group. In this embodiment, only two base speeds are stored in the base database, and when a new base speed is added, the base speed stored earliest is discarded.
If an image a1, an image a2 and an image a3 are sequentially arranged in time sequence in the acquired image sequence, wherein the interval time between the image a1 and the image a2 is a time difference value, the interval time between the image a2 and the image a3 is a time difference value, and the time difference value is an integral multiple of a time parameter;
the first base speed is obtained from the image a1 and the image a2 by the method described in steps S021 to S042;
the second base speed is obtained from the image a2 and the image a3 by the method described in steps S021 to S042.
Further, by comparing the first base speed with the second base speed, the first acceleration between the first base speed and the second base speed can be determined.
Defining the first base speed as Vm-3, the second base speed as Vm-2, the first acceleration as Am-3, and the time difference value as Δt, Am-3 can be obtained by formula (5):
Am-3 = (Vm-2 - Vm-3) / Δt    (5)
S04322, judging whether the first acceleration meets a speed constant condition, if so, returning to S021, and selecting a next group of basic image groups according to the time sequence; otherwise, execute S04323.
If the first acceleration equals 0, the constant speed condition is satisfied, meaning that after the time difference Δt elapses the vehicle speed remains at the first base speed and the vehicle is traveling at constant speed; this rarely occurs during actual driving, so the method returns to S021 to recalculate the first acceleration.
If the first acceleration is not equal to 0, the constant speed condition is not satisfied, indicating that the vehicle speed changes with time.
And S04323, determining a third basic speed according to the first acceleration, the second basic speed and the time difference value.
The third base speed is the speed that the second base speed becomes after the time difference Δt elapses. Using the first acceleration, how the second base speed changes over Δt can be deduced, giving the third base speed. Defining the third base speed as Vm-1, Vm-1 can be obtained by formula (6):
Vm-1 = Vm-2 + Am-3 * Δt    (6)
And S04324, determining a second acceleration according to the second base speed, the third base speed and the time difference value.
The second acceleration between the second base speed and the third base speed is obtained by comparing the second base speed with the third base speed. Defining the second acceleration as Am-2, Am-2 can be obtained by formula (7):
Am-2 = (Vm-1 - Vm-2) / Δt    (7)
S04325, determining an acceleration change value according to the first acceleration and the second acceleration, judging whether the acceleration change value meets a momentum constant condition, and if so, executing S043261; otherwise, executing S043262.
The acceleration change value refers to a difference value between before and after the acceleration changes when the acceleration of the vehicle is changed, and whether the acceleration change value exists or not can be used for judging whether the first acceleration and the second acceleration are equal or not.
If the first acceleration and the second acceleration are equal, the acceleration change value is 0 and the momentum constant condition is satisfied: the vehicle is in uniform acceleration and its acceleration does not change during driving, so the target speed can be deduced by the momentum weighting method. Conversely, if the acceleration change value is not 0, the momentum constant condition is not satisfied and the acceleration itself changes while the vehicle is driving, so the target speed is deduced by the fluctuation amount weighting method.
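The selection in S04325 reduces to a comparison of the two accelerations; a small tolerance is used below instead of an exact comparison with zero, which is an implementation assumption rather than part of the application.

```python
def choose_deduction_method(first_acceleration, second_acceleration, tol=1e-6):
    acceleration_change_value = second_acceleration - first_acceleration
    if abs(acceleration_change_value) <= tol:        # momentum constant condition satisfied
        return "momentum weighting method"           # acceleration is unchanged
    return "fluctuation amount weighting method"     # acceleration itself changes
```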
And S04326, determining the target speed according to the selected speed deduction method, sequentially storing the determined target speeds into a synchronous database according to a time sequence, and executing S0433.
Referring to fig. 8 and 10, in the specific method of S04326, the method includes:
s043261, determining a target speed according to the first basic speed, the first acceleration and the time difference value by a momentum weighting method, sequentially storing the determined target speed into a synchronous database according to a time sequence, and executing S0433.
The momentum weighting method derives the speed using a constant acceleration, i.e. it disregards any change in the amplitude of the acceleration. Since the vehicle is in uniform acceleration, the first base speed changes according to the first acceleration as time increases, and the target speed corresponding to a given time node can be determined.
Defining the target speed as Vm, the time node corresponding to the target speed as Tm, the time node corresponding to the first base speed as T0, and the time difference between Tm and T0 as tm, where tm is a positive integer multiple of the time parameter, Vm can be obtained by formula (8):
Vm = Vm-3 + Am-3 * tm    (8)
According to the momentum weighting method, the target speeds at different time nodes can be obtained sequentially in the order of increasing time nodes. After the obtained target speeds are stored in the synchronous database in the order in which they are obtained, each target speed has a unique sequence position corresponding to its time node; when the actual running speed of the vehicle at a certain time node needs to be acquired, the target speed at the corresponding sequence position can be acquired from the synchronous database as the synchronous speed representing the actual running speed of the vehicle.
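The following non-limiting Python sketch illustrates the momentum weighting method of formula (8); the list standing in for the synchronous database and all names and sample values are illustrative assumptions.

def momentum_weighting(v_first_base, a_first, time_parameter, num_nodes):
    """Formula (8) sketch: V_m = V_{m-3} + A_{m-3} * t_m, where t_m is a
    positive integer multiple of the time parameter. Returns the target
    speeds for the next num_nodes time nodes, in time order."""
    sync_database = []                           # stands in for the synchronous database
    for k in range(1, num_nodes + 1):
        t_m = k * time_parameter                 # time difference to the k-th future node
        sync_database.append(v_first_base + a_first * t_m)
    return sync_database

# Example: first basic speed 10 m/s, first acceleration 2 m/s^2, time parameter 0.1 s
speeds = momentum_weighting(10.0, 2.0, 0.1, 5)   # [10.2, 10.4, 10.6, 10.8, 11.0]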
In addition, in other embodiments, the momentum weighting method may also combine the first basic speed and the first acceleration using other formulas, such as formula (8-1) or formula (8-2), to derive the target speed.
Wherein:
Vm=Vm-3+β*Am-3*tm (8-1)
In formula (8-1), β is a momentum adjusting coefficient and is variable: different first accelerations correspond to different momentum adjusting coefficients, and each momentum adjusting coefficient is obtained by the user through experimental experience and preset in the system.
Vm=Vm-3+Am-3*δ*tm (8-2)
In formula (8-2), δ is a momentum adjusting coefficient and is unique; it is obtained by the user through experimental experience and preset in the system.
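As a brief sketch of the adjusted variant in formula (8-2), assuming an illustrative placeholder value for the coefficient:

def momentum_weighting_adjusted(v_first_base, a_first, t_m, delta=0.95):
    """Formula (8-2) sketch: V_m = V_{m-3} + A_{m-3} * delta * t_m, where
    delta is the single preset momentum adjusting coefficient obtained from
    experimental experience (0.95 here is purely illustrative)."""
    return v_first_base + a_first * delta * t_m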
And S043262, determining a target speed according to the third basic speed, the time difference value, the acceleration change value and a preset fluctuation coefficient by using a fluctuation amount weighting method, sequentially storing the determined target speed into a preset synchronous database according to a time sequence, and executing S0433.
The fluctuation amount weighting method is a method of weighting the variation amplitude of the acceleration into the speed prediction, so that the iteration of the acceleration change is anticipated and a more accurate speed value is obtained. The fluctuation coefficient refers to an adjusting coefficient preset in the system for determining the variation amplitude of the acceleration; the magnitude of the fluctuation coefficient affects the magnitude of the acceleration change.
Since the variation amplitude of the acceleration is not constant over different time periods, in order to make the predicted acceleration closer to the actual value, the fluctuation coefficient should change with the actual speed; that is, the fluctuation coefficient takes different values at different time nodes, so that different acceleration change values have different fluctuation coefficients.
In this embodiment, each fluctuation coefficient is obtained by the user through experimental experience and preset in the system; in other embodiments, the fluctuation coefficient can also be derived through methods such as big data analysis and machine learning.
Specifically, the acceleration change value can be determined by analyzing the change between the first acceleration and the second acceleration. Defining the acceleration change value as AAm-3, AAm-3 can be obtained by formula (9):
AAm-3=Am-2-Am-3 (9)
Defining the fluctuation coefficient as α, the value of the fluctuation coefficient is associated with the time node corresponding to the acceleration change value, that is, the fluctuation coefficient takes different values at different time nodes; Vm can be obtained by formula (10):
Vm=Vm-1+(Am-2+α0*AA0+α1*AA1+…+αm-3*AAm-3)*Δt (10)
Wherein Vm-1 is the third base speed and Am-2 is the second acceleration. It can be understood that the time node corresponding to Vm is the m-th time node on the time axis, and the time node corresponding to Vm-1 is the (m-1)-th time node on the time axis.
It can be understood that, after the first base speed and the second base speed are determined, the second acceleration, the third base speed and the acceleration change value can be calculated through formula (5), formula (6), formula (7) and formula (9), and the target speed can then be calculated through formula (10) in combination with the fluctuation coefficient.
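A single application of formula (10), in the form given above, can be sketched as follows; the function name, argument names and sample values are illustrative assumptions and do not form part of the original disclosure.

def fluctuation_weighting_step(v_third_base, a_second, acc_changes, coefficients, dt):
    """One application of formula (10): V_m = V_{m-1} + (A_{m-2} +
    sum(alpha_i * AA_i)) * dt, where acc_changes holds the acceleration
    change values AA_0 .. AA_{m-3} and coefficients holds the matching
    preset fluctuation coefficients alpha_0 .. alpha_{m-3}."""
    weighted = sum(c * aa for c, aa in zip(coefficients, acc_changes))
    return v_third_base + (a_second + weighted) * dt

# Example: V2 = 11.0 m/s, A1 = 5 m/s^2, AA0 = 0.3 m/s^2, alpha0 = 0.5, dt = 0.1 s
v3 = fluctuation_weighting_step(11.0, 5.0, [0.3], [0.5], 0.1)   # 11.515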
For example, taking m = 3, V0 and V1 are determined by the method of steps S021 to S042;
on the basis that V0 and V1 have been determined, A0 can be obtained by formula (5), V2 by formula (6), A1 by formula (7) and AA0 by formula (9); on the basis that V2, A1 and AA0 have been determined, V3 can be obtained by formula (10) in combination with the corresponding fluctuation coefficient;
A2 is obtained by formula (7) on the basis that V2 and V3 have been determined, AA1 is obtained by formula (9) in combination with A1, and V4 is obtained by formula (10) on the basis that V3, A2, AA1 and AA0 have been determined, in combination with the corresponding fluctuation coefficients;
A3 is obtained by formula (7) on the basis that V3 and V4 have been determined, AA2 is obtained by formula (9) in combination with A2, and V5 is obtained by formula (10) on the basis that V4, A3, AA2, AA1 and AA0 have been determined;
A4 is obtained by formula (7) on the basis that V4 and V5 have been determined, AA3 is obtained by formula (9) in combination with A3, and V6 is obtained by formula (10) on the basis that V5, A4, AA3, AA2, AA1 and AA0 have been determined;
by analogy,
An-2 is obtained by formula (7) on the basis that Vn-2 and Vn-1 have been determined, AAn-3 is obtained by formula (9) in combination with An-3, and Vn is obtained by formula (10) on the basis that Vn-1, An-2, AAn-3, AAn-4, AAn-5, …, AA0 have been determined, in combination with the corresponding fluctuation coefficients.
From the above derivation, it can be seen that once the first base speed and the second base speed are determined, the vehicle speed at the time node following the second base speed can be obtained; by continually iterating and updating the first base speed and the second base speed, the vehicle speed at each subsequent time node can be obtained in time order, so that the target speeds at future time nodes are obtained step by step. In this embodiment, only the initial first base speed and the initial second base speed are determined by analyzing the image feature points, that is, only V0 and V1 need to be obtained through image processing; all the other speeds can be deduced, so that the target speed corresponding to a future time node can be obtained.
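The iteration described above can be sketched end to end as follows. This is a minimal illustration: formula (5) is not reproduced in this section and is assumed here to be the speed difference divided by the time difference, formula (10) is taken in the reconstructed form given above, and the names and sample values are illustrative assumptions rather than a definitive implementation of the disclosed method.

def deduce_future_speeds(v0, v1, dt, coefficients, num_future):
    """Sketch of the iterative deduction: starting from the image-derived
    base speeds V0 and V1, apply formulas (5)-(7), (9) and the reconstructed
    formula (10) to obtain V2, V3, ... for num_future further time nodes.
    coefficients[i] is the preset fluctuation coefficient paired with AA_i."""
    speeds = [v0, v1]
    accelerations = [(v1 - v0) / dt]            # A0, the first acceleration
    speeds.append(v1 + accelerations[0] * dt)   # V2 by formula (6)
    acc_changes = []                            # AA0, AA1, ...
    while len(speeds) < num_future + 2:
        m = len(speeds)                         # index of the speed being deduced
        a_prev = (speeds[m - 1] - speeds[m - 2]) / dt       # A_{m-2} by formula (7)
        acc_changes.append(a_prev - accelerations[-1])      # AA_{m-3} by formula (9)
        accelerations.append(a_prev)
        weighted = sum(c * aa for c, aa in zip(coefficients, acc_changes))
        speeds.append(speeds[m - 1] + (a_prev + weighted) * dt)  # V_m by formula (10)
    return speeds

# Example: V0 = 10.0 m/s, V1 = 10.5 m/s, dt = 0.1 s, three preset fluctuation coefficients
future_speeds = deduce_future_speeds(10.0, 10.5, 0.1, [0.5, 0.4, 0.3], 3)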
According to the fluctuation amount weighting method, the target speeds at different time nodes can be obtained sequentially in the order of increasing time nodes. After the obtained target speeds are stored in the synchronous database in the order in which they are obtained, each target speed has a sequence position corresponding to its time node.
And S0433, acquiring the synchronization speed from the synchronization database based on the selected time node.
When the actual running speed of the vehicle at a certain time node needs to be acquired, the target speed at the corresponding sequence position can be acquired from the synchronous database based on the selected time node, and the acquired target speed can be used as a synchronous speed representing the actual running speed of the vehicle.
The implementation principle of the vehicle speed detection method provided by the embodiment of the application is as follows: by extracting feature points, continuously capturing images and analyzing the displacement of the feature points, the actual running speed of the vehicle can be calculated, which reduces interference with the running path of the vehicle during actual speed measurement and is more convenient. In the feature point analysis, screening for effective feature points improves the accuracy of the speed obtained from the analyzed feature points and thus improves the precision of speed detection.
After base speeds meeting the deduction condition are obtained, the system can automatically deduce each synchronous speed of the vehicle at future time nodes from the first two base speeds and the preset fluctuation coefficients, and each synchronous speed can be used as an auxiliary parameter in technologies that require the actual running speed of the vehicle, such as vehicle chassis imaging. With this speed deduction method, on the one hand, the synchronous speed can be obtained in advance; compared with always obtaining the synchronous speed by analyzing feature points, the speed deduction method obtains the synchronous speed more quickly, so that it can be transmitted to other devices for matching sooner. On the other hand, the speed deduction method does not require the system to re-acquire images when calculating subsequent synchronous speeds; only the preset time difference value participates in the deduction, which reduces the time the system spends receiving and transmitting electrical signals or taking photographs, avoids discrepancies between the actual time difference and the preset time difference value, and improves the accuracy of the vehicle speed.
Example two:
Referring to fig. 11, in an embodiment, a vehicle speed detection system is provided, which corresponds one-to-one to the vehicle speed detection method in the first embodiment and includes an image acquisition module 1, an effective feature point acquisition module 2, a displacement parameter acquisition module 3 and a speed prediction module 4. The functional modules are described in detail as follows:
the image acquisition module 1 is configured to acquire an image sequence based on a preset time parameter.
The image acquisition module 1 includes a camera sub-module 11, a lens sub-module 12 for increasing a shooting visual field of the camera sub-module 11, an illumination sub-module 13 for increasing an ambient brightness of the shooting visual field, and a height measurement sub-module 14 for detecting a distance between the lens sub-module 12 and the vehicle.
The effective feature point acquisition module 2 is configured to acquire effective feature points in the image sequence and determine effective feature combinations, wherein the effective feature combinations include effective feature points corresponding to different images.
And the displacement parameter acquisition module 3 is used for determining the displacement parameters of the effective feature combinations according to the positions of the effective feature points in the same effective feature combination.
And the speed prediction module 4 is used for determining the synchronous speed according to the displacement parameter and the time parameter.
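As a structural illustration only, the module split can be sketched as the following skeleton; the class and method names are assumptions and do not come from the original disclosure.

class ImageAcquisitionModule:
    """Module 1: acquires the image sequence at the preset time parameter,
    relying on the camera, lens, illumination and height-measurement sub-modules."""
    def acquire_sequence(self, time_parameter):
        raise NotImplementedError  # hardware-dependent capture loop

class EffectiveFeaturePointModule:
    """Module 2: extracts feature points and screens the effective feature
    combinations across different images."""
    def get_effective_combinations(self, image_sequence):
        raise NotImplementedError

class DisplacementParameterModule:
    """Module 3: derives the displacement parameter from the positions of the
    effective feature points within the same combination."""
    def get_displacement_parameter(self, combinations):
        raise NotImplementedError

class SpeedPredictionModule:
    """Module 4: determines the synchronous speed from the displacement
    parameter and the time parameter."""
    def determine_synchronous_speed(self, displacement_parameter, time_parameter):
        raise NotImplementedError

def detect_vehicle_speed(m1, m2, m3, m4, time_parameter):
    """Illustrative pipeline wiring the four modules together in the order
    described above."""
    images = m1.acquire_sequence(time_parameter)
    combinations = m2.get_effective_combinations(images)
    displacement = m3.get_displacement_parameter(combinations)
    return m4.determine_synchronous_speed(displacement, time_parameter)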
Example three:
Referring to fig. 12, in one embodiment, a computer-readable storage medium is provided, which stores a computer program that can be loaded by a processor to execute the above vehicle speed detection method. When executed by the processor, the computer program implements the following steps: S01, acquiring an image sequence based on a preset time parameter.
S021, selecting two images in the image sequence as a basic image group, acquiring the feature points of the selected basic image group, determining feature combinations, and establishing a feature total set containing all the feature combinations.
S0221, presetting a reference coordinate system and effective parameters.
S0222, determining a starting coordinate and an end coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system.
And S0223, determining the motion parameters according to the initial coordinates and the end coordinates.
S02241, comparing the first effective parameter with the deviation motion parameter, and determining a basic feature combination in the feature total set.
S02242, determining effective characteristic combinations in the characteristic total set according to the ratio of the advancing motion parameter and the deviation motion parameter and a preset second effective parameter.
And S03, determining effective action parameters according to the start coordinates and the end coordinates of the effective characteristic combinations, and determining displacement parameters according to each effective action parameter.
And S041, determining the time difference value according to the position of the image in the image sequence and a preset time parameter.
And S042, determining a height parameter, and determining a basic speed according to the height parameter, the displacement parameter, the time difference value and a preset focal length parameter.
S0431, storing the basic speed as a target speed into a preset synchronous database, storing the basic speed into a preset basic database, judging whether the basic speed in the basic database meets deduction conditions or not, and executing S04321 if the basic speed in the basic database meets the deduction conditions; and if not, returning to S021, and selecting the next group of basic image groups according to the time sequence.
S04321, determining a first basic speed and a second basic speed according to different basic speeds in the basic database, and determining a first acceleration according to the first basic speed and the second basic speed.
S04322, judging whether the first acceleration meets the constant speed condition, if so, returning to S021, and selecting a next group of basic image groups according to the time sequence; otherwise, executing S04323.
And S04323, determining a third basic speed according to the first acceleration, the second basic speed and the time difference value.
And S04324, determining a second acceleration according to the second base speed, the third base speed and the time difference value.
S04325, determining an acceleration change value according to the first acceleration and the second acceleration, judging whether the acceleration change value meets a momentum constant condition, and if so, executing S043261; otherwise, executing S043262.
S043261, determining a target speed according to the first basic speed, the first acceleration and the time difference value by a momentum weighting method, sequentially storing the determined target speed into a synchronous database according to a time sequence, and executing S0433.
And S043262, determining a target speed according to the third basic speed, the time difference value, the acceleration change value and a preset fluctuation coefficient by using a fluctuation amount weighting method, sequentially storing the determined target speed into a preset synchronous database according to a time sequence, and executing S0433.
And S0433, acquiring the synchronization speed from the synchronization database based on the selected time node.
The computer-readable storage medium includes, for example, various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above embodiments are preferred embodiments of the present application and do not limit the protection scope of the present application; therefore, all equivalent changes made according to the structure, shape and principle of the present application shall fall within the protection scope of the present application.

Claims (11)

1. A vehicle speed detection method, characterized by comprising:
acquiring an image sequence based on a preset time parameter;
obtaining effective characteristic points in an image sequence, and determining an effective characteristic combination, wherein the effective characteristic combination comprises effective characteristic points corresponding to different images;
determining the displacement parameters of the effective feature combinations according to the positions of the effective feature points in the same effective feature combination;
and determining a synchronous speed according to the displacement parameter and the time parameter, wherein the synchronous speed can reflect the running speed of the vehicle.
2. The method of claim 1, wherein: the specific method for acquiring effective feature points in an image sequence and determining effective feature combinations comprises the following steps:
acquiring feature points in an image sequence, and determining a feature combination, wherein the feature combination comprises feature points corresponding to different images;
presetting effective parameters, determining the motion parameters of the feature points in the same feature combination, and determining the effective feature combination according to the motion parameters and the effective parameters.
3. The method of claim 2, wherein: the specific method for determining the effective characteristic combination according to the motion parameters and the effective parameters comprises the following steps:
presetting a reference coordinate system and effective parameters;
determining a starting coordinate and a terminal coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system;
determining motion parameters according to the starting coordinates and the end coordinates, wherein the motion parameters comprise deviation motion parameters deviating from the preset driving direction of the vehicle;
and determining the effective characteristic combination according to the deviation motion parameter and the effective parameter.
4. The method of claim 2, wherein: the specific method for determining the effective characteristic combination according to the motion parameters and the effective parameters comprises the following steps:
presetting a reference coordinate system and effective parameters;
determining a starting coordinate and a terminal coordinate according to the position of the feature point in the feature combination corresponding to the reference coordinate system;
determining motion parameters according to the starting coordinates and the end coordinates, wherein the motion parameters comprise a traveling motion parameter corresponding to a preset traveling direction of the vehicle and a deviation motion parameter deviating from the preset traveling direction of the vehicle;
and determining the effective characteristic combination according to the effective parameters, the advancing motion parameters and the deviation motion parameters.
5. The method of claim 1, wherein: the specific method for determining the synchronous speed according to the displacement parameter and the time parameter comprises the following steps:
determining a time difference value according to the position of the image in the image sequence and a preset time parameter, wherein the time difference value refers to the time difference between the acquisition times of different images;
determining a height parameter, and determining a basic speed according to the displacement parameter, the time difference value and a preset focal length parameter, wherein the height parameter is the distance between the image acquisition module and the vehicle;
the synchronous speed is determined according to the basic speed.
6. The method of claim 5, wherein: in a particular method of determining a synchronous speed from a base speed, comprising:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are respectively base speeds corresponding to different time nodes;
determining a first acceleration from the first base velocity and the second base velocity;
and the momentum weighting method is used for determining the target speed according to the first basic speed, the first acceleration and the time difference value.
7. The method of claim 5, wherein: in a particular method of determining a synchronous speed from a base speed, comprising:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are respectively base speeds corresponding to different time nodes;
determining a first acceleration according to the first basic speed, the second basic speed and the time difference value;
determining a third basic speed according to the first acceleration, the second basic speed and the time difference value;
determining a second acceleration according to the second basic speed, the third basic speed and the time difference value;
determining an acceleration change value according to the first acceleration and the second acceleration;
and the fluctuation weighting method is used for determining the target speed according to the third basic speed, the time difference value, the acceleration change value and a preset fluctuation coefficient.
8. The method of claim 7, wherein: the fluctuation coefficient can vary from time node to time node.
9. The method according to any one of claims 5 to 7, characterized in that: in a particular method of determining a synchronous speed from a base speed, comprising:
determining a first base speed and a second base speed, wherein the first base speed and the second base speed are respectively base speeds corresponding to different time nodes;
determining a first acceleration from the first base velocity and the second base velocity;
determining a third basic speed according to the first acceleration, the second basic speed and the time difference value;
determining a second acceleration according to the second basic speed, the third basic speed and the time difference value;
determining an acceleration change value according to the first acceleration and the second acceleration, and selecting a speed deduction method based on the acceleration change value, wherein the speed deduction method comprises a momentum weighting method and a fluctuation weighting method;
and determining the target speed according to the selected speed deduction method.
10. A vehicle speed detection system, characterized in that it comprises:
The image acquisition module (1) is used for acquiring an image sequence based on a preset time parameter;
the image acquisition module (1) comprises a camera sub-module (11), a lens sub-module (12) used for increasing the shooting visual field of the camera sub-module (11), an illumination sub-module (13) used for improving the ambient brightness of the shooting visual field, and a height measurement sub-module (14) used for detecting the distance between the lens sub-module (12) and a vehicle;
the effective characteristic point acquisition module (2) is used for acquiring effective characteristic points in the image sequence and determining effective characteristic combinations, wherein the effective characteristic combinations comprise effective characteristic points corresponding to different images;
the displacement parameter acquisition module (3) is used for determining the displacement parameters of the effective characteristic combination according to the positions of the effective characteristic points in the same effective characteristic combination; and the number of the first and second groups,
and the speed prediction module (4) is used for determining the synchronous speed according to the displacement parameter and the time parameter.
11. A computer-readable storage medium, in which a computer program is stored which can be loaded by a processor and which executes the method of any one of claims 1 to 9.
CN202110577554.8A 2021-05-26 2021-05-26 Vehicle speed detection method, system and computer readable storage medium Pending CN113484530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110577554.8A CN113484530A (en) 2021-05-26 2021-05-26 Vehicle speed detection method, system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113484530A true CN113484530A (en) 2021-10-08

Family

ID=77933239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110577554.8A Pending CN113484530A (en) 2021-05-26 2021-05-26 Vehicle speed detection method, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113484530A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6628804B1 (en) * 1999-02-19 2003-09-30 Fujitsu Limited Method and apparatus for measuring speed of vehicle
US20140063247A1 (en) * 2012-08-31 2014-03-06 Xerox Corporation Video-based vehicle speed estimation from motion vectors in video streams
CN104050818A (en) * 2014-06-30 2014-09-17 武汉烽火众智数字技术有限责任公司 Moving vehicle speed measurement method based on target tracking and feature point matching
CN104129305A (en) * 2014-08-19 2014-11-05 清华大学 Method for controlling speed of electric car
CN105989593A (en) * 2015-02-12 2016-10-05 杭州海康威视***技术有限公司 Method and device for measuring speed of specific vehicle in video record
CN111965383A (en) * 2020-07-28 2020-11-20 禾多科技(北京)有限公司 Vehicle speed information generation method and device, electronic equipment and computer readable medium
CN112037536A (en) * 2020-07-08 2020-12-04 北京英泰智科技股份有限公司 Vehicle speed measuring method and device based on video feature recognition
CN112162107A (en) * 2020-10-10 2021-01-01 广西信路威科技发展有限公司 Vehicle running speed measuring method and system based on spliced image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination