CN115761693A - Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image - Google Patents

Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image

Info

Publication number
CN115761693A
CN115761693A (application CN202211356444.XA)
Authority
CN
China
Prior art keywords
target
tracking
matching
detection
iou
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211356444.XA
Other languages
Chinese (zh)
Inventor
张卫国
陈丽
黄梅珊
欧阳良辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Motor Co Ltd
Original Assignee
GAC Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Motor Co Ltd filed Critical GAC Motor Co Ltd
Priority to CN202211356444.XA
Publication of CN115761693A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a vehicle positioning method based on detecting and tracking parking space mark points in panoramic images, which considers motion information and appearance information jointly and associates the two. A convolutional neural network performs feature re-identification on the parking space mark points of each image frame to obtain feature vectors that serve as the tracking input; cascade matching and IOU matching are performed between the feature vector of each detection result of the current frame and the feature vectors stored for each predicted tracking target. This increases the probability of a successful feature-vector match, alleviates target-tracking ID jumps under occlusion, improves the accuracy of real-time vehicle positioning, and keeps the cost of this panoramic-image positioning method low.

Description

Panoramic image-based parking space mark point detection and vehicle tracking and positioning method
Technical Field
The invention relates to the technical field of computer vision and automatic parking, and in particular to a method for detecting parking space mark points and tracking and positioning a vehicle based on panoramic images.
Background
Vehicle positioning for automatic parking is a prerequisite for path planning and vehicle control by an automatic parking system. At present, automatic parking systems based on panoramic surround-view images mainly detect parking spaces on the panoramic image and then control the vehicle to complete parking with a vehicle positioning method based on the Ackermann steering model. During parking, however, this method must compute parameters such as turning radius, angular velocity and heading angle to obtain the coordinate change of the vehicle; it suffers from large accumulated errors that cannot be corrected in real time, so large errors propagate into subsequent path planning and vehicle control and degrade the automatic parking result.
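To make the accumulation concrete, the sketch below shows one dead-reckoning update of an Ackermann-style model; the equations are a standard kinematic approximation assumed here for illustration, not taken from the prior-art system itself:

```python
import math

def dead_reckon_step(x, y, heading, speed, steer_angle, wheelbase, dt):
    """One Ackermann-model dead-reckoning update (illustrative sketch).
    Every step folds sensor and model error into (x, y, heading) and nothing
    corrects it, so the position error grows over the parking manoeuvre."""
    omega = speed * math.tan(steer_angle) / wheelbase  # yaw rate from the turning geometry
    heading += omega * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```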
To position the vehicle more accurately, various techniques have been adopted. For example, an existing YOLO-based image target tracking method comprises the following steps: S1, inputting a video; S2, detecting targets with the YOLO detection network and initializing a Kalman filter; S3, detecting the current frame image; if a target is detected, going to step S4, otherwise executing step S5; S4, computing the intersection over union (IOU) between the detected position and the predicted position of the target in the current frame, and taking the detected position as the target's position in the current frame if the IOU exceeds a preset threshold; S5, performing key-point matching between the target's position in the previous frame and its predicted position in the current frame, and obtaining the target's position in the current frame if the number of matched pairs exceeds a preset threshold; S6, checking whether the video has ended; if so, finishing tracking, otherwise returning to step S3. By key-point matching between the previous-frame target position and the Kalman-predicted current-frame position, the method decides whether a target is present at the predicted location, which effectively improves tracking accuracy. However, it only considers information from adjacent frames; although key-point matching mitigates tracking loss, frequent identity (ID) switches still occur after tracking is lost under long occlusions, so its robustness to missed detections and occlusion is limited.
Disclosure of Invention
To address the problems of the above prior art, namely frequent identity (ID) switches after tracking loss under long occlusions and limited robustness to missed detections and occlusion, the invention provides a method for detecting parking space mark points and tracking and positioning a vehicle based on panoramic images. The method considers motion information and appearance information simultaneously; by associating the two, it improves target tracking under occlusion and reduces target-tracking ID jumps.
The technical scheme adopted by the invention is as follows: a method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic images, comprising the following steps:
Step 1: before the automobile is automatically parked, acquiring the mapping relation between a pixel coordinate distance in the pixel coordinate system of the panoramic surround-view image and the corresponding actual coordinate distance in the world coordinate system;
Step 2: while the automobile is automatically parking, detecting, at each sampling interval, the position information and the feature vectors of the parking space mark points of the current frame by using a multi-target detector based on a convolutional neural network;
Step 3: initializing the detector, screening the first detection targets, performing feature re-identification and outputting second detection targets that meet a preset index; wherein the first detection targets comprise the feature vectors of the parking space mark points in step 2;
Step 4: initializing the tracker to track the first tracking targets and determine their tracking states; wherein the first tracking targets comprise the second detection targets in step 3; a tracking ID is generated automatically for each first tracking target when the tracker is initialized; the tracking state of a first tracking target is either a determined tracking state or an uncertain tracking state;
Step 5: predicting the tracking targets with a Kalman filtering algorithm to obtain the predicted tracking targets of the next frame; a first tracking target in the uncertain tracking state corresponds to a first uncertain predicted tracking target in the next frame, and a first tracking target in the determined tracking state correspondingly yields a first determined predicted tracking target;
Step 6: matching the second detection targets with the first determined predicted tracking targets by cascade matching and the Hungarian algorithm, and outputting the first target set generated by the matching; wherein the initial-frame detection results of step 3 do not participate, matching starting from the detection results of the second frame; the first target set comprises the first successfully matched targets formed by second detection targets successfully matched with first determined predicted tracking targets, the third detection targets (those second detection targets not successfully matched) and the second determined predicted tracking targets (those first determined predicted tracking targets not successfully matched);
Step 7: performing IOU matching between the IOU candidate predicted tracking targets and the third detection targets, and outputting the second target set generated by the IOU matching; wherein the IOU candidate predicted tracking targets comprise the first uncertain predicted tracking targets and the second determined predicted tracking targets; the second target set comprises the second successfully matched targets formed by IOU candidates successfully IOU-matched with third detection targets, the second IOU candidate predicted tracking targets (those IOU candidates not successfully IOU-matched) and the fourth detection targets (those third detection targets not successfully IOU-matched);
Step 8: calculating, over the first and second successfully matched targets, the average coordinate change between two consecutive frames of the same tracking ID in the pixel coordinate system, and converting it into the position change of the vehicle in the world coordinate system through the mapping relation between pixel coordinate distance and actual coordinate distance obtained in step 1.
The method uses a convolutional neural network to perform feature re-identification on the parking space mark points of each image frame, obtaining feature vectors that serve as the tracking input. Cascade matching and IOU matching are performed between the feature vector of each detection result of the current frame and the feature vectors stored for each predicted tracking target, which increases the probability of a successful feature match. This alleviates the tracking-ID jumps that occur under occlusion when a tracking target fails to match in the next frame and is treated again as a detection target, and therefore improves the accuracy of real-time vehicle positioning. Moreover, detecting and tracking parking space mark points with a convolutional neural network on the panoramic image realizes real-time vehicle positioning from pure vision: only the four fisheye cameras mounted around the vehicle are needed to generate the panoramic surround-view image, which costs less than the positioning sensors commonly used on current vehicles.
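For orientation, one frame of the pipeline above (steps 2 to 8) might be organised as in the following sketch. Every helper name here (detect_mark_points, screen, cascade_match, iou_match, Track, vehicle_displacement) is a hypothetical placeholder, since the patent specifies the procedure rather than an API; vehicle_displacement is sketched further below under step 8.

```python
def process_frame(frame, tracks, next_id, metres_per_pixel):
    """Illustrative per-frame loop for steps 2-8; all names are placeholders."""
    dets = screen(detect_mark_points(frame))       # steps 2-3: detect, then confidence/NMS screening
    for t in tracks:
        t.predict()                                # step 5: Kalman prediction into this frame
    confirmed = [t for t in tracks if t.confirmed]
    tentative = [t for t in tracks if not t.confirmed]
    # Step 6: cascade matching (appearance + motion gate) on determined tracks.
    m1, um_dets, um_confirmed = cascade_match(confirmed, dets)
    # Step 7: IOU matching on uncertain tracks plus the cascade leftovers.
    m2, um_tracks, um_dets2 = iou_match(tentative + um_confirmed, um_dets)
    for t, d in m1 + m2:
        t.update(d)                                # matched pairs refresh the tracks
    for d in um_dets2:                             # unmatched detections become new uncertain tracks
        tracks.append(Track(next_id, d))
        next_id += 1
    tracks = [t for t in tracks if not t.should_delete()]
    # Step 8: mean two-frame centre shift of the matched IDs -> vehicle displacement.
    move = vehicle_displacement([t for t, d in m1 + m2], metres_per_pixel)
    return move, tracks, next_id
```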
Preferably, the first and second successfully matched targets are output to step 4 to become new tracking targets in the determined tracking state, completing the update iteration of the tracking information.
Preferably, the fourth detection targets are output to step 4 to become new tracking targets in the uncertain tracking state. The second IOU candidate predicted tracking targets are screened and either deleted or output to step 4 to become new tracking targets: the uncertain predicted tracking targets among the second IOU candidates are deleted, while the second determined predicted tracking targets among them are screened further. A second determined predicted tracking target whose number of consecutive unmatched frames does not exceed the threshold is output to step 4 to become a new tracking target in the determined tracking state; one whose count exceeds the threshold is deleted, the threshold being 100. A container is built for each tracking target to store the feature vectors of the most recent 100 frames successfully associated with it, which raises the probability of a successful match, increases the number of successfully matched targets, and makes the vehicle position change reflected by the average two-frame coordinate change of each tracking ID more accurate.
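The container can be sketched with a bounded deque; the 100-vector budget follows the threshold stated above, while the class and attribute names are illustrative only:

```python
from collections import deque
import numpy as np

class Track:
    """Illustrative track record with the per-target feature gallery described above."""
    GALLERY_SIZE = 100  # the 100-frame budget stated in the text

    def __init__(self, track_id, feature):
        self.track_id = track_id
        self.features = deque(maxlen=self.GALLERY_SIZE)
        self.add_feature(feature)

    def add_feature(self, feature):
        # L2-normalise so cosine distance reduces to 1 minus a dot product;
        # the oldest vector is dropped automatically once 100 are stored.
        self.features.append(feature / np.linalg.norm(feature))

    def min_cosine_distance(self, det_feature):
        # Smallest cosine distance between a detection and any stored feature.
        det = det_feature / np.linalg.norm(det_feature)
        return float(1.0 - max(f @ det for f in self.features))
```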
Preferably, in step 2, the YOLO v5 algorithm is used as the multi-target detector to detect all parking space mark points on the panoramic surround-view image simultaneously and obtain the position information of the detection frames. YOLO is fast, generalizes well and has a low background false-positive rate; YOLO v3 added multi-scale prediction with a feature pyramid structure, which works well for small targets such as mark points, and YOLO v5 builds on this with adaptive anchor computation and adaptive grey-border padding, greatly improving both accuracy and speed.
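For orientation only, such a detector could be loaded through the public Ultralytics YOLOv5 hub interface roughly as follows; the weights file for parking space mark points is an assumption, since the patent names the algorithm but no artefacts:

```python
import cv2
import torch

# 'markpoint_yolov5.pt' is a hypothetical weights file fine-tuned on parking
# space mark points; the patent specifies "YOLO v5" but no loading mechanism.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='markpoint_yolov5.pt')
model.conf = 0.25                              # confidence threshold (illustrative value)

panorama = cv2.imread('surround_view.png')     # stitched surround-view frame (assumed input)
results = model(panorama[..., ::-1])           # BGR -> RGB for the AutoShape wrapper
boxes = results.xyxy[0].cpu().numpy()          # one row per mark point: [x1, y1, x2, y2, conf, cls]
```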
Preferably, in step 3, the position information, confidence and feature vector of each detection target are generated automatically when the detector is initialized; each detection target is selected and represented by its detection frame. Screening the detection targets comprises removing detection frames whose confidence is below a threshold, and applying a non-maximum suppression (NMS) algorithm to eliminate multiple detection frames on a single target. NMS suppresses non-maximum candidates in order to retain the locally maximal target.
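The screening can be sketched with the textbook greedy NMS routine below; the thresholds are illustrative, not values from the patent:

```python
import numpy as np

def screen_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Confidence filtering followed by greedy non-maximum suppression.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
    keep_mask = scores >= conf_thresh                  # drop low-confidence frames first
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = scores.argsort()[::-1]                     # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]           # suppress overlapping, lower-score frames
    return boxes[keep], scores[keep]
```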
Preferably, in step 6, the cascade matching comprises motion-information association and appearance-feature association. Appearance-feature association computes the minimum cosine distance matrix between the feature vector of each detection result of the current frame and the feature vector set stored for each determined predicted tracking target. Motion-information association computes the Mahalanobis distance between each determined predicted tracking target and each detection result, and sets the corresponding entry of the cosine distance matrix to infinity wherever the Mahalanobis distance exceeds a threshold. The Mahalanobis distance between a predicted tracking target and a detection result is:
$d(i,j) = (d_j - y_i)^{\mathrm{T}} S_i^{-1} (d_j - y_i)$

where $d_j$ denotes the position of the j-th detection frame, $y_i$ denotes the target position predicted by the i-th tracker, and $S_i$ denotes the covariance matrix between the detected position and the mean tracked position.
The processed cosine distance matrix is then used as the input of the Hungarian algorithm to obtain the cascade matching result, and matched pairs whose distance is too large are removed.
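A minimal sketch of this gated association, using SciPy's Hungarian solver (scipy.optimize.linear_sum_assignment); the attribute names (feature, xyah, mean_xyah, covariance) and both gate values are assumptions layered on the description above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INF = 1e5  # stands in for the "infinite" entries of the gated cost matrix

def cascade_cost(tracks, detections, maha_gate=9.4877, cos_gate=0.3):
    """Appearance cost gated by motion, as described above; 9.4877 is the
    0.95 chi-square quantile for 4 degrees of freedom (illustrative gate)."""
    cost = np.full((len(tracks), len(detections)), INF)
    for i, trk in enumerate(tracks):
        s_inv = np.linalg.inv(trk.covariance)
        for j, det in enumerate(detections):
            d_cos = trk.min_cosine_distance(det.feature)   # appearance association
            diff = det.xyah - trk.mean_xyah                 # motion association
            d_maha = float(diff @ s_inv @ diff)
            if d_maha <= maha_gate and d_cos <= cos_gate:
                cost[i, j] = d_cos                          # gated entries stay "infinite"
    return cost

def hungarian_match(cost):
    rows, cols = linear_sum_assignment(cost)
    # Remove assigned pairs whose gated cost still marks them as too dissimilar.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < INF]
```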
Preferably, in step 7, the IOU calculation method is:
$\mathrm{IOU} = \dfrac{\mathrm{Area}(Bbx_i \cap Bbx_j)}{\mathrm{Area}(Bbx_i \cup Bbx_j)}$

where $\mathrm{Area}(Bbx_i \cap Bbx_j)$ denotes the area of the intersection of the detection target and the tracking target, and $\mathrm{Area}(Bbx_i \cup Bbx_j)$ denotes the area of their union.
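In code, the same quantity for two axis-aligned frames given as [x1, y1, x2, y2] (a direct transcription of the formula):

```python
def iou(bbx_i, bbx_j):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(bbx_i[0], bbx_j[0])
    y1 = max(bbx_i[1], bbx_j[1])
    x2 = min(bbx_i[2], bbx_j[2])
    y2 = min(bbx_i[3], bbx_j[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_i = (bbx_i[2] - bbx_i[0]) * (bbx_i[3] - bbx_i[1])
    area_j = (bbx_j[2] - bbx_j[0]) * (bbx_j[3] - bbx_j[1])
    return inter / (area_i + area_j - inter)
```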
Preferably, in step 8, when calculating the average change of the tracking-frame centre-point coordinates between two consecutive frames of the same tracking ID in the pixel coordinate system, a new vehicle coordinate system is established within the pixel coordinate system and the initial coordinates of the vehicle are set in it. Because an origin is hard to fix directly in the pixel coordinate system, the vehicle coordinate system is laid over the pixel coordinate system with its origin chosen according to the actual situation, which makes the initial coordinates of the vehicle easy to determine.
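As an illustration of step 8 under the step-1 mapping (a constant metres-per-pixel scale and the attribute names centre_now / centre_prev are assumptions, and the sign convention depends on how the vehicle coordinate system is laid out):

```python
import numpy as np

def vehicle_displacement(matched_tracks, metres_per_pixel):
    """Mean centre-point shift of the matched tracking frames between two
    consecutive frames, converted to a world-coordinate vehicle displacement."""
    if not matched_tracks:
        return np.zeros(2)
    deltas = np.stack([t.centre_now - t.centre_prev for t in matched_tracks])
    mean_px = deltas.mean(axis=0)
    # The mark points are fixed on the ground, so in a vehicle-centred surround
    # view their mean image motion mirrors the vehicle's own motion.
    return -mean_px * metres_per_pixel
```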
Compared with the prior art, the invention considers motion information and appearance information simultaneously and associates the two. A convolutional neural network performs feature re-identification on the parking space mark points of each image frame to obtain feature vectors used as the tracking input; cascade matching and IOU matching between the feature vector of each current-frame detection result and the feature vectors stored for each predicted tracking target increase the probability of a successful feature match, alleviate the tracking-ID jumps that occur under occlusion when a tracking target fails to match in the next frame and is re-assigned a new tracking ID as a fresh detection, and improve the accuracy of real-time vehicle positioning, while the panoramic-image positioning method remains low-cost.
Drawings
Fig. 1 is a flowchart of the method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic images according to the present invention.
Fig. 2 is a schematic diagram of parking space mark points being detected by the multi-target detector during automatic parking in the method.
Fig. 3 is a flowchart of the Hungarian algorithm in the method.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting this patent. For better illustration of the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of the actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting this patent.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar parts. In the description of the invention, orientation or position terms such as "upper", "lower", "left", "right", "long" and "short" are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting this patent. The specific meanings of these terms can be understood by those skilled in the art according to the specific situation.
The technical scheme of the invention is further described in detail by the specific embodiments and the accompanying drawings:
example 1
Figs. 1-2 show an embodiment of the method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic images, which comprises the following steps:
Step 1: before the automobile is automatically parked, acquiring the mapping relation between a pixel coordinate distance in the pixel coordinate system of the panoramic surround-view image and the actual coordinate distance in the world coordinate system;
Step 2: while the automobile is automatically parking, detecting, at each sampling interval, the position information and the feature vectors of the parking space mark points of the current frame by using a multi-target detector based on a convolutional neural network;
Step 3: initializing the detector, screening the first detection targets, performing feature re-identification and outputting second detection targets that meet a preset index; wherein the first detection targets comprise the feature vectors of the parking space mark points in step 2;
Step 4: initializing the tracker to track the first tracking targets and determine their tracking states; wherein the first tracking targets comprise the second detection targets in step 3; a tracking ID is generated automatically for each first tracking target when the tracker is initialized; the tracking state of a first tracking target is either a determined tracking state or an uncertain tracking state;
Step 5: predicting the tracking targets with a Kalman filtering algorithm to obtain the predicted tracking targets of the next frame; a first tracking target in the uncertain tracking state corresponds to a first uncertain predicted tracking target in the next frame, and a first tracking target in the determined tracking state correspondingly yields a first determined predicted tracking target;
Step 6: matching the second detection targets with the first determined predicted tracking targets by cascade matching and the Hungarian algorithm, and outputting the first target set generated by the matching; wherein the initial-frame detection results of step 3 do not participate, matching starting from the detection results of the second frame; the first target set comprises the first successfully matched targets formed by second detection targets successfully matched with first determined predicted tracking targets, the third detection targets (those second detection targets not successfully matched) and the second determined predicted tracking targets (those first determined predicted tracking targets not successfully matched);
Step 7: performing IOU matching between the IOU candidate predicted tracking targets and the third detection targets, and outputting the second target set generated by the IOU matching; wherein the IOU candidate predicted tracking targets comprise the first uncertain predicted tracking targets and the second determined predicted tracking targets; the second target set comprises the second successfully matched targets formed by IOU candidates successfully IOU-matched with third detection targets, the second IOU candidate predicted tracking targets (those IOU candidates not successfully IOU-matched) and the fourth detection targets (those third detection targets not successfully IOU-matched);
Step 8: calculating, over the first and second successfully matched targets, the average coordinate change between two consecutive frames of the same tracking ID in the pixel coordinate system, and converting it into the position change of the vehicle in the world coordinate system through the mapping relation between pixel coordinate distance and actual coordinate distance obtained in step 1.
Specifically, the first and second successfully matched targets are output to step 4 to become new tracking targets in the determined tracking state.
Specifically, the fourth detection targets are output to step 4 to become new tracking targets in the uncertain tracking state. The second IOU candidate predicted tracking targets are screened and either deleted or output to step 4 to become new tracking targets: the uncertain predicted tracking targets among the second IOU candidates are deleted, while the second determined predicted tracking targets among them are screened further. A second determined predicted tracking target whose number of consecutive unmatched frames does not exceed the threshold is output to step 4 to become a new tracking target in the determined tracking state; one whose count exceeds the threshold is deleted, the threshold being 100.
The beneficial effects of this embodiment: the invention considers and associates motion information and appearance information. A convolutional neural network performs feature re-identification on the parking space mark points of each image frame to obtain feature vectors used as the tracking input; cascade matching and IOU (intersection over union) matching between the feature vector of each current-frame detection result and the feature vectors stored for each predicted tracking target increase the probability of a successful feature match, alleviate the tracking-ID jumps that occur under occlusion when a tracking target fails to match in the next frame and is re-assigned a new tracking ID as a fresh detection, improve the accuracy of real-time vehicle positioning, and keep the cost of the panoramic-image positioning method low.
The first and second successfully matched targets are output to step 4 to become new tracking targets in the determined tracking state, completing the update iteration of the tracking information. A container is built for each tracking target to store the feature vectors of the most recent 100 frames successfully associated with it, which raises the probability of a successful match, increases the number of successfully matched targets, and makes the vehicle position change reflected by the average two-frame coordinate change of each tracking ID more accurate.
Example 2
Example 2 of the method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic images further defines steps 2 to 3 on the basis of example 1, as shown in figs. 1-2.
Specifically, in step 2, the YOLO v5 algorithm is used as the multi-target detector to detect all parking space mark points on the panoramic surround-view image simultaneously and obtain the position information of the detection frames.
Specifically, in step 3, the position information, confidence and feature vector of each detection target are generated automatically during detector initialization; each detection target is selected and represented by its detection frame. Screening the detection targets comprises removing detection frames whose confidence is below a threshold, and applying a non-maximum suppression algorithm to eliminate multiple detection frames on a single target.
The beneficial effects of this embodiment: YOLO is fast, generalizes well and has a low background false-positive rate; YOLO v3 added multi-scale prediction with a feature pyramid structure, which works well for small targets such as mark points, and YOLO v5 builds on this with adaptive anchor computation and adaptive grey-border padding, greatly improving both accuracy and speed. The non-maximum suppression algorithm suppresses non-maximum candidates in order to retain the locally maximal target.
Example 3
Example 3 of the method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic images further defines steps 6 to 8 on the basis of example 1 or example 2.
Specifically, in step 6, the cascade matching comprises motion-information association and appearance-feature association. Appearance-feature association computes the minimum cosine distance matrix between the feature vector of each detection result of the current frame and the feature vector set stored for each determined predicted tracking target. Motion-information association computes the Mahalanobis distance between each determined predicted tracking target and each detection result, and sets the corresponding entry of the cosine distance matrix to infinity wherever the Mahalanobis distance exceeds a threshold. The Mahalanobis distance between a predicted tracking target and a detection result is:
$d(i,j) = (d_j - y_i)^{\mathrm{T}} S_i^{-1} (d_j - y_i)$

where $d_j$ denotes the position of the j-th detection frame, $y_i$ denotes the target position predicted by the i-th tracker, and $S_i$ denotes the covariance matrix between the detected position and the mean tracked position.
The processed cosine distance matrix is then used as the input of the Hungarian algorithm to obtain the cascade matching result, and matched pairs whose distance is too large are removed.
Specifically, in step 7, the IOU calculation method is:
$\mathrm{IOU} = \dfrac{\mathrm{Area}(Bbx_i \cap Bbx_j)}{\mathrm{Area}(Bbx_i \cup Bbx_j)}$

where $\mathrm{Area}(Bbx_i \cap Bbx_j)$ denotes the area of the intersection of the detection target and the tracking target, and $\mathrm{Area}(Bbx_i \cup Bbx_j)$ denotes the area of their union.
Specifically, the successful matches from step 6 and the successful IOU matches from step 7 are output to step 4 as tracking targets in the determined tracking state, completing the update iteration of the tracking information. For a predicted tracking target that fails to match, the threshold on the number of consecutive unmatched frames is 100.
Specifically, when calculating in step 8 the average change of the tracking-frame centre-point coordinates between two consecutive frames of the same tracking ID in the pixel coordinate system, a new vehicle coordinate system is established within the pixel coordinate system and the initial coordinates of the vehicle are set in it.
The beneficial effects of this embodiment: computing the minimum cosine distance matrix between the feature vector of each current-frame detection result and the feature vector set stored for each predicted tracking target, computing the Mahalanobis distance matrix between the predicted tracking targets and the detection results, and feeding their linearly weighted combination to the Hungarian matching algorithm makes the matching result more accurate. The vehicle coordinate system is laid over the pixel coordinate system with its origin chosen according to the actual situation, which makes the initial coordinates of the vehicle easy to determine and avoids the difficulty of fixing an origin directly in the pixel coordinate system.
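The linear weighting mentioned here can be read as the usual convex combination of the two distance matrices before assignment (a sketch; the weight is illustrative, and some DeepSORT-style trackers set it to zero so that the Mahalanobis term acts only as a gate):

```python
def combined_cost(d_maha, d_cos, lam=0.1):
    """Linearly weighted motion/appearance cost fed to the Hungarian solver.
    d_maha, d_cos: equally shaped cost matrices; lam is an assumed weight."""
    return lam * d_maha + (1.0 - lam) * d_cos
```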
It should be understood that the above-described embodiments are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the invention.

Claims (10)

1. A method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image, characterized by comprising the following steps:
Step 1: before the automobile is automatically parked, acquiring the mapping relation between a pixel coordinate distance in the pixel coordinate system of the panoramic surround-view image and the actual coordinate distance in the world coordinate system;
Step 2: while the automobile is automatically parking, detecting, at each sampling interval, the position information and the feature vectors of the parking space mark points of the current frame by using a multi-target detector based on a convolutional neural network;
Step 3: initializing the detector, screening the first detection targets, performing feature re-identification and outputting second detection targets that meet a preset index, wherein the first detection targets comprise the feature vectors of the parking space mark points in step 2;
Step 4: initializing the tracker to track the first tracking targets and determine their tracking states, wherein the first tracking targets comprise the second detection targets in step 3, a tracking ID is generated automatically for each first tracking target when the tracker is initialized, and the tracking state of a first tracking target is either a determined tracking state or an uncertain tracking state;
Step 5: predicting the tracking targets with a Kalman filtering algorithm to obtain the predicted tracking targets of the next frame, wherein a first tracking target in the uncertain tracking state corresponds to a first uncertain predicted tracking target in the next frame and a first tracking target in the determined tracking state corresponds to a first determined predicted tracking target;
Step 6: matching the second detection targets with the first determined predicted tracking targets by cascade matching and the Hungarian algorithm and outputting the first target set generated by the matching, wherein the initial-frame detection results of step 3 do not participate and matching starts from the detection results of the second frame, and the first target set comprises the first successfully matched targets formed by second detection targets successfully matched with first determined predicted tracking targets, the third detection targets, namely the second detection targets not successfully matched, and the second determined predicted tracking targets, namely the first determined predicted tracking targets not successfully matched;
Step 7: performing IOU matching between the IOU candidate predicted tracking targets and the third detection targets and outputting the second target set generated by the IOU matching, wherein the IOU candidate predicted tracking targets comprise the first uncertain predicted tracking targets and the second determined predicted tracking targets, and the second target set comprises the second successfully matched targets formed by IOU candidates successfully IOU-matched with third detection targets, the second IOU candidate predicted tracking targets, namely the IOU candidates not successfully IOU-matched, and the fourth detection targets, namely the third detection targets not successfully IOU-matched;
Step 8: calculating, over the first and second successfully matched targets, the average coordinate change between two consecutive frames of the same tracking ID in the pixel coordinate system, and converting it into the position change of the vehicle in the world coordinate system through the mapping relation between pixel coordinate distance and actual coordinate distance obtained in step 1.
2. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein the first successfully matched targets and the second successfully matched targets are output to step 4 to become new tracking targets in the determined tracking state.
3. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein the fourth detection targets are output to step 4 to become new tracking targets in the uncertain tracking state; the second IOU candidate predicted tracking targets are screened and either deleted or output to step 4 to become new tracking targets; the uncertain predicted tracking targets among the second IOU candidate predicted tracking targets are deleted; and the second determined predicted tracking targets among the second IOU candidate predicted tracking targets are screened.
4. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 3, wherein, when the second determined predicted tracking targets among the second IOU candidate predicted tracking targets are screened, a second determined predicted tracking target whose number of consecutive unmatched frames does not exceed a threshold is output to step 4 to become a new tracking target in the determined tracking state, and a second determined predicted tracking target whose number of consecutive unmatched frames exceeds the threshold is deleted.
5. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein, in step 2, the YOLO v5 algorithm is used as the multi-target detector to detect the parking space mark points on the panoramic surround-view image simultaneously and obtain the position information of the detection frames.
6. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein, in step 3, the detector automatically generates the position information and confidence of each detection target during initialization, each detection target being selected and represented by its detection frame.
7. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein, in step 3, the screening of the detection targets comprises removing detection frames whose confidence is below a threshold and applying a non-maximum suppression algorithm to eliminate multiple detection frames on a single detection target.
8. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 1, wherein, in step 6, the cascade matching comprises motion-information association and appearance-feature association.
9. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 8, wherein the appearance-feature association computes the minimum cosine distance matrix between the feature vector of each detection result of the current frame and the feature vector set stored for each determined predicted tracking target.
10. The method for detecting vehicle location mark points and tracking and positioning vehicles based on a panoramic image according to claim 8, wherein the motion-information association computes, for the cosine distance matrix, the Mahalanobis distance between each determined predicted tracking target and each detection result, and sets to infinity every entry of the cosine distance matrix whose corresponding Mahalanobis distance exceeds a threshold.
CN202211356444.XA 2022-11-01 2022-11-01 Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image Pending CN115761693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211356444.XA CN115761693A (en) 2022-11-01 2022-11-01 Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image


Publications (1)

Publication Number Publication Date
CN115761693A (en) 2023-03-07

Family

ID=85355067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211356444.XA Pending CN115761693A (en) 2022-11-01 2022-11-01 Method for detecting vehicle location mark points and tracking and positioning vehicles based on panoramic image

Country Status (1)

Country Link
CN (1) CN115761693A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117553756A (en) * 2024-01-10 2024-02-13 中国人民解放军32806部队 Off-target amount calculating method, device, equipment and storage medium based on target tracking
CN117553756B (en) * 2024-01-10 2024-03-22 中国人民解放军32806部队 Off-target amount calculating method, device, equipment and storage medium based on target tracking


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination