CN116343522B - Intelligent park unmanned parking method based on space-time vehicle re-identification - Google Patents

Intelligent park unmanned parking method based on space-time vehicle re-identification

Info

Publication number
CN116343522B
CN116343522B
Authority
CN
China
Prior art keywords
vehicle
parking
parking space
space
lamp post
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310130800.4A
Other languages
Chinese (zh)
Other versions
CN116343522A (en)
Inventor
朱忠攀
张智淋
何斌
龚哲飞
张朋朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202310130800.4A priority Critical patent/CN116343522B/en
Publication of CN116343522A publication Critical patent/CN116343522A/en
Application granted granted Critical
Publication of CN116343522B publication Critical patent/CN116343522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/14Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/141Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces
    • G08G1/142Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces external to the vehicles
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to an intelligent park unmanned parking method based on space-time vehicle re-identification, comprising the following steps: acquiring image key-frame information of a target vehicle and extracting license plate feature information; recommending an optimal parking space for the target vehicle based on a path planning algorithm, constructing a path based on the unique point positions of the lamp posts, and guiding the vehicle to park; performing re-identification association processing with a vehicle re-identification network based on the image key-frame information and license plate feature information, and generating the target vehicle's driving track by combining the vehicle appearance features, each lamp post's geographic tag, and the timestamps; detecting parking spaces with a lightweight key-point parking space detection algorithm, judging the match between vehicle and parking space by combining the lamp post and the ground lock, and, if the match succeeds, judging whether the vehicle parks autonomously and whether the parking is standard; when the target vehicle exits the parking space, counting the vehicle's parking time, calculating the parking fee, and completing seamless (sensorless) payment. Compared with the prior art, the invention has the advantages of high vehicle positioning precision and strong practicability.

Description

Intelligent park unmanned parking method based on space-time vehicle re-identification
Technical Field
The invention relates to the technical field of unmanned and vehicle re-identification, in particular to an intelligent park unmanned parking method based on space-time vehicle re-identification.
Background
The development of intelligent connected vehicles is rapidly changing how people will travel and live. With the continuous spread of unmanned-driving technology, intelligent vehicles of level L3 and above are gradually entering the market, landing first in scenarios such as closed parks. As sales of intelligent vehicles rise, vehicles with autonomous parking capability will coexist with conventionally driven vehicles for a long time, and the parking lot of a closed park, as a typical venue, must keep pace with this development. At present, the intelligent upgrading of parking lots around unmanned-driving demands has not been fully studied. Relying on license plate recognition alone clearly cannot serve more complex environments and more intelligent scenarios. How to cope with the autonomous, random maneuvers of human drivers while making full use of public parking space management is the key challenge to be solved. Specifically, the prior art has the following drawbacks:
1) It is difficult to cope with automatic parking of an unmanned vehicle;
2) The vehicle positioning precision is low, and high-precision automatic parking is difficult to realize;
3) Part of the technology depends on a parking lot structure and large-scale supporting equipment, and the practicability is poor;
4) There is no overall planning scheme for unmanned parking; most work targets only one or a few components, making fully automatic unmanned parking difficult to realize.
Disclosure of Invention
The invention aims to provide an intelligent park unmanned parking method based on space-time vehicle re-identification, which improves the vehicle positioning accuracy and the feasibility of unmanned parking.
The aim of the invention can be achieved by the following technical scheme:
An intelligent park unmanned parking method based on space-time vehicle re-identification comprises the following steps:
Step S1: when a target vehicle arrives at a parking lot entrance, acquiring first image key frame information of the target vehicle shot by a camera arranged on a lamp post at the entrance, and extracting license plate characteristic information;
Step S2: after the parking lot entrance gate releases the target vehicle, recommending an optimal parking space for the target vehicle based on a path planning algorithm, and constructing a path based on a unique point position of the lamp post to guide the vehicle to stop;
Step S3: invoking a plurality of lamp posts distributed in the intelligent park to acquire second image key frame information in the running process of the target vehicle, and based on the first and second image key frame information and license plate characteristic information, performing re-identification association processing by utilizing a vehicle re-identification network PVRS, generating a running track of the target vehicle by combining the appearance characteristics of the vehicle, geographic tags of the lamp posts and time stamps, calculating the average running speed of the vehicle, and monitoring the running route in real time;
Step S4: detecting a parking space based on a lightweight key point parking space detection algorithm, and matching and judging the vehicle and the parking space by combining the lamp post and the ground lock, if the matching is successful, completing the autonomous parking action of the vehicle and judging whether the parking is standard or not;
Step S5: when the local lock detects that the target vehicle exits the parking space, the parking time of the vehicle is counted, the parking cost is calculated, and the non-inductive payment is realized.
The license plate feature information extraction is realized with a single-step target detection network based on YOLOv5, which improves the YOLOv5 network as follows: the Backbone is replaced with EfficientNet; a data enhancement method of rotation/reflection transformation and noise disturbance is adopted; the Mixup enhancement method is added on top of Mosaic; an adaptive feature fusion (ASFF) layer is added to perform weighted fusion of features at different levels; and the network is trained on image samples covering multiple situations as the training set, including different lighting conditions, different weather, and different angles.
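The Mixup augmentation added on top of Mosaic blends two training samples linearly while keeping the labels of both; a minimal sketch on flattened pixel lists follows (the Beta-distributed blend ratio and the label-concatenation convention are assumptions, as the text gives no hyper-parameters):

```python
import random

def mixup(img_a, labels_a, img_b, labels_b, alpha=8.0):
    """Blend two flattened images pixel-wise; keep the boxes of both samples."""
    lam = random.betavariate(alpha, alpha)        # blend ratio in (0, 1)
    mixed = [lam * pa + (1.0 - lam) * pb for pa, pb in zip(img_a, img_b)]
    return mixed, labels_a + labels_b             # detection labels concatenate
```

Mosaic stitching would be applied first; Mixup then blends two already-augmented samples, which is the ordering the text implies.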
The step S2 includes the steps of:
Step S21: after the target vehicle is released by the gate at the parking lot entrance, acquire the park parking space state sequence X(n) = {x_1, x_2, …, x_n}, where n represents the total number of park parking spaces and x_i represents the state score of the parking space numbered i: when space i is empty, x_i takes 1; when space i is not empty, x_i takes a floating-point number ε with 0 < ε < 1;
Step S22: calculating a parking space evaluation function f (i):
f(i) = x_i · (y_i + z_i)
where y_i represents the parking space distance evaluation score (the shorter the distance from the entrance, the higher the score), and z_i represents the parking difficulty evaluation score of the space, obtained by weighted calculation over the space's own attributes and the attributes of its surroundings;
Step S23: according to a parking space evaluation function f (i), maintaining a priority queue pq formed by all parking spaces, wherein the evaluation function value is large and is close to the head of a team, when the optimal parking space is required to be recommended, the head element of the team is the optimal parking space besti, the starting point is set as an entrance, the end point is set as the optimal parking space, and the optimal parking space is used as the input of a single-source path planning algorithm based on a Dijkstra algorithm;
Step S24: and (3) binding a lamp post beside the road to a point which is close to the lamp post and is positioned in the road by utilizing the uniqueness of the geographical position of the lamp post, using the lamp post as a marking point of the lamp post, calculating the shortest path by using the Dijkstra algorithm according to the marking point of the lamp post and the communication relation between the marking points as the vertex and the edge in the undirected graph, and finishing path planning and guiding the vehicle to stop.
The vehicle re-identification network PVRS includes a self-supervising attention vehicle appearance identification sub-module, a license plate verification sub-module, and a reordering sub-module based on spatiotemporal association information.
The self-supervision attention vehicle appearance recognition submodule carries out feature extraction on the image key frame information based on a self-supervision attention mechanism to obtain vehicle appearance features, and the feature extraction is completed based on self-supervision residual error generation and deep feature extraction, and specifically comprises the following steps:
Step S311: self-supervision residual error generation: the improved VAE architecture is used, the input image is subjected to downsampling through maximum pooling, dimensionality is reduced, the input image is subjected to re-parameterization through mean and covariance of potential features, namely an automatic variable encoder, the potential feature mapping is subjected to upsampling, image reconstruction is carried out, the re-modeling type is pre-trained by using mean square error and KL divergence in the process, and a loss function formula is expressed as follows:
Lconstruct=Lmse+θLkl
Wherein θ is used to adjust the weight ratio of the mean square error and the KL divergence, L mse is the mean square error loss, L kl is the KL divergence loss, and L construct is the loss function of the reconstruction model;
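The pre-training loss L_construct = L_mse + θ·L_kl can be written out numerically as below (a plain-Python sketch; the closed-form KL term assumes the usual diagonal-Gaussian posterior against a standard normal prior, which the text does not spell out):

```python
import math

def mse_loss(x, x_hat):
    """Mean square error between input and reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def kl_loss(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions."""
    return -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                      for m, lv in zip(mu, log_var))

def construct_loss(x, x_hat, mu, log_var, theta=0.5):
    """L_construct = L_mse + theta * L_kl; theta balances the two terms."""
    return mse_loss(x, x_hat) + theta * kl_loss(mu, log_var)
```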
Step S312: deep feature extraction: projecting the vehicle image into a low-dimensional vector space using a single-branch ResNet-50 feature extraction network, and preserving features that effectively characterize the identity of the vehicle;
Step S313: weighting the original image and its residuals using learnable parameters allows the feature extraction network to weight the importance of each input source, where the total loss function is formulated as follows:
L=Ltriplet+Lcrossentropy+μLconstruct
Wherein L triplet represents a triplet loss function, L crossentropy represents a cross entropy loss function, L construct represents a loss function of the reconstruction model, μ is a preconfigured tuning parameter;
in the triplet loss function,
L_triplet = Σ_{i=1}^{B} Σ_{a∈B_i} max(0, offset + max_{s∈P(a)} Euclid(a, s) − min_{s∈N(a)} Euclid(a, s))
where, for a given anchor a, B represents the total number of batches, B_i represents the i-th batch, s represents another sample, offset represents the distance margin threshold, P(a) and N(a) represent the positive and negative sample sets, respectively, and Euclid(·,·) represents the Euclidean distance between two samples;
in the cross-entropy loss function,
L_crossentropy = −(1/N) Σ_{i=1}^{N} log( exp(W_{y_i}^T f_i + d_{y_i}) / Σ_{j=1}^{C} exp(W_j^T f_i + d_j) )
where f_i and y_i are the feature extracted from the i-th image in the training set after the BN Neck layer and its corresponding ground-truth label, W_j and d_j are the weight vector and bias associated with category j in the final classification layer, and N and C represent the total number of samples and the number of categories in training, respectively;
Step S314: and adding the Euclidean distances of the extracted vehicle appearance features, and obtaining a similarity list according to the similarity degree to realize the extraction of the vehicle appearance features.
The license plate verification sub-module adopts a Siamese neural network (SNN) to realize license plate verification: the SNN uses two CNNs with identical network structure, sharing the same weights in both forward and backward computation; each CNN consists of two convolution layers, a pooling layer, and three fully connected layers. The contrastive loss function is
L_contrast = (1/2N) Σ_{i=1}^{N} [ y_i · Euclid(x_1, x_2)² + (1 − y_i) · max(0, hp − Euclid(x_1, x_2))² ]
with Euclid(x_1, x_2) = ||x_1 − x_2||
where x_1 and x_2 represent the feature vectors of the input samples x_1 and x_2 after forward propagation, Euclid(x_1, x_2) represents the Euclidean distance between the two feature vectors, used to quantify their similarity, y_i indicates whether the pair shows the same license plate, and hp is a hyper-parameter (the margin).
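A minimal numeric sketch of the verification loss for a single pair, assuming the standard contrastive-loss form with hp as the margin (the pair label argument `same` is an assumption; the text only names the Euclidean distance and the hyper-parameter hp):

```python
import math

def euclid(x1, x2):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

def contrastive_loss(x1, x2, same, hp=1.0):
    """same = 1 if both inputs show the same license plate, else 0;
    hp is the margin hyper-parameter from the text."""
    d = euclid(x1, x2)
    return same * d ** 2 + (1 - same) * max(0.0, hp - d) ** 2
```

Matching pairs are pulled together (loss grows with distance); non-matching pairs are pushed apart until their distance exceeds the margin hp.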
The reordering sub-module based on spatiotemporal association information screens candidates in descending order of the spatiotemporal similarity
S(a, b) = μ · (1 − |T_a − T_b| / T_max) + (1 − μ) · (1 − dist(L_a, L_b) / D_max)
where a and b respectively represent two images shot by the cameras of lamp posts L_a and L_b, T_a and T_b respectively represent the timestamps of the two images, T_max represents the maximum timestamp over all image frames shot by all park cameras within a time period T, dist(L_a, L_b) represents the physical distance between the two lamp post cameras, D_max represents the maximum physical distance between any two cameras in the park, and μ represents a weight factor with value in (0, 1): the closer μ is to 1, the more the spatiotemporal similarity leans toward time similarity; the closer to 0, the more it leans toward spatial similarity.
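Under the reading above — a weighted blend of a time-similarity term and a space-similarity term — the score can be sketched as follows (the linear form is an assumption consistent with the stated roles of T_max, D_max, and μ):

```python
def st_similarity(t_a, t_b, t_max, d_ab, d_max, mu=0.5):
    """Spatio-temporal similarity of two camera sightings.
    mu in (0, 1): near 1 weights time similarity, near 0 weights space."""
    time_sim = 1.0 - abs(t_a - t_b) / t_max    # close in time -> near 1
    space_sim = 1.0 - d_ab / d_max             # close in space -> near 1
    return mu * time_sim + (1.0 - mu) * space_sim
```

Candidates from the appearance/plate stages would then be sorted in descending order of this score before the final ranking is emitted.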
The step S4 includes the steps of:
Step S41: the light-weight key point parking space detection algorithm is adopted to detect the parking space, and specifically comprises the following steps: training a lightweight convolutional neural network with a back bone of MobileNet-v3, and carrying out regression prediction on key points of the parking space; carrying out parking space edge detection on an image frame shot by a lamp post camera by using a Canny operator; taking the regressive key points as vertexes, extending edge sides obtained by the Canny operator, taking intersection points of the extended edges as new vertexes if the number of the predicted key points is less than that of the actual key points of the parking space, otherwise, only extending edges to finish the detection of the parking space;
step S42: the ground lock terminal acquires the current matched vehicle of each parking space;
step S43: the ground lock in a lifted state detects whether the vehicle enters the current parking space range based on ultrasonic ranging, and if so, a camera corresponding to the lamp post is called to shoot the license plate of the vehicle;
Step S44: calling a license plate verification submodule to identify license plate information of a target vehicle, acquiring an identification result of a vehicle re-identification network by a lamp post matched with a ground lock, carrying out matching detection on a vehicle attempting to stop in the parking space, and transmitting detection information into a ground lock controller;
Step S45: the ground lock controller matches the detection information with the current matched vehicle of the parking space in the step S42: if the matching is successful, the ground lock descends, the vehicle stops autonomously, and the vehicle stops into a parking space; otherwise, the ground lock keeps a lifting state and saves the matching failure information;
step S46: in the process of successfully parking the vehicle into the matched parking space, the camera of the intelligent lamp post shoots image frames, judges whether parking is standard or not based on the parking space detection result in the step S41 and sends feedback information to the user terminal;
Step S47: after the vehicle is parked in the parking space, the state of the parking space is updated, and the current time stamp is recorded and used as the parking time stamp.
The ultrasonic ranging is calculated as:
L = T_flight × v_sound / 2
where L represents the distance from the target to the transceiver, T_flight represents the ultrasonic time of flight, and v_sound represents the propagation speed of sound in air.
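The ranging formula translates directly to code (343 m/s as the speed of sound in air at roughly 20 °C is an assumed default; the patent does not fix a value):

```python
def ultrasonic_distance(t_flight, v_sound=343.0):
    """Distance L = t_flight * v_sound / 2: the echo travels out and back,
    so the one-way distance is half the round-trip path."""
    return t_flight * v_sound / 2.0
```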
The step S5 includes the steps of:
Step S51: detecting whether the vehicle exits the vehicle space based on ultrasonic ranging, if so, lifting the ground lock, sending an exiting signal to the intelligent lamp post, recording the current timestamp as an exiting timestamp, and updating the vehicle space state;
Step S52: calculating the parking cost of the target vehicle according to the difference between the driving-out time stamp and the parking cost calculation rule;
step S53: when the intelligent lamp post receives the running-out signal from the ground lock, the re-identification network is called again to track and detect the running track of the vehicle until the vehicle runs out of the parking lot;
Step S54: and sending the parking fee to the car owner corresponding to the license plate of the target car through the mobile communication network, deducting the fee from the bound electronic payment account and sending a fee deduction notice to realize the noninductive payment.
Compared with the prior art, the invention has the following beneficial effects:
1) High vehicle positioning accuracy: the vehicle re-identification adopts a coarse-to-fine screening process, from appearance features to license plate information to spatiotemporal association matching, enabling accurate tracking of the vehicle's driving track.
2) Strong scheme integrity: against the broader unmanned-driving background, the method covers vehicle re-identification, unmanned path planning, license plate recognition, parking space recognition, timing, and other technical areas; the scheme is complete and provides a feasible strategy and method for the commercialization of unmanned driving.
3) Strong practicability: the method does not depend on a specific parking lot structure and can be applied simply by deploying smart lamp post modules covering all parking spaces in the target park.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a global layout illustration of the present invention;
FIG. 3 is a diagram of a YOLOv5-based license plate recognition network according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of a distribution of marker points based on the geographic location of a light pole in an embodiment of the present invention;
FIG. 5 is a flow diagram of a vehicle re-identification network in one embodiment of the invention;
FIG. 6 is a diagram of a self-supervising attentive vehicle appearance recognition network in accordance with one embodiment of the present invention;
FIG. 7 is a diagram of a twin neural network in a license plate verification sub-module according to one embodiment of the present invention;
FIG. 8 is a schematic diagram of identifying a parking space based on keypoint detection and edge detection in an embodiment of the present invention;
fig. 9 is a block diagram of the overall design of a ground lock smart device in one embodiment of the invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
The embodiment provides an intelligent park unmanned parking method based on space-time vehicle re-identification, and the applied system architecture mainly comprises four modules: the system comprises a vehicle feature extraction module, a vehicle re-identification feature association module, a parking space identification and parking scheduling module and a parking timing non-inductive payment module.
The vehicle feature extraction module is an intelligent terminal module with synchronized timestamps, deployed at edge positions such as the smart lamp posts and the entrance/exit gates. It comprises a video data port and a mounting interface adapted to cameras of different brands (such as Hikvision and Dahua). A HiSilicon AI chip processes the camera video streams in real time to obtain key-frame image information and extract features such as vehicle brand, vehicle type, body color, and license plate; communication between modules uses a 5G Sub-6 GHz chip, and terminal positioning uses a Unicore NebulasIV UC9810 BeiDou chip. The vehicle feature extraction module specifically comprises a license plate feature extraction sub-module at the entrance gate and a vehicle appearance feature extraction sub-module for the re-identification stage.
The vehicle re-identification feature association module is deployed in the back-end server system and performs vehicle re-identification association processing based on key frame images and extracted features transmitted by intelligent terminal modules distributed in a park. And constructing a vehicle re-identification database under the intelligent camera at the side based on the cloud storage unit.
The parking space recognition and parking scheduling module is deployed at the position of the intelligent lamp post and the rear end server system by adopting a cloud edge cooperative framework, and the edge side detects the parking space through a lightweight key point detection algorithm.
The parking timing seamless payment module is deployed on the back-end server system and the ground lock module; the smart ground lock and the back-end system cooperate to complete parking space state monitoring, parking time counting, and parking fee calculation, and the result is sent to the user terminal through the communication module to complete the seamless payment.
Specifically, as shown in fig. 1 and 2, the method comprises the following steps:
step S1: when a target vehicle arrives at the entrance of a parking lot, a license plate feature extraction submodule arranged at the entrance gate acquires first image key frame information of the target vehicle through a camera in an intelligent lamp post, the first image key frame information is transmitted to a rear end module through a communication module, license plate feature information extraction is completed by a license plate extraction submodule of a rear end system, and basic information of the target vehicle is stored in a rear end database.
Step S11: network configuration and preparation dataset of a single-step target detection network based on YOLOv are modified, EFFICIENTNET is preferably adopted as a backbone network model, the overall operation efficiency of the model is improved, and the model structure is shown in figure 3; the adaptation capability of the model is enhanced by adopting a data enhancement method of rotation/reflection transformation and noise disturbance; on the basis of mosaics, mixup enhancement methods are added to improve the quality of data samples; adding a self-adaptive feature fusion ASFF layer to carry out feature weighted fusion on different levels; image samples covering multiple situations are used as training sets, including different lighting conditions, different weather, different angles.
Step S12: and (3) after the processing in the step (S11), utilizing the data set training network after data enhancement to obtain a license plate information recognition model. As shown by the dashed box of the backhaul in fig. 3, the backbone network replacement has fewer parameters and higher operating efficiency for EFFICIENTNET, EFFICIENTNET compared to the backhaul of the YOLOv5 model; as shown by a Neck dotted line box in fig. 3, the last three downsampling results in the backup are input to Neck layers for processing; as shown by the Prediction dashed box in fig. 3, the Prediction section completes three-scale Prediction.
It should be noted that, the EFFICIENTNET network includes multiple versions of B0 to B7, and in the above embodiment, only the B0 version is used for assistance in explanation, and embodiments that use other versions of EFFICIENTNET networks as backup alternatives are also within the scope of the present invention.
Step S13: and C, deploying the license plate information recognition model obtained by training in the step S12 in a back-end system, when a smart lamp post module positioned at an entrance gate of a parking lot sends out a license plate information recognition function request, inputting key frame image information transmitted by the smart lamp post module into the license plate information recognition model by the back-end system, calling, and storing a recognition result in a database.
Step S2: and after the parking lot entrance gate releases the target vehicle, the rear end system calls an optimal parking space recommending sub-module and a path planning sub-module, recommends an optimal parking space for the target vehicle based on a path planning algorithm, and constructs a path based on a unique point of the lamp post to guide the vehicle to stop. Specifically, the method comprises the following steps:
Step S21: after a target vehicle is released by a gate at the entrance of a parking lot, acquiring a park parking space state sequence X (n) = { X 1,x2,…,xn }, wherein n represents the total number of park parking spaces, X i represents the state score of a parking space with a number i, and when the parking space i is an empty parking space, X i takes 1; when the parking space i is not empty, x i takes a floating point number epsilon small enough compared with 1, 0< epsilon < 1.
Step S22: calculating a parking space evaluation function f (i):
f(i) = x_i · (y_i + z_i)
where y_i represents the parking space distance evaluation score (the shorter the distance from the entrance, the higher the score), and z_i represents the parking difficulty evaluation score of the space, obtained by weighted calculation over the space's own attributes and the attributes of its surroundings.
Step S23: according to the parking space evaluation function f (i), a priority queue pq formed by all parking spaces is maintained, the evaluation function value is large and is close to the head of the queue, preferably, when a parked vehicle drives out of the parking space, the updated parking space state is obtained from the rear end, and x i and z i are updated, namely, the arrangement of elements in the priority queue is updated, so that the dynamic recommendation of the optimal parking space is realized. When the optimal parking space is required to be recommended, the first element of the team is the optimal parking space besti, the starting point is set as an entrance, the end point is set as the optimal parking space, and the first element is used as input of a single-source path planning algorithm based on a Dijkstra algorithm.
Step S24: and (3) binding a lamp post beside a road to a point which is close to the lamp post and is positioned in the road by utilizing the uniqueness of the geographical position of the lamp post, and pre-introducing the lamp post marking point and the communication relation between the marking points as the vertex and the side in the undirected graph when the rear-end system is deployed, and calculating the shortest path by using the Dijkstra algorithm to finish path planning and guide the vehicle to stop. Since each step of Dijkstra algorithm can calculate the shortest distance to all other points in the graph, when the optimal parking space is changed, a new planning path can be directly obtained.
The steps S21-S23 are completed by the optimal parking space recommending submodule, and the step S24 is completed by the path planning submodule.
In this embodiment a specific implementation is given. FIG. 4 shows the distribution of lamp post cameras, marking points, and building parking spaces in the park: buildings No. 1 to No. 4 have 4 parking spaces each, building No. 21 has none, and the other buildings have 2 each; 28 lamp posts are deployed along the park road, the yellow points in the figure being the marking points, each close to its roadside lamp post and located in the road; 12 lamp posts are not located beside the park road and therefore generate no marking points.
Let the total number of parking spaces in the park be n and the number of lamp posts m. The correspondence between parking spaces and lamp posts is built according to whether a lamp post camera can shoot a given space: assuming lamp posts l_h, …, l_i (h ≤ i) can shoot parking spaces p_j, …, p_k (j ≤ k), then corres_lamppost({p_j, …, p_k}) = {l_h, …, l_i}, where the corres_lamppost function maps a set of parking spaces to its corresponding set of lamp posts. Set the parking space state sequence X(n) = {x_1, x_2, …, x_n}, where x_i represents the state score of the space numbered i, taking 1 when space i is empty and a floating-point ε (0 < ε < 1), sufficiently small compared with 1, when it is not. Let y_i represent the parking space distance evaluation score (the smaller the distance from the entrance, the higher the score) and z_i the space's parking difficulty evaluation score, obtained by weighted calculation over the space's own attributes and the attributes of its surroundings, including but not limited to giving different scores to parallel, perpendicular, and angled spaces, lowering the scores of spaces with occupied neighbours, and so on.
All lamp posts in the park are bound to nearby marker points located within the road, m in total, giving a marker-point set D = {d_1, d_2, …, d_m}. The entrance marker point is defined as d_0, and the marker point corresponding to the optimal parking space p_best is denoted d_best. The spatial distance between any two marker points can be computed from their geographic positions, so the pairwise distances within D, the start point d_0 and the end point d_best define a single-source shortest-path problem solved with Dijkstra's algorithm. An empty queue S and an empty ascending priority queue U are introduced: S stores the marker points whose shortest path has been found, together with the corresponding shortest path length; U stores the marker points whose shortest path has not yet been found, with their current path lengths. The specific steps are as follows:
1) Store the start point d_0 in S and the remaining marker points in U, recording the distance from each marker point in U to the start point (infinity if unreachable);
2) Remove the head of U, i.e. the marker point d_min with the shortest current path, append it to the tail of S, and update the distances from the remaining marker points in U to the start point;
3) Repeat step 2) until the priority queue U is empty, i.e. all marker points have been traversed.
Dequeuing all elements of S in order yields the shortest distance from the start point to every marker point; the shortest distance to the optimal parking space is obtained simply by reading off the shortest path length at the end point d_best.
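The steps above can be sketched with a heap as the ascending priority queue U and a settled set as the queue S; the marker-point graph and edge lengths below are hypothetical.

```python
import heapq

# Sketch of the single-source shortest path (Dijkstra) over lamp-post marker
# points, following steps 1)-3) above. The graph and distances are hypothetical.
def dijkstra(adj, start):
    """adj: {node: [(neighbor, edge_length), ...]}. Returns {node: shortest dist}."""
    dist = {start: 0.0}
    u = [(0.0, start)]            # ascending priority queue U
    settled = set()               # plays the role of queue S (finalized points)
    while u:
        d, node = heapq.heappop(u)
        if node in settled:
            continue              # stale entry: a shorter path was already settled
        settled.add(node)
        for nbr, w in adj.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(u, (nd, nbr))
    return dist

# d0 is the entrance marker point; edges carry spatial distances in meters.
adj = {"d0": [("d1", 40), ("d2", 25)],
       "d2": [("d1", 10), ("d3", 30)],
       "d1": [("d3", 50)],
       "d3": []}
shortest = dijkstra(adj, "d0")
# shortest["d3"] reads off the distance to the marker of the best space
```

Once `shortest` is computed for the fixed start point, changing only the end point (as the next paragraph suggests) is a dictionary lookup, not a recomputation.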
In addition, when the parking-space state changes at some moment, including but not limited to another parked vehicle driving out of a space, the state must be updated and the parking-space evaluation function recomputed, and the optimal space may change as a result. Preferably, once the Dijkstra computation over the unique lamp-post marker points has finished, the start point is fixed and the shortest distance from it to every point is already known (the elements of queue S), so only the end point needs to be changed and the corresponding shortest distance and point path read out.
Step S3: The plurality of lamp posts distributed across the smart park are invoked to acquire second image key-frame information while the target vehicle is driving. Based on the first and second image key-frame information and the license plate feature information, the vehicle re-identification feature association module deployed at the back end performs re-identification association through the re-identification network PVRS (Progressive Vehicle Re-identification System), generates the driving track of the target vehicle by combining the vehicle appearance features, the geographic tags of the lamp posts and the timestamps, calculates the average driving speed of the vehicle, and monitors the driving route in real time.
The vehicle re-identification feature association module includes 31) a self-supervising attentive vehicle appearance identification sub-module, 32) a license plate verification sub-module, and 33) a reordering sub-module based on spatiotemporal association information, as shown in fig. 5.
31) Self-supervising attention vehicle appearance recognition sub-module
The self-supervising attention vehicle appearance recognition sub-module extracts features from the image key-frame information based on a self-supervised attention mechanism to obtain the vehicle appearance features. In this embodiment, feature extraction is completed through self-supervised residual generation and deep feature extraction; the specific structural design is shown in fig. 6 and comprises the following steps:
Step S311: self-supervision residual error generation: the improved VAE architecture is used, the input image is downsampled through maximum pooling, dimensionality is reduced, the input image is re-parameterized through mean and covariance of potential features, namely, a variational automatic encoder, the potential feature mapping is upsampled finally, image reconstruction work is carried out, and the re-modeling type is pre-trained in the process by using mean square error (mse) and KL divergence (KL), as shown in the left half of fig. 6. The loss function formula is expressed as follows:
L_construct = L_mse + θ·L_kl
where θ adjusts the weight ratio of the mean square error to the KL divergence, L_mse is the mean square error loss, L_kl is the KL divergence loss, and L_construct is the loss function of the reconstruction model.
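A minimal sketch of this pre-training loss, assuming diagonal-Gaussian latents so the KL term has its usual closed form; the vectors and the value of θ are illustrative assumptions, not the patent's actual data.

```python
import math

# Sketch of the reconstruction pre-training loss L_construct = L_mse + theta * L_kl
# for a VAE with diagonal-Gaussian latents; shapes, values and theta are assumed.
def vae_loss(x, x_hat, mu, logvar, theta=0.5):
    n = len(x)
    l_mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / n     # reconstruction error
    # Closed-form KL( N(mu, exp(logvar)) || N(0, I) ) for one latent vector
    l_kl = -0.5 * sum(1 + lv - m ** 2 - math.exp(lv) for m, lv in zip(mu, logvar))
    return l_mse + theta * l_kl

# Perfect reconstruction and a latent matching the prior: both terms vanish.
loss = vae_loss(x=[0.0, 0.0, 0.0], x_hat=[0.0, 0.0, 0.0],
                mu=[0.0, 0.0], logvar=[0.0, 0.0])
```

At this optimum the loss is zero; θ trades reconstruction fidelity against how closely the latent distribution tracks the prior.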
Step S312: deep feature extraction: using a single-branch ResNet-50 feature extraction network, the vehicle image is projected into a low-dimensional vector space, and features that effectively characterize the identity of the vehicle are retained.
Step S313: weighting the original image and its residuals using learnable parameters allows the feature extraction network to weight the importance of each input source, as shown in the right half of fig. 6. The total loss function in this process is formulated as follows:
L = L_triplet + L_crossentropy + μ·L_construct
where L_triplet is the triplet loss, L_crossentropy is the cross-entropy loss, L_construct is the loss of the reconstruction model, and μ is a preconfigured tuning parameter, set to 100 in this embodiment.
In the triplet loss, for a given anchor a, B denotes the total number of batches, B_i denotes the i-th batch, s denotes another sample, offset denotes the distance margin threshold, P(a) and N(a) denote the positive and negative samples respectively, and Euclid(·) denotes the Euclidean distance between two samples.
In the cross-entropy loss, the two symbols denote the feature extracted from the i-th training image after the BN Neck layer and its corresponding ground-truth label; W_j and d_j are the weight vector and bias associated with category j in the final classification layer, and N and C denote the total number of samples and the number of categories during training, respectively.
Step S314: and adding the Euclidean distances of the extracted vehicle appearance features, and obtaining a similarity list according to the similarity degree to realize the extraction of the vehicle appearance features.
The neural network for vehicle appearance feature extraction is trained as above, and the resulting model is used for a preliminary screening of the re-identification vehicle data. However, matching images of the same vehicle by appearance alone is difficult, so the 32) license plate verification sub-module and the 33) reordering sub-module based on spatio-temporal correlation information are also combined to improve vehicle localization accuracy.
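The preliminary screening of step S314 can be sketched as a nearest-neighbor ranking over appearance feature vectors; the feature values below are hypothetical.

```python
import math

# Sketch of step S314: rank gallery vehicles by Euclidean distance between
# appearance feature vectors; smallest distance means most similar.
def similarity_list(query, gallery):
    """Return gallery indices sorted from most to least similar."""
    dists = [math.dist(query, feat) for feat in gallery]
    return sorted(range(len(gallery)), key=dists.__getitem__)

query = [1.0, 0.0, 0.0]
gallery = [[0.0, 1.0, 0.0],   # different vehicle
           [0.9, 0.1, 0.0],   # near match
           [1.0, 0.0, 0.1]]   # closest match
ranking = similarity_list(query, gallery)   # most similar gallery index first
```

The top entries of `ranking` are the candidates that the license plate and spatio-temporal sub-modules then re-screen.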
32) License plate verification sub-module
The license plate verification sub-module uses a Siamese neural network (SNN) for license plate verification; its structure is shown in fig. 7. Two CNNs with identical network structure share identical weights in both forward and backward computation; each CNN consists of two convolutional layers, a pooling layer and three fully connected layers. In this embodiment, feature extraction uses two convolutional layers with 5×5 kernels plus a pooling layer, and metric-space learning uses three fully connected layers with output channels of 1500, 1000 and 2 in turn. If the license plate information of the two input images is consistent, label 1 is manually assigned; otherwise label 0. The contrastive loss is built on the distance:
Euclid(x_1, x_2) = ||x_1 − x_2||
where x_1 and x_2 denote the feature vectors of the input samples x_1 and x_2 after forward propagation, Euclid(x_1, x_2) denotes the Euclidean distance between the two feature vectors, used to quantify their similarity, and hp is a hyperparameter, taken as 1 in this embodiment.
Following the steps above, the license plate information completes a further screening of the candidate match data.
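A sketch of the verification loss: the text gives only the Euclidean distance, so the standard contrastive loss with margin hp is an assumed form here, matching the stated label convention (1 = same plate, 0 = different).

```python
import math

# Sketch of the Siamese verification loss. The standard contrastive loss with
# margin hp is an ASSUMED form: matching pairs (label 1) are pulled together,
# non-matching pairs (label 0) are pushed at least hp apart.
def euclid(x1, x2):
    return math.dist(x1, x2)

def contrastive_loss(x1, x2, label, hp=1.0):
    d = euclid(x1, x2)
    return label * d ** 2 + (1 - label) * max(0.0, hp - d) ** 2

same = contrastive_loss([0.2, 0.4], [0.2, 0.4], label=1)   # identical plates
diff = contrastive_loss([0.0, 0.0], [3.0, 4.0], label=0)   # distance 5 > hp
```

Both example pairs incur zero loss: the matching pair is already collapsed, and the non-matching pair is farther apart than the margin.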
33) Reordering sub-module based on spatio-temporal correlation information
The reordering sub-module sorts the vehicle data screened by 31) and 32) in descending order of spatio-temporal correlation to obtain the optimal vehicle. Preferably, since vehicles in two images with higher spatio-temporal similarity are more likely to be the same vehicle, the vehicle data screened in steps S31 and S32 are reordered by spatio-temporal similarity and the optimal vehicle is selected. The spatio-temporal similarity is expressed as follows:
where a and b denote two images captured by the cameras of lamp posts L_a and L_b, T_a and T_b denote their timestamps, T_max denotes the maximum timestamp over all image frames captured by all park cameras within a period T, Dist(L_a, L_b) denotes the physical distance between the two lamp post cameras, D_max denotes the maximum physical distance between any two cameras in the park, μ denotes a weight taking values in (0, 1) — the closer to 1, the more the time similarity dominates; the closer to 0, the more the space similarity dominates — and STR(a, b) denotes the spatio-temporal similarity of the two images.
The spatio-temporal similarity and the features from 31) and 32) are fused by late fusion or top-k reordering, screening out the optimal vehicle re-identification data.
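A sketch of a spatio-temporal similarity of the kind described above. The exact STR formula does not survive in the text, so the form below — time and space terms normalized to [0, 1] and blended by μ — is an assumption consistent with the stated roles of T_max, D_max and μ.

```python
# ASSUMED form of the spatio-temporal similarity STR(a, b): normalized time and
# space similarities blended by the weight mu (mu near 1 favors time similarity,
# mu near 0 favors space similarity).
def str_similarity(t_a, t_b, dist_ab, t_max, d_max, mu=0.5):
    time_sim = 1.0 - abs(t_a - t_b) / t_max      # 1 when timestamps coincide
    space_sim = 1.0 - dist_ab / d_max            # 1 when cameras coincide
    return mu * time_sim + (1.0 - mu) * space_sim

# Two frames 10 s apart from cameras 50 m apart (T_max = 100 s, D_max = 200 m)
s = str_similarity(t_a=0, t_b=10, dist_ab=50, t_max=100, d_max=200, mu=0.5)
```

Candidate matches would be sorted by this score in descending order before the late-fusion or top-k step.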
Preferably, when the vehicle re-identification feature association module gives image a its optimal re-identification match b, i.e. the probability that the target vehicles in the two images are the same vehicle is highest, the average vehicle speed can be calculated as:

v = S(L_a, L_b) / |T_a − T_b|

where T_a and T_b denote the timestamps of the two images in the prediction result, and S(L_a, L_b) denotes the distance in meters along the road between the marker points corresponding to lamp posts L_a and L_b from step S24.
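A one-line sketch of this speed estimate; the road distance and timestamps are hypothetical.

```python
# Sketch of the average-speed estimate above: marker-point road distance divided
# by the timestamp gap between the two matched frames (values assumed).
def average_speed(s_ab_m: float, t_a: float, t_b: float) -> float:
    """Average speed in m/s between lamp posts La and Lb."""
    return s_ab_m / abs(t_b - t_a)

v = average_speed(s_ab_m=90.0, t_a=12.0, t_b=18.0)   # 90 m covered in 6 s
```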
Preferably, when the vehicle re-identification feature association module gives image a its optimal re-identification match b, i.e. the probability that the target vehicles in the two images are the same vehicle is highest, the driving track of the vehicle is generated by the following judgment:
If another lamp post L_c exists along the path of lamp posts L_a and L_b, and the back-end database already holds an optimal matching record of the vehicle re-identification feature association module for image a and image c — i.e. a driving track record a → c for the vehicle exists — then the vehicle's driving track in the back-end database is updated to a → b → c; otherwise, the driving track a → b of the vehicle is added to the back-end database directly.
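The track-update rule above can be sketched as follows; the in-memory dictionary stands in for the back-end database and the plate string is hypothetical.

```python
# Sketch of the track-update rule above: if a record a -> c already exists and
# b lies on the path between La and Lc, splice b in; otherwise append a -> b.
def update_track(tracks, vehicle, a, b, on_path_a_c=None):
    track = tracks.setdefault(vehicle, [])
    if on_path_a_c and a in track:
        i = track.index(a)
        # existing record a -> c with b on the path: insert b between them
        if i + 1 < len(track) and track[i + 1] == on_path_a_c:
            track.insert(i + 1, b)
            return track
    # otherwise append a -> b, skipping consecutive duplicates
    track.extend(p for p in (a, b) if not track or track[-1] != p)
    return track

db = {}
update_track(db, "SH-A12345", "La", "Lc")                    # first record: La -> Lc
update_track(db, "SH-A12345", "La", "Lb", on_path_a_c="Lc")  # splice Lb in between
```

After the second call the stored track reads La → Lb → Lc, matching the a → b → c update in the text.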
Step S4: The parking-space recognition and parking scheduling module deployed in the smart lamp posts and the back-end system detects parking spaces with a lightweight keypoint-based parking-space detection algorithm, and the lamp posts and ground locks jointly match vehicles to spaces; on a successful match, the vehicle's autonomous parking action is completed and whether the parking is standard is judged.
The parking space recognition and parking scheduling module comprises a parking space detection sub-module and an autonomous parking and parking judgment sub-module.
The parking-space detection sub-module performs step S41: parking spaces are detected with a lightweight keypoint-based algorithm, specifically: a lightweight convolutional neural network with a MobileNet-v3 backbone is trained to regress the key points (corner points) of a parking space; parking-space edges are detected in the image frames captured by the lamp post cameras with the Canny operator, which resists noise interference and detects weak edges well; then, with the regressed key points as vertices, the edge segments obtained by the Canny operator are extended — if fewer key points are predicted than the space actually has, the intersections of the extended edges are taken as new vertices; otherwise only the edges are extended — completing parking-space detection.
Fig. 8 is a schematic diagram of parking-space detection based on keypoint detection and edge-detection operators under a bird's-eye view. The black-and-white background represents the four corner lines of the space; the black points are the keypoint predictions from the keypoint detection model — note that a key point is defined as any of the four corner points of the space that the lamp post camera can capture; the thick black solid line is the edge detected by the Canny operator; the thin black dotted line is the straight line of the space edge obtained by extending the Canny edge in cooperation with the key points. The specific steps are as follows:
1) Training and deploying a model for detecting the angular points of the parking spaces;
2) Inputting an image frame containing a parking space, which is shot by a lamp post camera;
3) Predicting a parking space corner point by using a key point detection model for an input image frame, and detecting a parking space edge by using a Canny operator;
4) With the predicted corner points as vertices, extend the obtained edge segments; if fewer key points are predicted than the space actually has, take the intersections of the extended edges as new vertices, otherwise only extend the edges.
The Canny edge detection operator itself comprises: Gaussian smoothing, computing gradient magnitude and direction, non-maximum suppression of the magnitude by angle, and edge detection and linking with a double-threshold algorithm.
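Step 4) above recovers a missing corner as the intersection of two extended edge lines. A minimal geometric sketch, with lines given as (point, direction) pairs and all coordinates hypothetical:

```python
# Sketch of step 4): when fewer corners are predicted than the space actually
# has, the missing vertex is recovered as the intersection of two extended
# edge lines. Lines are p + t*d in 2D; geometry below is hypothetical.
def intersect(p1, d1, p2, d2):
    """Intersection of infinite lines p1 + t*d1 and p2 + s*d2, or None if parallel."""
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < 1e-9:
        return None                       # parallel edges: no new vertex
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two detected edges leaving predicted corners (0, 0) and (4, 3):
corner = intersect(p1=(0.0, 0.0), d1=(1.0, 0.0),   # bottom edge, horizontal
                   p2=(4.0, 3.0), d2=(0.0, 1.0))   # right edge, vertical
# the missing fourth corner of the space falls at (4.0, 0.0)
```

In the full pipeline the directions would come from the Canny edge segments and the points from the regressed key points.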
The autonomous parking and parking determination submodule performs steps S42 to S47.
Step S42: the matching condition of each parking space is imported and stored in a rear-end system, and the ground lock terminal sends a parking space matching obtaining request to the rear end through the intelligent lamp post to obtain the current matching vehicle of each parking space.
Step S43: and the ground lock in a lifted state detects whether the vehicle enters the current parking space range based on ultrasonic ranging, and if so, the camera corresponding to the lamp post is called to shoot the vehicle license plate and is sent to the rear-end system.
The calculation formula of ultrasonic ranging is:

L = T_flight × V_sound / 2

where L is the distance from the target to the transceiver, T_flight is the time of flight, and V_sound is the speed of sound in air.
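A one-function sketch of this formula; the assumed speed of sound and echo time are illustrative.

```python
# Sketch of the ultrasonic ranging formula L = T_flight * V_sound / 2 (the echo
# travels to the target and back, hence the division by two); values assumed.
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C

def range_from_echo(t_flight_s: float) -> float:
    """Distance in meters from the round-trip echo time."""
    return t_flight_s * SPEED_OF_SOUND / 2.0

d = range_from_echo(0.01)   # a 10 ms echo corresponds to about 1.7 m
```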
Step S44: and the rear end invokes a license plate verification submodule to identify license plate information of the target vehicle, a lamp post matched with the ground lock acquires an identification result of a vehicle re-identification network from a rear end server system, performs matching detection on the vehicle attempting to stop in the parking space, and transmits detection information into a ground lock controller.
Step S45: the ground lock controller matches the detection information with the current matched vehicle of the parking space in the step S42: if the matching is successful, the ground lock descends, the vehicle stops autonomously, and the vehicle stops into a parking space; otherwise, the ground lock keeps a lifting state, and the matching failure information is sent to the corresponding lamp post through radio frequency communication, the lamp post feeds back to the rear end, and the vehicle further obtains a matching result from a rear end system.
Step S46: in the process of successfully matching the parking space when the vehicle is parked, the camera of the intelligent lamp post shoots an image frame and sends the image frame to the back-end system, and the back-end system judges whether parking is standard or not and sends feedback information to the user terminal based on the parking space detection result in the step S41.
Step S47: the ground lock determines whether the vehicle is parked or not through the ultrasonic ranging device, if so, a parking signal is sent to a lamp post through a radio frequency technology, the lamp post records a current time stamp, and the current time stamp is used as a parking time stamp and is sent to a back-end system; meanwhile, the ground lock sends the parking result of the corresponding parking space to the lamp post matched with the ground lock, and then the lamp post is sent to the rear end, and the rear end updates the parking space state.
More preferably, the ground lock module and the smart lamp post system cooperate: when a lamp post identifies a suspected illegally parked vehicle, the information is passed to the ground lock terminal, and if the corresponding space is recorded as unoccupied in the back-end database, the vehicle is marked as illegally parked; the smart lamp post system then sends a request to the back end to invoke the vehicle re-identification module, acquires the vehicle track, locates the illegally parked vehicle, and exchanges the key information with the back-end system.
Step S5: when the ground lock detects that the target vehicle has driven out of the space, the parking-timing non-inductive payment module deployed on the smart ground lock and the back-end system counts the vehicle's parking duration via the ground lock, and the communication module transmits the parking fee calculated by the back end, realizing non-inductive payment.
Step S51: the ground lock detects whether the vehicle drives out of the vehicle position based on ultrasonic ranging, if so, the ground lock lifts up and sends a driving-out signal to the intelligent lamp post through the radio frequency communication module, and the lamp post records the current time stamp as the driving-out time stamp and sends the driving-out time stamp to the back-end system; meanwhile, the ground lock sends the vehicle driving-out result of the corresponding parking space to the lamp post matched with the ground lock, and then the lamp post is sent to the rear end, and the rear end updates the parking space state.
Step S52: the back-end system calculates the parking cost of the target vehicle according to the difference between the driving-out time stamp and the parking cost calculation rule.
Step S53: when the intelligent lamp post receives the outgoing signal from the ground lock, the re-identification network is called again to track and detect the running track of the vehicle until the vehicle exits the parking lot.
Step S54: the back-end system sends the parking fee to the car owner corresponding to the license plate of the target car through the mobile communication network, deducts the fee from the bound electronic payment account and sends a fee deduction notice, so that the non-inductive payment is realized.
Steps S4 and S5 both relate to an intelligent ground lock, which is an intelligent device combining license plate recognition technology with parking space lock. The ground lock recognition module comprises an ultrasonic ranging device, a camera, an embedded controller, a motor module and the like, and achieves the functions of license plate recognition, vehicle information discrimination, parking duration recording, background server interaction and the like. The core module of the parking space lock system is an embedded controller and an image acquisition module (camera).
Fig. 9 shows a block diagram of the overall design of a ground lock smart device, in combination with which the following embodiments are given:
An ARM-based microcomputer board with an SD/MicroSD card as storage and a 1.2 GHz 64-bit quad-core processor — a Raspberry Pi 3B — is adopted as the embedded controller module;
The camera part adopts the wide-angle camera dedicated to the Raspberry Pi, a CSI camera developed around the OV5647 sensor chip; still images are captured at a resolution of 2592 × 1944, i.e. 5-megapixel high definition, with high transmission speed, responsive capture and other characteristics.
The ultrasonic ranging module part adopts HC-SR04, the HC-SR04 module transmits ultrasonic waves through the signal transmitting end, the ultrasonic waves are reflected when encountering obstacles, and finally the receiving end monitors the reflected ultrasonic waves; the actual distance of the obstacle is solved by calculating the propagation speed and the flight time of the ultrasonic wave in the air.
And the motor driving module selects a TB6612 motor driving module according to the driving requirement of the remote control parking spot lock.
In this embodiment, the communication module mainly includes a lamp post and ground lock communication protocol, and a lamp post and back end communication protocol.
The lamp post–ground lock communication protocol mainly uses RF (radio frequency) technology: the RF stage modulates (amplitude or frequency modulation) an information source (analog or digital signal) onto a high-frequency carrier to form a radio frequency signal, which is radiated into the air through an antenna; after remote reception, the signal is demodulated and restored to an electrical information source. This process is wireless transmission. Radio frequency communication relies mainly on a transmitter and a receiver; the RF communication circuit is built around a microcontroller, which offers good compatibility, avoids over-complicated coding, and has a simple peripheral layout convenient to use; the radio frequency communication adopts a 2.4 GHz communication chip with low power consumption, four working modes and an integrated link-layer protocol, improving data transmission speed.
The lamp post–back end communication protocol mainly refers to mobile communication between the lamp post and the back end, including 3G/4G/5G mobile networks. This mode relies on communication base stations and offers convenient access, long transmission distance and low communication cost: an operator network signal is all that is needed to establish the smart lamp post's communication link. The smart lamp post module selects the 4G/5G communication mode for convenient, fast deployment with few coverage dead zones, allowing data to be moved to the cloud, docking with the cloud management platform and remote monitoring feedback, and improving the intelligent management level of the smart lamp post system.
The back-end system is used for storing all effective data information related to the global large scene, processing all data operations, and deploying all algorithms and neural networks contained in the technical scheme, wherein the algorithms comprise a license plate information recognition network, a time-space correlation vehicle re-recognition network, a parking space detection algorithm, a path planning algorithm and the like.
In summary, the present invention relates to vehicle re-identification, scene text recognition, image processing, multi-sensor data fusion, multi-modal data processing, neural networks, electronics, software, embedded computing, bus communications and related fields. For existing smart parking lots it offers a complete system conception and provides powerful support for the popularization of unmanned operation.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by a person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (7)

1. An intelligent park unmanned parking method based on space-time vehicle re-identification is characterized by comprising the following steps:
Step S1: when a target vehicle arrives at a parking lot entrance, acquiring first image key frame information of the target vehicle shot by a camera arranged on a lamp post at the entrance, and extracting license plate characteristic information;
Step S2: after the parking lot entrance gate releases the target vehicle, recommending an optimal parking space for the target vehicle based on a path planning algorithm, and constructing a path based on a unique point position of the lamp post to guide the vehicle to stop;
Step S3: invoking a plurality of lamp posts distributed in the intelligent park to acquire second image key frame information in the running process of the target vehicle, and based on the first and second image key frame information and license plate characteristic information, performing re-identification association processing by utilizing a vehicle re-identification network PVRS, generating a running track of the target vehicle by combining the appearance characteristics of the vehicle, geographic tags of the lamp posts and time stamps, calculating the average running speed of the vehicle, and monitoring the running route in real time;
Step S4: detecting a parking space based on a lightweight key point parking space detection algorithm, and matching and judging the vehicle and the parking space by combining the lamp post and the ground lock, if the matching is successful, completing the autonomous parking action of the vehicle and judging whether the parking is standard or not;
step S5: when the local lock detects that the target vehicle exits the parking space, the parking time of the vehicle is counted, the parking cost is calculated, and the noninductive payment is realized;
the vehicle re-recognition network PVRS comprises a self-supervision attention vehicle appearance recognition sub-module, a license plate verification sub-module and a reordering sub-module based on space-time correlation information;
The self-supervision attention vehicle appearance recognition submodule carries out feature extraction on the image key frame information based on a self-supervision attention mechanism to obtain vehicle appearance features, and the feature extraction is completed based on self-supervision residual error generation and deep feature extraction, and specifically comprises the following steps:
Step S311: self-supervision residual error generation: the improved VAE architecture is used, the input image is subjected to downsampling through maximum pooling, dimensionality is reduced, the input image is subjected to re-parameterization through mean and covariance of potential features, namely an automatic variable encoder, the potential feature mapping is subjected to upsampling, image reconstruction is carried out, the re-modeling type is pre-trained by using mean square error and KL divergence in the process, and a loss function formula is expressed as follows:
L_construct = L_mse + θ·L_kl
wherein θ adjusts the weight ratio of the mean square error to the KL divergence, L_mse is the mean square error loss, L_kl is the KL divergence loss, and L_construct is the loss function of the reconstruction model;
Step S312: deep feature extraction: projecting the vehicle image into a low-dimensional vector space using a single-branch ResNet-50 feature extraction network, and preserving features that effectively characterize the identity of the vehicle;
Step S313: weighting the original image and its residuals using learnable parameters allows the feature extraction network to weight the importance of each input source, where the total loss function is formulated as follows:
L = L_triplet + L_crossentropy + μ·L_construct
wherein L_triplet denotes the triplet loss, L_crossentropy denotes the cross-entropy loss, L_construct denotes the loss of the reconstruction model, and μ is a preconfigured tuning parameter;
in the triplet loss, for a given anchor a, B denotes the total number of batches, B_i denotes the i-th batch, s denotes another sample, offset denotes the distance margin threshold, P(a) and N(a) denote the positive and negative samples respectively, and Euclid(·) denotes the Euclidean distance between two samples;
in the cross-entropy loss, the two symbols denote the feature extracted from the i-th training image after the BN Neck layer and its corresponding ground-truth label; W_j and d_j are the weight vector and bias associated with category j in the final classification layer, and N and C denote the total number of samples and the number of categories during training, respectively;
Step S314: adding the Euclidean distances of the extracted appearance features of the vehicle, and obtaining a similarity list according to the similarity degree to realize the extraction of the appearance features of the vehicle;
the reordering submodule based on the space-time correlation information performs descending order arrangement screening through the following space-time similarity formula:
wherein a and b denote two images captured by the cameras of lamp posts L_a and L_b, T_a and T_b denote their timestamps, T_max denotes the maximum timestamp over all image frames captured by all park cameras within a period T, Dist(L_a, L_b) denotes the physical distance between the two lamp post cameras, D_max denotes the maximum physical distance between any two cameras in the park, and μ denotes a weight taking values in (0, 1) — the closer to 1, the more the time similarity dominates; the closer to 0, the more the space similarity dominates — with STR(a, b) denoting the spatio-temporal similarity of the two images.
2. The intelligent park unmanned parking method based on space-time vehicle re-identification of claim 1, wherein the license plate feature information extraction is implemented by a single-step target detection network based on YOLOv, the single-step target detection network improving the YOLOv network by: replacing the backbone with EfficientNet; adopting data enhancement methods of rotation or reflection transformation and noise disturbance; adding a mixup enhancement method on the basis of Mosaic; adding an adaptive feature fusion (ASFF) layer for weighted fusion of features at different levels; and training the network with image samples covering multiple situations as the training set, wherein the multiple situations include different lighting conditions, different weather, and different angles.
3. The smart park unmanned parking method based on space-time vehicle re-identification of claim 1, wherein the step S2 comprises the steps of:
Step S21: after the target vehicle is released by the gate at the parking-lot entrance, acquiring the park parking-space state sequence X(n)={x1,x2,…,xn}, wherein n represents the total number of park parking spaces and xi represents the state score of the parking space numbered i; when parking space i is an empty space, xi takes 1; when parking space i is not empty, xi takes a floating-point number ε, 0<ε<1;
Step S22: calculating a parking space evaluation function f (i):
f(i)=xi*(yi+zi)
wherein yi represents the parking-space distance evaluation score (the shorter the distance from the entrance, the higher the score), and zi represents the parking-difficulty evaluation score of the space, obtained by weighted calculation from the attributes of the parking space and of its surrounding environment;
Step S23: according to the parking-space evaluation function f(i), maintaining a priority queue pq of all parking spaces, in which a larger evaluation value places a space closer to the head of the queue; when the optimal parking space needs to be recommended, the queue-head element is the optimal space besti; the start point is set to the entrance and the end point to the optimal space, serving as the input of a single-source path-planning algorithm based on the Dijkstra algorithm;
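The priority queue of step S23 can be sketched with Python's heapq. heapq is a min-heap, so scores are negated to keep the largest f(i) at the front; the list names are illustrative:

```python
import heapq

def build_priority_queue(states, dist_scores, difficulty_scores):
    """Order spaces by f(i) = x_i * (y_i + z_i), largest first.

    states: x_i values (1 for empty, epsilon otherwise).
    dist_scores: y_i values.  difficulty_scores: z_i values.
    """
    pq = []
    for i, (x, y, z) in enumerate(zip(states, dist_scores, difficulty_scores), start=1):
        f = x * (y + z)
        heapq.heappush(pq, (-f, i))  # negate: min-heap -> max score on top
    return pq

def best_space(pq):
    """Return the number of the space with the largest evaluation score."""
    return pq[0][1]
```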
Step S24: exploiting the uniqueness of each lamp post's geographical position, binding each roadside lamp post to a nearby point located in the roadway as its marker point; with the lamp-post marker points as the vertices and the connectivity relations between marker points as the edges of an undirected graph, computing the shortest path with the Dijkstra algorithm, thereby completing path planning and guiding the vehicle to park.
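The shortest-path computation of step S24 can be sketched as a standard Dijkstra over the lamp-post marker graph. The adjacency structure and node names are illustrative; the patent does not fix a map representation:

```python
import heapq

def dijkstra(adj, start, goal):
    """Shortest path on an undirected lamp-post marker graph.

    adj: {node: [(neighbor, edge_length), ...]} built from the marker
    points and their connectivity relations.
    Returns (path from start to goal, total length).
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```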
4. The intelligent park unmanned parking method based on space-time vehicle re-identification of claim 1, wherein the license plate verification submodule adopts a twin neural network SNN to realize license plate verification, wherein the twin neural network SNN adopts two CNNs with identical network structures that share identical weights in both forward and backward computation; each CNN consists of two convolutional layers, a pooling layer and three fully connected layers; the contrastive loss function is expressed as follows:
L=y*Euclid(x1,x2)^2+(1-y)*max(hp-Euclid(x1,x2),0)^2
Euclid(x1,x2)=||x1-x2||
wherein x1 and x2 represent the feature vectors of the input samples x1 and x2 after forward propagation, Euclid(x1,x2) represents the Euclidean distance between the two feature vectors and is used to quantitatively characterize the similarity of the two vectors, y is the pair label (1 when the two samples show the same license plate, 0 otherwise), and hp is a hyperparameter (the margin).
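A minimal sketch of a margin-based contrastive loss of the kind described, with hp as the margin; the exact weighting used by the patent is not reproduced in this text, so the standard form is assumed:

```python
import numpy as np

def contrastive_loss(x1, x2, same_plate, hp):
    """Contrastive loss for a Siamese license-plate verifier.

    Pairs from the same plate are pulled together; pairs from different
    plates are pushed at least the margin hp apart.
    """
    d = np.linalg.norm(x1 - x2)  # Euclid(x1, x2)
    if same_plate:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, hp - d) ** 2
```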
5. The smart park unmanned parking method based on space-time vehicle re-identification of claim 1, wherein the step S4 comprises the steps of:
Step S41: detecting parking spaces with a lightweight keypoint parking-space detection algorithm, specifically: training a lightweight convolutional neural network whose backbone is MobileNet-v3 to perform regression prediction of parking-space keypoints; performing parking-space edge detection on the image frames captured by the lamp-post camera with the Canny operator; with the regressed keypoints as vertices, extending the edge segments obtained by the Canny operator; if the number of predicted keypoints is less than the actual number of parking-space keypoints, taking the intersections of the extended edges as new vertices, otherwise only extending the edges, thereby completing parking-space detection;
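The vertex-recovery step of S41 amounts to intersecting extended edge lines. A sketch of that geometric primitive (the surrounding keypoint network and Canny pipeline are not reproduced here):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1,p2) and (p3,p4).

    When fewer keypoints are regressed than the space actually has, the
    Canny edges are extended and their intersection becomes a new vertex.
    Returns None for (near-)parallel lines.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)
```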
step S42: the ground lock terminal acquires the current matched vehicle of each parking space;
step S43: the ground lock in the raised state detects, based on ultrasonic ranging, whether a vehicle has entered the range of the current parking space, and if so, calls the camera of the corresponding lamp post to photograph the vehicle's license plate;
Step S44: calling the license plate verification submodule to identify the license plate information of the target vehicle; the lamp post matched with the ground lock acquires the recognition result of the vehicle re-identification network, performs matching detection on the vehicle attempting to park in the space, and transmits the detection information to the ground lock controller;
Step S45: the ground lock controller matches the detection information against the vehicle currently matched to the parking space in step S42: if the matching succeeds, the ground lock descends and the vehicle autonomously parks into the space; otherwise the ground lock remains raised and the matching-failure information is saved;
step S46: while the vehicle parks into its matched space, the camera of the intelligent lamp post captures image frames, judges whether the parking is properly aligned based on the parking-space detection result of step S41, and sends feedback information to the user terminal;
Step S47: after the vehicle is parked in the parking space, the state of the parking space is updated, and the current time stamp is recorded and used as the parking time stamp.
6. The intelligent park unmanned parking method based on space-time vehicle re-identification of claim 5, wherein the calculation formula of the ultrasonic ranging is as follows:
L = T_flight × V_sound / 2
where L represents the distance from the target to the transceiver, T_flight represents the ultrasonic time of flight, and V_sound represents the propagation speed of sound in air.
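The round-trip halving in this formula can be written directly; 343 m/s is the usual speed of sound in air at about 20 °C, an assumed default since the claim does not fix a value:

```python
def ultrasonic_distance(t_flight_s, v_sound_m_s=343.0):
    """Distance from flight time: L = T_flight * V_sound / 2.

    The division by 2 accounts for the pulse travelling to the target
    and back to the transceiver.
    """
    return t_flight_s * v_sound_m_s / 2.0
```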
7. The smart park unmanned parking method based on space-time vehicle re-identification of claim 5, wherein the step S5 comprises the steps of:
Step S51: detecting, based on ultrasonic ranging, whether the vehicle has exited the parking space; if so, raising the ground lock, sending an exit signal to the intelligent lamp post, recording the current timestamp as the exit timestamp, and updating the parking-space state;
Step S52: calculating the parking fee of the target vehicle according to the difference between the exit timestamp and the parking timestamp, together with the parking-fee calculation rule;
step S53: when the intelligent lamp post receives the exit signal from the ground lock, calling the re-identification network again to track and detect the vehicle's trajectory until the vehicle drives out of the parking lot;
Step S54: sending the parking fee to the owner corresponding to the license plate of the target vehicle through the mobile communication network, deducting the fee from the bound electronic payment account, and sending a deduction notice, thereby realizing seamless payment.
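Step S52's fee computation can be sketched as follows. The per-hour rate, the rounding to started hours, and the free grace period are all assumptions for illustration; the patent only says the fee follows from the timestamp difference and a calculation rule:

```python
import math

def parking_fee(park_ts, exit_ts, rate_per_hour, free_minutes=15):
    """Fee from the parking/exit timestamp difference (seconds).

    Assumed rule: a grace period of free_minutes costs nothing; beyond
    that, every started hour is billed at rate_per_hour.
    """
    minutes = (exit_ts - park_ts) / 60.0
    if minutes <= free_minutes:
        return 0.0
    hours = math.ceil(minutes / 60.0)  # round up to started hours
    return hours * rate_per_hour
```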
CN202310130800.4A 2023-02-17 2023-02-17 Intelligent park unmanned parking method based on space-time vehicle re-identification Active CN116343522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310130800.4A CN116343522B (en) 2023-02-17 2023-02-17 Intelligent park unmanned parking method based on space-time vehicle re-identification


Publications (2)

Publication Number Publication Date
CN116343522A CN116343522A (en) 2023-06-27
CN116343522B true CN116343522B (en) 2024-06-28

Family

ID=86890588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310130800.4A Active CN116343522B (en) 2023-02-17 2023-02-17 Intelligent park unmanned parking method based on space-time vehicle re-identification

Country Status (1)

Country Link
CN (1) CN116343522B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112265541A (en) * 2020-10-13 2021-01-26 恒大新能源汽车投资控股集团有限公司 Automatic parking method and device based on PC5 air interface direct communication

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014085316A1 (en) * 2012-11-27 2014-06-05 Cloudparc, Inc. Controlling use of a single multi-vehicle parking space using multiple cameras
KR20200072597A (en) * 2018-12-06 2020-06-23 현대자동차주식회사 Automated Valet Parking System, and infrastructure and vehicle thereof
CN110264727A (en) * 2019-05-27 2019-09-20 同济大学 Multi-mode autonomous intelligence unmanned systems and method towards intelligence community parking application
JP7259698B2 (en) * 2019-10-17 2023-04-18 トヨタ自動車株式会社 automatic parking system
CN111081047A (en) * 2019-12-10 2020-04-28 重庆邮电大学 Accurate intelligent parking management method and management system based on photoelectric image processing
CN111914664A (en) * 2020-07-06 2020-11-10 同济大学 Vehicle multi-target detection and track tracking method based on re-identification
CN112509041B (en) * 2020-11-25 2024-05-03 杭州自动桌信息技术有限公司 Parking-lot-based vehicle positioning method, system and storage medium
CN113052008A (en) * 2021-03-01 2021-06-29 深圳市捷顺科技实业股份有限公司 Vehicle weight recognition method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112265541A (en) * 2020-10-13 2021-01-26 恒大新能源汽车投资控股集团有限公司 Automatic parking method and device based on PC5 air interface direct communication

Also Published As

Publication number Publication date
CN116343522A (en) 2023-06-27

Similar Documents

Publication Publication Date Title
EP3573024B1 (en) Building radar-camera surveillance system
CN110672111B (en) Vehicle driving path planning method, device, system, medium and equipment
CN111081064B (en) Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
CN108574929A (en) The method and apparatus for reproducing and enhancing for the networking scenario in the vehicle environment in autonomous driving system
CN108983219A (en) A kind of image information of traffic scene and the fusion method and system of radar information
CN106846870A (en) The intelligent parking system and method for the parking lot vehicle collaboration based on centralized vision
CN106740841A (en) Method for detecting lane lines, device and mobile unit based on dynamic control
CN110446160B (en) Deep learning method for vehicle position estimation based on multipath channel state information
CN113593250A (en) Illegal parking detection system based on visual identification
JP7145971B2 (en) Method and Vehicle System for Passenger Recognition by Autonomous Vehicles
CN111907517A (en) Automatic parking control method and system, vehicle and field end edge cloud system
CN112597807B (en) Violation detection system, method and device, image acquisition equipment and medium
CN115205559A (en) Cross-domain vehicle weight recognition and continuous track construction method
CN116013067A (en) Vehicle data processing method, processor and server
CN114905512A (en) Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot
CN110879990A (en) Method for predicting queuing waiting time of security check passenger in airport and application thereof
CN116343522B (en) Intelligent park unmanned parking method based on space-time vehicle re-identification
CN108416632B (en) Dynamic video identification method
CN111739332B (en) Parking lot management system
CN108416880B (en) Video-based identification method
CN115762172A (en) Method, device, equipment and medium for identifying vehicles entering and exiting parking places
CN113624223B (en) Indoor parking lot map construction method and device
CN115412844A (en) Real-time alignment method for vehicle networking beams based on multi-mode information synaesthesia
CN116700228A (en) Robot path planning method, electronic device and readable storage medium
CN114581748A (en) Multi-agent perception fusion system based on machine learning and implementation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant