CN111860072A — Parking control method and device, computer equipment and computer readable storage medium


Info

Publication number
CN111860072A
CN111860072A (application CN201910362438.7A)
Authority
CN
China
Prior art keywords
obstacle
coordinate frame
coordinate
parking control
neural network
Prior art date
Legal status
Pending
Application number
CN201910362438.7A
Other languages
Chinese (zh)
Inventor
谷俊
何俏君
尹超凡
李彦琳
付颖
彭斐
毛茜
王薏
Current Assignee
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN201910362438.7A priority Critical patent/CN111860072A/en
Publication of CN111860072A publication Critical patent/CN111860072A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/58 — Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 — Pattern recognition: classification techniques
    • G06N 3/045 — Neural networks: combinations of networks


Abstract

The invention discloses a parking control method comprising the following steps: acquiring image information around a vehicle collected by a sensor; obtaining an obstacle category and an obstacle coordinate frame from the image information using a pre-trained obstacle recognition model, wherein the obstacle recognition model is a lightweight convolutional neural network, coordinate frames correspond one-to-one with obstacles, and each coordinate frame encloses the edges of its obstacle so that the obstacle lies within the frame; and performing parking control according to the obstacle category and the obstacle coordinate frame. The invention also discloses a parking control device, a computer device and a computer-readable storage medium. The method and device provide real-time, accurate obstacle perception information for automatic parking at low cost and with a high recognition rate.

Description

Parking control method and device, computer equipment and computer readable storage medium
Technical Field
The present invention relates to the field of intelligent parking technologies, and in particular, to a parking control method, a parking control apparatus, a computer device, and a computer-readable storage medium.
Background
With growing urban populations, the number of private cars keeps increasing, while the number of parking spaces remains limited and their layout grows ever tighter. A fast and effective automatic parking technology is therefore needed to park cars correctly.
During parking, detecting obstacles in and around the parking space is critical. Most existing obstacle detection technologies rely on ultrasonic radar, lidar, or computer vision.
For example, one prior-art scheme for obstacle detection and parking-space discrimination in an intelligent parking system relies on vehicle positioning data and radar data: it applies coordinate transformation and fitting to the data points, fits a circumscribed rectangle around each obstacle, and judges whether a space satisfies the parking conditions, thereby implementing parking-space detection. However, this solution must integrate two-dimensional radar information (e.g. lidar) with positioning information (e.g. GPS); indoors, positioning information is hard to obtain, and lidar is expensive.
Another prior-art scheme builds a background with an improved Gaussian-mixture background modeling method, subtracts the background image from the original video image, separates foreground from target using an automatically computed Otsu threshold, and then detects obstacles via geometric constraints on extracted dynamic and static features. However, this scheme is susceptible to environmental conditions such as shadows and illumination, and background modeling is insensitive to static obstacles, making it unsuitable for detecting them.
A further class of techniques relies on multiple ultrasonic sensors around the vehicle body to capture environmental information in real time, estimating the distance to an obstacle and the obstacle's size from waves reflected by nearby objects. These solutions are limited by the characteristics of the ultrasonic sensor: echoes may not be received at rounded corners, and the echo signal contains substantial interference, so the size of an obstacle is hard to determine.
Disclosure of Invention
The invention aims to provide a parking control method and device, a computer device, and a computer-readable storage medium that are low-cost, achieve a high recognition rate, and can provide real-time, accurate obstacle perception information for automatic parking.
To solve the above technical problem, the present invention provides a parking control method comprising: acquiring image information around a vehicle collected by a sensor; obtaining an obstacle category and an obstacle coordinate frame from the image information using a pre-trained obstacle recognition model, wherein the obstacle recognition model is a lightweight convolutional neural network, coordinate frames correspond one-to-one with obstacles, and each coordinate frame encloses the edges of its obstacle so that the obstacle lies within the frame; and performing parking control according to the obstacle category and the obstacle coordinate frame.
As an improvement of this scheme, the network architecture of the lightweight convolutional neural network comprises at least 16 layers for feature extraction, and feature maps at no fewer than two scales are used for classification and detection-box regression.
As a further improvement, the 16 layers comprise 10 convolutional layers and 6 max-pooling layers and output feature maps of X channels to implement obstacle classification and coordinate frame regression, where X = (a + b + c) × d, a being the number of coordinate values of the coordinate frame, b the confidence value of the coordinate frame, c the number of obstacle categories, and d the number of prior frames. The two scales comprise a feature map of size A used for obstacle classification, and the size-A feature map upsampled and merged with a feature map of size 2A for coordinate frame regression, where A is the original image size reduced by a factor of n, n being a positive integer.
As an improvement of the above scheme, the method further includes training an obstacle recognition model, specifically including: acquiring a sample data set, a basic convolution neural network model and a loss function; combining the basic convolutional neural network model with the loss function to generate an initial convolutional neural network model; and training the initial convolutional neural network model according to the sample data set to generate an obstacle identification model.
As an improvement of the above scheme, the constructing step of the sample data set includes: acquiring an image sample set, wherein the image sample set comprises a plurality of image samples containing obstacles; labeling each image sample to generate labeling information, wherein the labeling information comprises obstacle category information and coordinate information of a coordinate frame of an obstacle; and combining the image sample sets containing the labeling information into a sample data set.
As an improvement of the above scheme, the loss function is the sum of a coordinate error function of the coordinate frame, an intersection-over-union (IoU) error function of the coordinate frame, and a classification error function. The coordinate error function of the coordinate frame is

$$\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{\omega_i}-\sqrt{\hat{\omega}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right]$$

where $x_i$ is the abscissa of the upper-left corner of the coordinate frame, $y_i$ its ordinate, $\omega_i$ the width of the coordinate frame, $h_i$ its height, $\lambda_{coord}$ the weight of the coordinate-frame position loss, $S$ such that the image is divided into $S \times S$ grid cells, $B$ the number of prior frames per grid cell, and $\mathbb{1}_{ij}^{obj}$ the first detection parameter. The IoU error function of the coordinate frame is

$$\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}(c_i-\hat{c}_i)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}(c_i-\hat{c}_i)^2$$

where $c_i$ is the IoU of the $i$-th coordinate frame with the ground-truth frame, $\lambda_{noobj}$ the loss weight for grid cells containing no target object, and $\mathbb{1}_{ij}^{noobj}$ the second detection parameter. The classification error function is

$$\sum_{i=0}^{S^2}\mathbb{1}_{i}^{obj}\sum_{c\in\text{classes}}\left(p_i(c)-\hat{p}_i(c)\right)^2$$

where $p_i$ is the confidence of recognizing the object.
Accordingly, the present invention also provides a parking control apparatus comprising: an acquisition module for acquiring image information around the vehicle collected by a sensor; an identification module for obtaining an obstacle category and an obstacle coordinate frame from the image information using a pre-trained obstacle recognition model, wherein the obstacle recognition model is a lightweight convolutional neural network, coordinate frames correspond one-to-one with obstacles, and each coordinate frame encloses the edges of its obstacle so that the obstacle lies within the frame; and a control module for performing parking control according to the obstacle category and the obstacle coordinate frame.
Further, the network architecture model of the lightweight convolutional neural network in the identification module at least comprises 16 layers for feature extraction, and classification and detection box regression are performed by using feature maps of at least two scales.
Correspondingly, the invention further provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the parking control method when executing the computer program.
Accordingly, the present invention also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the parking control method described above.
The implementation of the invention has the following beneficial effects:
the method introduces a lightweight convolutional neural network frame into the field of parking space obstacle identification, realizes accurate identification of obstacles, can provide real-time and accurate obstacle sensing information for automatic parking, and has higher identification rate on static objects and obstacle types compared with the prior art.
Drawings
Fig. 1 is a flowchart of a parking control method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of an obstacle recognition model according to the present invention;
FIG. 3 is a flow chart of the steps of constructing the obstacle identification model according to the present invention;
FIG. 4 is a graph of a loss function in the present invention;
FIG. 5 is a flowchart of the construction steps of the sample data set in the present invention;
fig. 6 is a schematic structural view of the parking control apparatus according to the first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart of a parking control method according to a first embodiment of the present invention, which includes:
and S101, acquiring image information around the vehicle acquired by the sensor.
The sensor may be, but is not limited to, a camera. Preferably there are four sensors, arranged at the front, rear, left, and right of the vehicle body, so that image information around the vehicle (i.e. inside and outside the parking space) is captured omnidirectionally.
And S102, obtaining the obstacle type and the obstacle coordinate frame according to the image information and the obstacle recognition model trained in advance.
It should be noted that the obstacle identification model in the present invention is a lightweight convolutional neural network. Specifically, the obstacle recognition model is formed by training a convolutional neural network model through a sample data set, whether an obstacle is contained in image information or not can be detected in real time through the obstacle recognition model, the type and the coordinate of the obstacle can be recognized, and real-time and accurate obstacle sensing information is provided for automatic parking.
Specifically, the parking control apparatus recognizes the obstacle category in the image information using the obstacle recognition model. The obstacle may be, but is not limited to, a traffic cone (ice-cream cone), a place-holder rod, a ground lock in the open state, a ground lock in the closed state, a pedestrian, a bicycle, or an automobile.
Meanwhile, the parking control apparatus recognizes the obstacle coordinate frame using the obstacle recognition model. In the invention, obstacles are marked in the form of coordinate frames: the edges of an obstacle are enclosed by its coordinate frame so that the obstacle lies completely, or mostly, within the frame, and obstacles correspond one-to-one with coordinate frames. The coordinate frame is preferably rectangular, and its coordinate information comprises the pixel coordinates of the upper-left and lower-right corners of the rectangle, so the coordinates of the obstacle can be determined efficiently from the frame's coordinate information.
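As a minimal sketch of this representation (class and field names are hypothetical, not from the patent), a rectangular coordinate frame stored as upper-left and lower-right pixel coordinates directly yields width, height, and containment tests:

```python
from dataclasses import dataclass


@dataclass
class ObstacleBox:
    """Rectangular coordinate frame: upper-left and lower-right pixel coordinates."""
    category: str
    x1: int  # upper-left x
    y1: int  # upper-left y
    x2: int  # lower-right x
    y2: int  # lower-right y

    @property
    def width(self) -> int:
        return self.x2 - self.x1

    @property
    def height(self) -> int:
        return self.y2 - self.y1

    def contains(self, px: int, py: int) -> bool:
        """True if pixel (px, py) lies inside the coordinate frame."""
        return self.x1 <= px <= self.x2 and self.y1 <= py <= self.y2


box = ObstacleBox("car", 120, 80, 360, 240)
print(box.width, box.height, box.contains(200, 100))  # 240 160 True
```

Because the two corners fix the rectangle completely, no separate width/height fields need to be stored or kept in sync.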
Accordingly, the obstacle identification model may be installed in an in-vehicle computing platform.
As shown in fig. 2, the obstacle recognition model includes at least 16 layers for feature extraction. Specifically, the 16 layers comprise 10 convolutional layers and 6 max-pooling layers and output feature maps of X channels to implement obstacle classification and coordinate frame regression, where X = (a + b + c) × d, a being the number of coordinate values of the coordinate frame, b the confidence value of the coordinate frame, c the number of obstacle categories, and d the number of prior frames; specifically, b ranges from 0 to 1.
For example, the model predicts with 3 kinds of prior frames, using the rectangular coordinate-frame form (4 coordinate values: abscissa, ordinate, width, and height), 1 frame confidence value, and detection probabilities for 10 obstacle categories. Each prior frame therefore outputs a (4 + 1 + 10)-dimensional vector, so the 3 prior frames output (4 + 1 + 10) × 3 = 45 channels in total.
To detect objects of different sizes in the same image, the obstacle recognition model performs obstacle classification and coordinate frame regression at two scales. Specifically, a feature map of size A is used for obstacle classification, and the size-A feature map is upsampled and merged with a feature map of size 2A for coordinate frame regression, where A is the original image size reduced by a factor of n and n is a positive integer. For example, with a model input size of 416 × 416 and n = 32, downsampling yields a 13 × 13 feature map: one 13 × 13 feature map classifies the obstacle, while the other is upsampled and merged with a 26 × 26 feature map (obtained by 16× downsampling of the 416 × 416 original) before coordinate frame regression is performed. The 13 × 13 branch has a large downsampling factor and hence a large receptive field, making it suitable for detecting large objects in the image; the merged 26 × 26 branch is better suited to smaller objects.
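The channel-count and scale arithmetic above can be checked with a short sketch (function names are illustrative, not from the patent):

```python
def output_channels(a: int, b: int, c: int, d: int) -> int:
    """X = (a + b + c) * d: coordinate values + confidence + classes, per prior frame."""
    return (a + b + c) * d


def feature_map_size(input_size: int, downsample: int) -> int:
    """Side length of a square feature map after integer downsampling."""
    return input_size // downsample


# 4 box coordinates, 1 confidence value, 10 obstacle categories, 3 prior frames
print(output_channels(4, 1, 10, 3))  # 45 channels
print(feature_map_size(416, 32))     # 13 -> classification scale
print(feature_map_size(416, 16))     # 26 -> merged regression scale
```

The same two helpers also make it easy to sanity-check other input resolutions before committing to a network configuration.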
And S103, performing parking control according to the obstacle type and the obstacle coordinate frame.
After the parking control apparatus identifies the category and coordinate information of obstacles inside and outside the parking space, it sends this information to the display so that the user can intuitively see the relative positions and sizes of the different obstacles, and the apparatus can then conveniently perform parking control according to the obstacle category and coordinate frame.
The display is preferably an on-vehicle display, but any device with a display function may be used, giving strong flexibility. When the image information is displayed, the obstacle category information, the coordinate frame, and the frame's coordinate information are fused with the image (i.e. drawn onto the displayed image), which is highly intuitive.
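As an illustrative sketch of fusing category and frame information into a display label (the function and its text format are assumptions, not the patent's implementation):

```python
def overlay_label(category: str, box: tuple, confidence: float) -> str:
    """Format one detection as the text drawn next to its coordinate frame.

    box holds the upper-left and lower-right pixel coordinates (x1, y1, x2, y2).
    """
    x1, y1, x2, y2 = box
    return f"{category} {confidence:.2f} [{x1},{y1} -> {x2},{y2}]"


print(overlay_label("pedestrian", (40, 60, 90, 200), 0.91))
# pedestrian 0.91 [40,60 -> 90,200]
```

A real system would hand strings like this, together with the box corners, to a drawing routine that renders them over the camera image.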
Thus, the invention introduces a lightweight convolutional neural network framework into the field of parking-space obstacle recognition. The obstacle recognition model accurately identifies the obstacle category and coordinate-frame coordinates, providing real-time, accurate obstacle perception information for automatic parking, achieving a high recognition rate for static objects, and enabling the parking control apparatus to perform parking control according to the obstacle category and coordinate frame.
As shown in fig. 3, the training step of the obstacle recognition model includes:
s201, acquiring a sample data set, a basic convolution neural network model and a loss function;
the sample data set includes a plurality of sample data containing obstacles.
In the invention, the sample data set contains 5000 samples, but the invention is not limited thereto, as long as the training requirement is met; meanwhile, the user may select a basic convolutional neural network model according to actual needs. In addition, combining the characteristics of the samples, the invention designs a dedicated loss function from edge parameters such as the obstacle categories and the coordinate-frame coordinates; the loss function comprises a coordinate error function of the coordinate frame, an intersection-over-union (IoU) error function of the coordinate frame, and a classification error function. Specifically:
the coordinate error function of the coordinate frame is used for expressing the coordinate accumulated error of the coordinate frame and comprises a vertical coordinate error function of the coordinate frame and a horizontal coordinate error function of the coordinate frame and a width error function of the coordinate frame. The coordinate frame has a vertical and horizontal coordinate error function of
Figure BDA0002047234750000061
Wherein x isiDenotes the abscissa of the upper left corner of the coordinate frame, yi denotes the ordinate of the upper left corner of the coordinate frame, λcoordThe weight loss of the coordinate frame position is S, the image is divided into S-S grids, B is the number of the prior frames corresponding to each grid,
Figure BDA0002047234750000062
For the first detection parameter, when the target object is detected in a single grid and the intersection ratio is maximum in the B prior boxes,
Figure BDA0002047234750000063
the value is 1, otherwise, the value is 0; coordinate frame width height error function of
Figure BDA0002047234750000064
Wherein, ω isiDenotes the width, h, of the coordinate frameiRepresenting the height, λ, of the coordinate framecoordThe weight loss of the coordinate frame position is S, the image is divided into S-S grids, B is the number of the prior frames corresponding to each grid,
Figure BDA0002047234750000065
for the first detection parameter, when the target object is detected in a single grid and the intersection ratio is maximum in the B prior boxes,
Figure BDA0002047234750000066
the value is 1, otherwise 0.
The coordinate frame intersection ratio error function is used for expressing the intersection ratio accumulated error of the coordinate frame and has the function formula of
Figure BDA0002047234750000067
Wherein, ciIs the intersection ratio of the ith coordinate frame and the real frame, lambdanoobjThe weight loss of the target object in a single grid is defined, wherein S means that the image is divided into S-S grids, B is the number of the prior frames corresponding to each grid,
Figure BDA0002047234750000068
for the second detection parameter, when the target object is not detected in the single mesh,
Figure BDA0002047234750000069
the value is 1, otherwise 0.
The classification error function is used for expressing the accumulated error of the obstacle classification and has the function formula of
Figure BDA00020472347500000610
Wherein p isiExpressed as confidence in the recognition of the object, S means the division of the image into S-S meshes And when the center of the detected object falls in the ith grid,
Figure BDA00020472347500000611
is 1, otherwise is 0.
Further, the loss function is the sum of a coordinate error function of the coordinate frame, an intersection ratio error function of the coordinate frame and a classification error function, namely the loss function is as follows:
Figure BDA00020472347500000612
Figure BDA0002047234750000071
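The summed loss described above can be sketched in a few lines of numpy. The (S², B, 5 + C) tensor layout, the function name, and the default weights are assumptions for illustration, not values taken from the patent:

```python
import numpy as np


def parking_loss(pred, truth, obj_mask, lam_coord=5.0, lam_noobj=0.5):
    """Sum-of-squares detection loss sketch over an S*S grid with B prior frames.

    pred, truth: arrays of shape (S*S, B, 5 + C) laid out as
        [x, y, w, h, c, p_1..p_C]  (c = IoU/confidence, p = class scores).
    obj_mask: boolean (S*S, B) array, the "first detection parameter".
    """
    noobj_mask = ~obj_mask
    # coordinate error: (x, y) terms plus sqrt-width/height terms
    e_xy = np.sum(obj_mask * ((pred[..., 0] - truth[..., 0]) ** 2
                              + (pred[..., 1] - truth[..., 1]) ** 2))
    e_wh = np.sum(obj_mask * ((np.sqrt(pred[..., 2]) - np.sqrt(truth[..., 2])) ** 2
                              + (np.sqrt(pred[..., 3]) - np.sqrt(truth[..., 3])) ** 2))
    # IoU/confidence error, down-weighted where no object is present
    e_iou = (np.sum(obj_mask * (pred[..., 4] - truth[..., 4]) ** 2)
             + lam_noobj * np.sum(noobj_mask * (pred[..., 4] - truth[..., 4]) ** 2))
    # classification error, only in grid cells that contain an object
    cell_has_obj = obj_mask.any(axis=1, keepdims=True)
    e_cls = np.sum(cell_has_obj[..., None] * (pred[..., 5:] - truth[..., 5:]) ** 2)
    return lam_coord * (e_xy + e_wh) + e_iou + e_cls


# identical prediction and ground truth give zero loss
demo = np.zeros((4, 1, 7))
demo[..., 2:4] = 1.0  # keep width/height positive for the sqrt terms
m = np.zeros((4, 1), dtype=bool)
m[0, 0] = True
print(parking_loss(demo, demo.copy(), m))  # 0.0
```

During training such a function would be evaluated on each batch and differentiated by the framework; the sketch only mirrors the algebraic structure of the five summed terms.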
s202, combining the basic convolutional neural network model with the loss function to generate an initial convolutional neural network model.
And setting a loss function in the basic convolutional neural network model to form an initial convolutional neural network model.
And S203, training the initial convolutional neural network model according to the sample data set to generate an obstacle identification model.
And inputting a sample data set into the initial convolutional neural network model, and iteratively training the initial convolutional neural network model through a loss function to obtain a trained obstacle identification model.
It should be noted that the goal of training is for the loss function to decrease gradually to a stable value. The loss-function curve during training is shown in fig. 4: as the number of iterations increases, the loss first drops rapidly and then decreases smoothly; at 500,000 iterations the loss settles at a stable value (about 0.22). After training completes, a weight file in the weights format is obtained.
In the invention, a sample data set with 5000 samples in total is adopted for training, the training learning rate is 0.001, the number of batch (batch) pictures for each training is 64, and the maximum iteration number is 500000.
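For orientation, the stated schedule (batch size 64, 500,000 iterations, 5000 samples) implies how many full passes over the data are made; a quick check (the helper name is illustrative):

```python
def training_passes(iterations: int, batch_size: int, dataset_size: int) -> float:
    """Number of full passes (epochs) over the data implied by the schedule."""
    return iterations * batch_size / dataset_size


print(training_passes(500_000, 64, 5_000))  # 6400.0
```

That is roughly 6400 passes over the 5000-sample set, which is consistent with training a small network from scratch until the loss plateaus.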
Thus, by designing a dedicated loss function, the invention constructs an accurate obstacle recognition model, realizing efficient, high-accuracy obstacle recognition.
As shown in fig. 5, the step of constructing the sample data set includes:
s301, an image sample set is obtained, wherein the image sample set comprises a plurality of image samples containing obstacles.
The invention collects more than 5000 image samples containing traffic cones, place-holder boards, place-holder rods, ground locks in the open state, ground locks in the closed state, pedestrians, bicycles, automobiles, and other obstacles in different shooting scenes to form an image sample set.
And S302, performing labeling processing on each image sample to generate labeling information.
Each image is labeled with a labeling tool to generate labeling information, which comprises the obstacle category information and the coordinate information of the obstacle coordinate frame (the upper-left and lower-right pixel coordinates of the frame). The labeling information of each image sample is stored in xml format.
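The patent does not specify the xml schema; assuming a Pascal-VOC-style layout, a common convention for coordinate-frame labels, reading an annotation back might look like this (element names are assumptions):

```python
import xml.etree.ElementTree as ET

SAMPLE = """
<annotation>
  <object>
    <name>traffic_cone</name>
    <bndbox><xmin>120</xmin><ymin>80</ymin><xmax>200</xmax><ymax>230</ymax></bndbox>
  </object>
</annotation>
"""


def parse_annotation(xml_text: str):
    """Return (category, (x1, y1, x2, y2)) tuples from a VOC-style annotation."""
    root = ET.fromstring(xml_text)
    out = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        out.append((name, box))
    return out


print(parse_annotation(SAMPLE))  # [('traffic_cone', (120, 80, 200, 230))]
```

Storing one xml file per image keeps the category and box coordinates tied to the sample they label, which is what the training step consumes.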
And S303, combining the image sample sets containing the labeling information into a sample data set.
Therefore, the invention realizes the accurate classification and positioning of the obstacles by adopting a characteristic labeling mode, and lays a solid foundation for constructing an accurate obstacle identification model.
Referring to fig. 6, fig. 6 shows a first embodiment of a parking control apparatus 100 of the present invention, which includes:
the acquisition module 1 is used for acquiring the image information around the vehicle acquired by the sensor 1. The sensor may be a camera, but is not limited thereto; meanwhile, the number of the sensors is preferably four, and the sensors can be arranged in the front, the rear, the left and the right directions of the shell of the vehicle, so that the all-dimensional acquisition of the image information around the vehicle (namely inside and outside the parking space) is realized.
The identification module 2 is used to obtain the obstacle category and the obstacle coordinate frame from the image information using the pre-trained obstacle recognition model. As before, the obstacle recognition model is a lightweight convolutional neural network formed by training a convolutional neural network model on a sample data set; it detects in real time whether the image information contains an obstacle and recognizes the obstacle's category and coordinates, providing real-time, accurate obstacle perception information for automatic parking. Preferably, the obstacle recognition model includes at least 16 layers for feature extraction: 10 convolutional layers and 6 max-pooling layers, outputting feature maps of X channels to implement obstacle classification and coordinate frame regression, where X = (a + b + c) × d, a being the number of coordinate values of the coordinate frame, b the confidence value of the coordinate frame (ranging from 0 to 1), c the number of obstacle categories, and d the number of prior frames. Meanwhile, to detect objects of different sizes in the same image, the model performs obstacle classification and coordinate frame regression at two scales: a feature map of size A is used for obstacle classification, and the size-A feature map is upsampled and merged with a feature map of size 2A for coordinate frame regression, where A is the original image size reduced by a factor of n, n being a positive integer.
Specifically, the recognition module 2 may recognize the obstacle category information in the image information according to the obstacle recognition model. The obstacle may be a ice cream cone, a place holder rod, a ground lock in an open state, a ground lock in a closed state, a pedestrian, a bicycle, an automobile, or the like, but is not limited thereto.
The recognition module 2 may also recognize the coordinate-frame coordinate information of the obstacle using the obstacle recognition model. The obstacle is annotated in the form of a coordinate frame: the edges of the obstacle are enclosed by the frame so that the obstacle lies entirely within it, and obstacles correspond one-to-one with coordinate frames. The coordinate frame is preferably rectangular, and its coordinate information comprises the pixel coordinates of the upper-left and lower-right corners of the rectangle, so the coordinates of the obstacle can be determined efficiently from the frame's coordinate information.
The identification module 2 may also generate an obstacle detection result according to the obstacle category information and the coordinate frame coordinate information. The obstacle detection result comprises the obstacle category information and the coordinate frame coordinate information. When the obstacle detection result is displayed, the obstacle category information, the coordinate frame, and the coordinate frame coordinate information are fused with the image information (namely, drawn on the image to be displayed), which is highly intuitive. Meanwhile, the display is preferably an on-vehicle display, but is not limited thereto: any device with a display function may be used, which provides strong flexibility.
And the control module 3 is used for carrying out parking control according to the obstacle type and the obstacle coordinate frame.
Therefore, the lightweight convolutional neural network framework is introduced into the field of parking space obstacle identification, and the obstacle category and coordinate frame coordinates are accurately identified through the obstacle recognition model, so that accurate identification of obstacles is achieved and real-time, accurate obstacle sensing information can be provided for automatic parking.
Further, the sample data set includes a plurality of sample data containing obstacles. In the invention, the sample data set contains 5000 samples, but the invention is not limited thereto, as long as the training requirements are met; meanwhile, a user can select a basic convolutional neural network model according to actual requirements. In addition, combining the characteristics of the samples, the invention extracts edge parameters such as the obstacle category and the coordinate frame coordinates and designs a unique loss function, which comprises a coordinate frame coordinate error function, a coordinate frame intersection-over-union error function, and a classification error function.
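The intersection-over-union (IoU) quantity used by the coordinate frame intersection-over-union error function can be computed as in this sketch, assuming boxes in the upper-left/lower-right pixel-coordinate format described above (the function name is illustrative):

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # -> 0.3333333333333333
```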
It should be noted that the goal of training is to gradually decrease the loss function to a stable value. The loss function curve during training is shown in fig. 5: as the number of iterations increases, the loss first decreases rapidly and then declines smoothly; when 500000 iterations are reached, the loss value falls to a stable value (about 0.22). After training is completed, a weight file with a weights-format suffix is obtained.
In the invention, a sample data set with 5000 samples in total is adopted for training; the training learning rate is 0.001, the number of pictures per training batch is 64, and the maximum number of iterations is 500000.
Therefore, the method constructs an accurate obstacle identification model by designing a unique loss function, realizes efficient identification of the obstacle, and has high accuracy. For details of model and sample construction, please refer to the foregoing embodiments, which are not repeated herein.
Correspondingly, the invention further provides computer equipment which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the parking control method when executing the computer program. Meanwhile, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, implements the steps of the parking control method described above.
The method can be used for automatic parking and automatic driving. The lightweight convolutional neural network framework is introduced into the field of parking space obstacle identification, achieving accurate identification of obstacles and providing real-time, accurate obstacle sensing information for automatic parking; compared with the prior art, the method has a high recognition rate for static objects and obstacle categories. Furthermore, the method designs a unique loss function based on edge parameters such as the obstacle category and the coordinate frame coordinates, constructs an accurate obstacle recognition model, and realizes efficient identification of obstacles with high accuracy.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A parking control method characterized by comprising:
acquiring image information around a vehicle acquired by a sensor;
obtaining an obstacle category and an obstacle coordinate frame according to the image information and an obstacle recognition model trained in advance, wherein the obstacle recognition model is a lightweight convolutional neural network, the obstacle coordinate frames correspond one-to-one with obstacles, and the edge of each obstacle is surrounded by its coordinate frame so that the obstacle lies within the coordinate frame;
and carrying out parking control according to the obstacle type and the obstacle coordinate frame.
2. The vehicle parking control method according to claim 1, wherein the network architecture model of the lightweight convolutional neural network includes at least 16 layers for feature extraction, and classification and detection box regression are performed using feature maps of at least two scales.
3. The vehicle parking control method according to claim 2,
The 16 layers comprise 10 convolutional layers and 6 maximum pooling layers, and output a feature map of X channels to realize obstacle classification and coordinate frame regression, wherein:
X = (a + b + c) × d, where a is the number of coordinate values of the coordinate frame, b is the confidence value of the coordinate frame, c is the number of obstacle categories, and d is the number of prior frames;
the two scales comprise performing obstacle classification on the feature map of size A, and upsampling the feature map of size A and merging it with the feature map of size 2A to perform coordinate frame regression, wherein A is the original image size reduced by a factor of n, and n is a positive integer.
4. The vehicle parking control method according to any one of claims 1 to 3, characterized in that the method further includes obstacle recognition model training, specifically including:
acquiring a sample data set, a basic convolution neural network model and a loss function;
combining the basic convolutional neural network model with the loss function to generate an initial convolutional neural network model;
and training the initial convolutional neural network model according to the sample data set to generate an obstacle identification model.
5. The vehicle parking control method according to claim 4, wherein the construction of the sample data set includes:
Acquiring an image sample set, wherein the image sample set comprises a plurality of image samples containing obstacles;
labeling each image sample to generate labeling information, wherein the labeling information comprises obstacle category information and coordinate information of a coordinate frame of an obstacle;
and combining the image sample sets containing the labeling information into a sample data set.
6. The vehicle parking control method according to claim 4, wherein the loss function is a sum of a coordinate frame coordinate error function, a coordinate frame intersection ratio error function, and a classification error function;
the coordinate error function of the coordinate frame is
Figure FDA0002047234740000021
Wherein x isiIs the abscissa, y, of the upper left corner of the coordinate frameiIs the ordinate, ω, of the upper left corner of the coordinate frameiIs the width of the coordinate frame, hiIs the height of the coordinate frame, λcoordThe weight loss of the coordinate frame position is S, the image is divided into S-S grids, B is the number of the prior frames corresponding to each grid,
Figure FDA0002047234740000024
is a first detection parameter;
the frame cross-to-parallel ratio error function is
Figure FDA0002047234740000022
Wherein, CiIs the intersection ratio of the ith coordinate frame and the real frame, lambdanoobjFor the missing weight of no target object within a single mesh,
Figure FDA0002047234740000025
is a second detection parameter;
the above-mentionedA classification error function of
Figure FDA0002047234740000023
Wherein p isiTo identify confidence in the object.
7. A parking control apparatus, characterized by comprising:
the acquisition module is used for acquiring image information around the vehicle acquired by the sensor;
the identification module is used for obtaining an obstacle category and an obstacle coordinate frame according to the image information and an obstacle recognition model trained in advance, wherein the obstacle recognition model is a lightweight convolutional neural network, the obstacle coordinate frames correspond one-to-one with obstacles, and the edge of each obstacle is surrounded by its coordinate frame so that the obstacle lies within the coordinate frame;
and the control module is used for carrying out parking control according to the obstacle type and the obstacle coordinate frame.
8. The vehicle parking control apparatus according to claim 7, wherein the network architecture model of the lightweight convolutional neural network in the recognition module includes at least 16 layers for feature extraction, and classification and detection box regression are performed using feature maps of at least two scales.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201910362438.7A 2019-04-30 2019-04-30 Parking control method and device, computer equipment and computer readable storage medium Pending CN111860072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910362438.7A CN111860072A (en) 2019-04-30 2019-04-30 Parking control method and device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910362438.7A CN111860072A (en) 2019-04-30 2019-04-30 Parking control method and device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111860072A true CN111860072A (en) 2020-10-30

Family

ID=72966663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910362438.7A Pending CN111860072A (en) 2019-04-30 2019-04-30 Parking control method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111860072A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884831A (en) * 2021-02-02 2021-06-01 清华大学 Method for extracting long-term static characteristics of indoor parking lot based on probability mask
CN113205059A (en) * 2021-05-18 2021-08-03 北京纵目安驰智能科技有限公司 Parking space detection method, system, terminal and computer readable storage medium
CN113264037A (en) * 2021-06-18 2021-08-17 安徽江淮汽车集团股份有限公司 Obstacle recognition method applied to automatic parking
CN113610056A (en) * 2021-08-31 2021-11-05 的卢技术有限公司 Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium
CN114802261A (en) * 2022-04-21 2022-07-29 合众新能源汽车有限公司 Parking control method, obstacle recognition model training method and device
CN115100377A (en) * 2022-07-15 2022-09-23 小米汽车科技有限公司 Map construction method and device, vehicle, readable storage medium and chip
RU2785822C1 (en) * 2022-11-14 2022-12-14 Ольга Дмитриевна Миронова Way to warn about the presence of an obstacle on the way

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986526A (en) * 2018-07-04 2018-12-11 深圳技术大学(筹) A kind of intelligent parking method and system of view-based access control model sensing tracking vehicle
CN109325418A (en) * 2018-08-23 2019-02-12 华南理工大学 Based on pedestrian recognition method under the road traffic environment for improving YOLOv3
CN109447033A (en) * 2018-11-14 2019-03-08 北京信息科技大学 Vehicle front obstacle detection method based on YOLO

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986526A (en) * 2018-07-04 2018-12-11 深圳技术大学(筹) A kind of intelligent parking method and system of view-based access control model sensing tracking vehicle
CN109325418A (en) * 2018-08-23 2019-02-12 华南理工大学 Based on pedestrian recognition method under the road traffic environment for improving YOLOv3
CN109447033A (en) * 2018-11-14 2019-03-08 北京信息科技大学 Vehicle front obstacle detection method based on YOLO

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CRISTIAN VASILESCU等: "Collaborative Object Recognition for Parking Management", 《THE 15TH INTERNATIONAL SCIENTIFIC CONFERENCE ELEARNING AND SOFTWARE FOR EDUCATION》, pages 194 - 201 *
JOSEPH REDMON等: "You Only Look Once: Unified, Real-Time Object Detection", 《ARXIV》, pages 2 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884831B (en) * 2021-02-02 2022-10-04 清华大学 Method for extracting long-term static characteristics of indoor parking lot based on probability mask
CN112884831A (en) * 2021-02-02 2021-06-01 清华大学 Method for extracting long-term static characteristics of indoor parking lot based on probability mask
CN113205059A (en) * 2021-05-18 2021-08-03 北京纵目安驰智能科技有限公司 Parking space detection method, system, terminal and computer readable storage medium
CN113205059B (en) * 2021-05-18 2024-03-12 北京纵目安驰智能科技有限公司 Parking space detection method, system, terminal and computer readable storage medium
CN113264037A (en) * 2021-06-18 2021-08-17 安徽江淮汽车集团股份有限公司 Obstacle recognition method applied to automatic parking
CN113610056A (en) * 2021-08-31 2021-11-05 的卢技术有限公司 Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN113610056B (en) * 2021-08-31 2024-06-07 的卢技术有限公司 Obstacle detection method, obstacle detection device, electronic equipment and storage medium
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium
CN114802261A (en) * 2022-04-21 2022-07-29 合众新能源汽车有限公司 Parking control method, obstacle recognition model training method and device
CN114802261B (en) * 2022-04-21 2024-04-19 合众新能源汽车股份有限公司 Parking control method, obstacle recognition model training method and device
CN115100377A (en) * 2022-07-15 2022-09-23 小米汽车科技有限公司 Map construction method and device, vehicle, readable storage medium and chip
CN115100377B (en) * 2022-07-15 2024-06-11 小米汽车科技有限公司 Map construction method, device, vehicle, readable storage medium and chip
RU2785822C1 (en) * 2022-11-14 2022-12-14 Ольга Дмитриевна Миронова Way to warn about the presence of an obstacle on the way

Similar Documents

Publication Publication Date Title
CN111860072A (en) Parking control method and device, computer equipment and computer readable storage medium
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
CN111429514A (en) Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN111169468B (en) Automatic parking system and method
CN110879994A (en) Three-dimensional visual inspection detection method, system and device based on shape attention mechanism
CN112836633A (en) Parking space detection method and parking space detection system
CN112740225B (en) Method and device for determining road surface elements
CN115032651A (en) Target detection method based on fusion of laser radar and machine vision
CN115451964B (en) Ship scene simultaneous mapping and positioning method based on multi-mode mixing characteristics
CN113743385A (en) Unmanned ship water surface target detection method and device and unmanned ship
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN114648551B (en) Trajectory prediction method and apparatus
CN115147328A (en) Three-dimensional target detection method and device
CN115953747A (en) Vehicle-end target classification detection method and vehicle-end radar fusion equipment
CN116030130A (en) Hybrid semantic SLAM method in dynamic environment
CN113361528B (en) Multi-scale target detection method and system
CN114048536A (en) Road structure prediction and target detection method based on multitask neural network
US20220164595A1 (en) Method, electronic device and storage medium for vehicle localization
EP4293622A1 (en) Method for training neural network model and method for generating image
CN117115690A (en) Unmanned aerial vehicle traffic target detection method and system based on deep learning and shallow feature enhancement
CN114648639B (en) Target vehicle detection method, system and device
CN115861481A (en) SLAM system based on real-time dynamic object of laser inertia is got rid of
CN113624223B (en) Indoor parking lot map construction method and device
CN113901903A (en) Road identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination