CN108090455B - Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium - Google Patents

Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium

Info

Publication number
CN108090455B
CN108090455B (application CN201711439249.2A)
Authority
CN
China
Prior art keywords
pixel point
pixel points
pixel
parking space
space line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711439249.2A
Other languages
Chinese (zh)
Other versions
CN108090455A (en)
Inventor
吴子章
王凡
唐锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zongmu Anchi Intelligent Technology Co ltd
Original Assignee
Beijing Zongmu Anchi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zongmu Anchi Intelligent Technology Co ltd filed Critical Beijing Zongmu Anchi Intelligent Technology Co ltd
Priority to CN201711439249.2A priority Critical patent/CN108090455B/en
Publication of CN108090455A publication Critical patent/CN108090455A/en
Application granted granted Critical
Publication of CN108090455B publication Critical patent/CN108090455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a cascade-mechanism-based parking space line vertex positioning method, system, terminal and medium. The method comprises the following steps: inputting a semantically segmented binary image; traversing the pixel points of the binary image and finding the pixel points located at edges in the image; calculating a feature descriptor for each such pixel point with at least one level of circular template; filtering the image level by level using the calculated at least one level of feature descriptors; and clustering the pixel points, taking the pixel points that meet the clustering condition as candidate points of the parking space line vertices, and ending the process. The high-dimensional feature descriptor formed by combining the two levels of feature descriptors is more descriptive than traditional feature descriptors such as SIFT/SURF, and the advantage of hierarchical calculation is that most points only need to calculate the first-level feature descriptor, so the amount of calculation is significantly reduced.

Description

Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium
Technical Field
The invention relates to the technical field of vehicle-mounted electronics, in particular to a parking space line vertex positioning method, a system, a terminal and a medium based on a cascading mechanism.
Background
ADAS, also called active safety systems, mainly comprise electronic stability control (ESC), adaptive cruise control (ACC), lane departure warning (LDW), lane keeping assist (LKA), forward collision warning (FCW), door open warning (DOW), automatic emergency braking (AEB), traffic sign recognition (TSR), blind spot detection (BSD), night vision (NV), automatic parking systems (APS) and the like.
ADAS active safety systems are intended to recognize not only static objects, but also dynamic objects. Deep convolutional neural networks have been very successful in image recognition tasks. But the following problems still remain:
traversing a massive set of images for images with known features by means of several known features is very computationally intensive, and on hardware of the same processing capacity the traversal cycle is long. Shortening the time needed to traverse and extract features is therefore a problem that urgently needs to be solved.
Disclosure of Invention
In order to solve the above and other potential technical problems, the invention provides a cascade-mechanism-based parking space line vertex positioning method, system, terminal and medium. First, a two-stage parking space line vertex candidate filtering mechanism is used: the first-stage filtering can remove more than 95% of the points that are not parking space line vertices, and the first-stage feature descriptor used first is computed with a small template, so its precision is modest but its computational complexity is low, saving a large amount of computation compared with a large template. Second, the second-stage parking space line vertex filtering further refines the accurate positioning of the parking space line vertices on the basis of the first-stage result, using a large template to compute the second-stage feature descriptor of the parking space line vertices. The high-dimensional feature descriptor formed by combining the two stages of feature descriptors is more descriptive than traditional feature descriptors such as SIFT/SURF, and the advantage of hierarchical computation is that most points only need to compute the first-stage feature descriptor, so the amount of computation is significantly reduced.
A parking space line vertex positioning method based on a cascading mechanism comprises the following steps:
s01: inputting a semantic segmentation binary image;
s02: traversing pixel points on the semantic segmentation binary image, and finding out pixel points positioned at the edge of the image;
s03: calculating a feature descriptor of the pixel point by using at least one level of circular template;
s04: filtering the image level by level using the calculated at least one level of feature descriptors;
s05: clustering the pixel points, classifying the pixel points that meet the clustering condition as candidate points of the parking space line vertices, and ending the process.
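As an illustrative aid only (not part of the claimed subject matter), the cascade of steps S01-S05 can be sketched in Python as follows; the helper names level1_pass, level2_pass and cluster_candidates are hypothetical placeholders for the first-stage filter, the second-stage filter and the clustering step described below.

import numpy as np

def locate_slot_line_vertices(binary_mask, level1_pass, level2_pass, cluster_candidates):
    # binary_mask: 2-D array from semantic segmentation (non-zero = parking space line).
    # level1_pass / level2_pass: callables (mask, y, x) -> bool standing in for the
    # small-template and large-template constraints of the two descriptor levels.
    # cluster_candidates: callable mapping a list of (y, x) points to vertex candidates.

    # S02: keep only foreground pixels lying on an edge of the segmented line
    # (a foreground pixel with at least one background pixel in its 3x3 neighbourhood).
    ys, xs = np.nonzero(binary_mask)
    edge_points = []
    for y, x in zip(ys, xs):
        window = binary_mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if (window == 0).any():
            edge_points.append((y, x))

    # S03/S04: cascade - the cheap first-level filter rejects most points, so the
    # costly second-level filter only runs on the few survivors.
    survivors = [p for p in edge_points if level1_pass(binary_mask, *p)]
    survivors = [p for p in survivors if level2_pass(binary_mask, *p)]

    # S05: cluster the remaining pixels into parking space line vertex candidates.
    return cluster_candidates(survivors)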
Further, the feature descriptors comprise a first-stage feature descriptor and a second-stage feature descriptor; the first-stage feature descriptor is used to filter the image, and the second-stage feature descriptor then filters the result obtained by the first-stage filtering.
Further, the first level feature descriptor and the second level feature descriptor each include a modulo length constraint and an angular constraint of the feature vector.
Further, the first level feature descriptors and the second level feature descriptors further include location constraints.
Further, the specific steps of the first-level feature descriptor module-length constraint condition are as follows:
s031: setting a small template with pixel point matrix arrangement, and traversing the binary image by using the small template;
s032: dividing a small template into n dimensions by taking a central point as a circle center, distributing pixel points in the small template to n dimensions, calculating the number of the pixel points in each dimension, setting a dimension pixel point threshold value for each dimension by taking the number of the pixel points as an upper limit, reserving the vector if the number of the white pixel points is larger than the dimension pixel point threshold value, and filtering the vector if the number of the white pixel points is smaller than the dimension pixel point threshold value;
s033: screening out the pixel points after the first-stage feature descriptor module-length constraint.
Further, the specific steps of the first-level feature descriptor angle constraint condition are as follows:
s034: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s035: and screening out pixel points subjected to the angle constraint of the first-stage feature descriptors.
Further, the threshold range of the included angle in the angle constraint condition in the step S034 is 75-105 degrees.
Further, the specific steps of the first-level feature descriptor distance constraint include:
s036: taking the pixel points obtained by the angle constraint of the first-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than two pixel points, reserving the pixel point;
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is smaller than two pixel points, filtering the pixel point and jumping to the next pixel point in the pixel point set.
Further, the specific steps of the module length constraint of the second-level feature descriptors are as follows:
s041: setting a large template with pixel point matrix arrangement, traversing the result obtained by the first-stage feature descriptor screening by using the large template;
s042: dividing the large template into m dimensions with the center point as the circle center, distributing the pixel points in the large template among the m dimensions, calculating the number of pixel points in each dimension, setting a dimension pixel-point threshold for each dimension with that number as the upper limit, retaining the vector if the number of white pixel points is greater than the dimension pixel-point threshold, and filtering out the vector if the number of white pixel points is smaller than the dimension pixel-point threshold;
s043: screening out the pixel points after the second-level feature descriptor module-length constraint.
Further, the number of rows and columns of the pixel point matrix of the small template is smaller than that of the pixel point matrix of the large template.
Further, the specific step of the angle constraint of the second-level feature descriptors comprises the following steps:
s044: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s045: and screening out pixel points subjected to the angle constraint of the second-stage feature descriptors.
Further, the threshold range of the included angle in the angle constraint condition in step S044 is 75-105 degrees.
Further, the specific steps of the second-level feature descriptor distance constraint are as follows:
s046: taking the pixel points obtained by the angle constraint of the second-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than two pixel points, reserving the pixel point;
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is smaller than two pixel points, filtering the pixel point and jumping to the next pixel point in the pixel point set.
Further, the pixel point set formed by the pixel points remaining after the second-level feature descriptor screening is clustered according to the distance between the pixel point positions, and the pixel point areas that meet the parking space line corner clustering condition during clustering are identified as candidate points of the parking space line vertices.
A parking space line vertex positioning system based on a cascading mechanism comprises a semantic segmentation module, an image binarization module, a first-level feature description module, a second-level feature description module and a clustering module;
the semantic segmentation module is used for segmenting pixel points in the image, so that all the pixel points in the same object are positioned in one semantic segmentation module;
the image binarization module is used for processing the original image to obtain a binarized image; the method comprises the steps of carrying out a first treatment on the surface of the
The first-stage feature description module screens the binarized image by using a small template and is used for preliminarily filtering pixel points which are not parking space line vertexes in the binarized image; the method comprises the steps of carrying out a first treatment on the surface of the
The second-level feature description module uses a large template to screen pixel points obtained after filtering of a small template, and is used for further filtering the pixel points which are not parking space line vertexes in the binarized image;
the clustering module is used for clustering the pixel point set formed by the residual pixel points after the second-level feature descriptors are screened according to the distance between the positions of each pixel point, and identifying the pixel point areas meeting the parking space line corner point clustering condition in the clustering process as parking space line vertices.
The first-stage feature description module and the second-stage feature description module both comprise a module length constraint module and an angle constraint module. The first-stage feature description module and the second-stage feature description module further comprise a distance constraint module.
A cascade-mechanism-based parking space line vertex positioning vehicle-mounted terminal comprises a processor and a memory, wherein the memory stores program instructions, and the processor runs the program instructions to implement the steps of the above method.
A computer-readable storage medium having stored thereon a computer program, characterized by: the program when executed by a processor performs the steps of the method described above.
As described above, the present invention has the following advantageous effects:
first, a two-stage parking space line vertex candidate filtering mechanism is used: the first-stage filtering can remove more than 95% of the points that are not parking space line vertices, and the first-stage feature descriptor used first is computed with a small template, so its precision is modest but its computational complexity is low, saving a large amount of computation compared with a large template.
Second, the second-stage parking space line vertex filtering further refines the accurate positioning of the parking space line vertices on the basis of the first-stage result, using a large template to compute the 24-dimensional second-stage feature descriptor of the parking space line vertices. The feature descriptor formed by combining the two stages of descriptors thus has 24 x 24 = 576 dimensions, and this high-dimensional feature descriptor is more descriptive than traditional feature descriptors such as SIFT/SURF. The advantage of hierarchical computation is that most points only need to compute the first-stage feature descriptor, i.e. the 24-dimensional features.
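A back-of-the-envelope illustration of the saving described above, assuming (as a simplification of this illustration) that the per-point cost is proportional to the template area and using the more-than-95% first-stage rejection rate stated in the text:

small_area = 19 * 19   # first-stage template: 361 pixel reads per candidate point
large_area = 29 * 29   # second-stage template: 841 pixel reads per candidate point
survival_rate = 0.05   # the text states that over 95% of points are rejected at stage one

cascade_cost = small_area + survival_rate * large_area   # ~403 reads per point on average
flat_cost = small_area + large_area                      # 1202 reads per point without the cascade
print(cascade_cost, flat_cost, round(flat_cost / cascade_cost, 2))   # roughly a threefold saving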
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a specific flow chart of the present invention.
Fig. 2 shows a flow chart of an embodiment of the invention.
FIG. 3 is a schematic diagram of a small template used for first level feature description sub-template length constraint in one embodiment.
FIG. 4 is a schematic diagram of a large template used for second level feature description sub-template length constraints in one embodiment.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure of this specification, which describes the embodiments of the invention through specific examples. The invention may also be practiced or applied through other different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other as long as there is no conflict.
It should be understood that the structures, proportions and sizes shown in the drawings are only used to illustrate the content of the specification and are not intended to limit the conditions under which the invention can be implemented; modifications of the structures, changes of the proportional relations or adjustments of the sizes that do not affect the effects achievable by the invention and the purposes it can attain shall still fall within the scope covered by the technical content disclosed by the invention. Likewise, terms such as "upper," "lower," "left," "right," "middle" and "a" recited in this specification are merely for clarity of description and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relations, without substantial changes to the technical content, shall also be regarded as within the implementable scope of the invention.
Referring to fig. 1 to 4, a parking space line vertex positioning method based on a cascading mechanism includes the following steps:
a parking space line vertex positioning method based on a cascading mechanism comprises the following steps:
s01: inputting a semantic segmentation binary image;
s02: traversing pixel points on the semantic segmentation binary image, and finding out pixel points positioned at the edge of the image;
s03: calculating a feature descriptor of the pixel point by using at least one level of circular template;
s04: filtering the image level by level using the calculated at least one level of feature descriptors;
s05: clustering the pixel points, classifying the pixel points that meet the clustering condition as candidate points of the parking space line vertices, and ending the process.
As a preferred embodiment, the feature descriptors include a first-level feature descriptor and a second-level feature descriptor; the image is first filtered by the first-level feature descriptor, and the result is then filtered by the second-level feature descriptor.
As a preferred embodiment, the first level feature descriptors and the second level feature descriptors each include a modulo length constraint and an angular constraint of the feature vector.
As a preferred embodiment, the first level feature descriptors and second level feature descriptors further include location constraints.
As a preferred embodiment, the specific steps of the first-level feature descriptor module-length constraint are as follows:
s031: setting a small template with pixel point matrix arrangement, and traversing the binary image by using the small template;
s032: dividing the 19 x 19-pixel small template into 24 dimensions with the central point as the circle center, evenly distributing the pixel points of the small template among the 24 dimensions so that each dimension contains about 10 pixel points, and setting a dimension pixel-point threshold for each dimension with that number as the upper limit; assuming the dimension pixel-point threshold is 8, the vector is retained if the number of white pixel points is greater than the threshold 8, and filtered out if it is smaller than the threshold 8; the size of the dimension pixel-point threshold is determined according to the quality of the image.
S033: screening out the pixel points after the first-stage feature descriptor module-length constraint.
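As a non-limiting sketch of one possible implementation of this module-length constraint in Python (the function name sector_count_pass, the use of an inscribed circle of radius 9 for the 19 x 19 template, and the rule of retaining a point when at least one sector exceeds the threshold are assumptions of this illustration, not statements of the claimed method):

import numpy as np

def sector_count_pass(mask, cy, cx, radius=9, n_sectors=24, sector_threshold=8):
    # Count white (foreground) pixels of the circular template of the given radius,
    # centred on (cy, cx), in each of n_sectors equal angular sectors.  The vector
    # "counts" plays the role of the n-dimensional (here 24-dimensional) descriptor.
    counts = np.zeros(n_sectors, dtype=int)
    h, w = mask.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            if dy * dy + dx * dx > radius * radius:
                continue  # outside the circular template
            y, x = cy + dy, cx + dx
            if 0 <= y < h and 0 <= x < w and mask[y, x] > 0:
                sector = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * n_sectors) % n_sectors
                counts[sector] += 1
    # Assumed combination rule: the candidate passes if at least one sector's
    # white-pixel count exceeds the per-dimension threshold.
    return bool((counts > sector_threshold).any())

# The second-stage constraint can reuse the same routine with the larger template,
# e.g. sector_count_pass(mask, y, x, radius=14, n_sectors=24, sector_threshold=15).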
As a preferred embodiment, the specific steps of the first-level feature descriptor angle constraint condition are as follows:
s034: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s035: and screening out pixel points subjected to the angle constraint of the first-stage feature descriptors.
As a preferred embodiment, the threshold range of the included angle in the angle constraint condition in step S034 is 75-105 degrees.
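A minimal Python sketch of this angle test (the interpretation that the two vectors are the dominant edge directions meeting at the candidate pixel is an assumption of this illustration; the 75-105 degree window is taken from the embodiment):

import math

def angle_pass(v1, v2, low_deg=75.0, high_deg=105.0):
    # Keep the pair of vectors only if their included angle lies within [low_deg, high_deg].
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    if n1 == 0.0 or n2 == 0.0:
        return False
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return low_deg <= angle <= high_deg

print(angle_pass((1.0, 0.0), (0.1, 1.0)))   # True: ~84 degrees, roughly perpendicular edges
print(angle_pass((1.0, 0.0), (1.0, 0.2)))   # False: ~11 degrees, nearly parallel edges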
As a preferred embodiment, the specific steps of the first-level feature descriptor distance constraint include:
s036: taking the pixel points obtained by the angle constraint of the first-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than a distance threshold value, reserving the pixel point;
if the distance between the position of the pixel and the edge of the binary image parking space line where the pixel is located is smaller than a distance threshold value, filtering the pixel and jumping to the next pixel in the pixel set.
Further, the upper limit of the distance threshold of the first-stage feature descriptor distance constraint is half of the width of a parking space line, and the lower limit of the distance threshold of the first-stage feature descriptor distance constraint is one pixel point.
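An illustrative sketch of this distance constraint using a Euclidean distance transform (the use of scipy's distance_transform_edt and the example threshold of 2 pixels, which lies between the stated bounds of one pixel and half the line width, are choices of this illustration):

import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_pass(mask, points, dist_threshold=2.0):
    # Distance from every foreground pixel to the nearest background pixel,
    # i.e. to the edge of the segmented parking space line.
    dist_to_edge = distance_transform_edt(mask > 0)
    # Keep only candidates lying strictly farther from the line edge than the threshold.
    return [(y, x) for (y, x) in points if dist_to_edge[y, x] > dist_threshold]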
As a preferred embodiment, the module length constraint of the second-level feature descriptors specifically includes the steps of:
s041: setting a large template with 29 x 29 pixel points arranged in a matrix, traversing the result obtained by first-stage feature descriptor screening by using the large template;
s042: dividing the 29 x 29-pixel large template into 24 dimensions with the central point as the circle center, evenly distributing the pixel points of the large template among the 24 dimensions so that each dimension contains about 15 pixel points, and setting a dimension pixel-point threshold for each dimension with that number as the upper limit; assuming the dimension pixel-point threshold is 15, the vector is retained if the number of white pixel points is greater than the threshold 15, and filtered out if it is smaller than the threshold 15; the size of the dimension pixel-point threshold is determined according to the quality of the image.
S043: screening out the pixel points after the second-stage feature descriptor module-length constraint.
Further, the number of rows and columns of the pixel point matrix of the small template is smaller than that of the pixel point matrix of the large template.
Further, the specific step of the angle constraint of the second-level feature descriptors comprises the following steps:
s044: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s045: and screening out pixel points subjected to the angle constraint of the second-stage feature descriptors.
As a preferred embodiment, the threshold range of the included angle in the angle constraint condition in step S044 is 75-105 degrees.
As a preferred embodiment, the specific steps of the second-level feature descriptor distance constraint include:
s046: taking the pixel points obtained by the angle constraint of the second-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than a distance threshold value, reserving the pixel point;
if the distance between the position of the pixel and the edge of the binary image parking space line where the pixel is located is smaller than a distance threshold value, filtering the pixel and jumping to the next pixel in the pixel set.
Further, the upper limit of the distance threshold of the distance constraint of the second-level feature descriptors is half of the width of the parking space line, and the lower limit of the distance threshold of the distance constraint of the second-level feature descriptors is one pixel point.
As a preferred embodiment, the pixel point set formed by the pixel points remaining after the second-level feature descriptor screening is clustered according to the distance between the pixel point positions, and the pixel point areas that meet the parking space line corner clustering condition during clustering are identified as candidate points of the parking space line vertices.
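A simple position-based clustering sketch for this step (the linking radius, the minimum cluster size and the choice of the centroid as the candidate vertex are assumptions of this illustration; the text only requires clustering by inter-point distance and a corner-clustering condition on the resulting regions):

import numpy as np

def cluster_candidates(points, link_radius=5.0, min_cluster_size=3):
    # Group surviving pixels whose mutual distance is at most link_radius,
    # then report the centroid of each sufficiently large group as a vertex candidate.
    unvisited = set((int(y), int(x)) for y, x in points)
    vertices = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            cy, cx = frontier.pop()
            nearby = [p for p in unvisited
                      if (p[0] - cy) ** 2 + (p[1] - cx) ** 2 <= link_radius ** 2]
            for p in nearby:
                unvisited.remove(p)
                cluster.append(p)
                frontier.append(p)
        if len(cluster) >= min_cluster_size:
            vertices.append(tuple(np.mean(cluster, axis=0)))
    return vertices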
A parking space line vertex positioning system based on a cascading mechanism comprises a semantic segmentation module, an image binarization module, a first-level feature description module, a second-level feature description module and a clustering module;
the semantic segmentation module is used for segmenting the pixel points in the image, so that all the pixel points belonging to the same object fall into the same semantically segmented region;
the image binarization module is used for processing the original image to obtain a binarized image; the method comprises the steps of carrying out a first treatment on the surface of the
The first-stage feature description module screens the binarized image by using a small template and is used for preliminarily filtering pixel points which are not parking space line vertexes in the binarized image; the method comprises the steps of carrying out a first treatment on the surface of the
The second-level feature description module uses a large template to screen pixel points obtained after filtering of a small template, and is used for further filtering the pixel points which are not parking space line vertexes in the binarized image;
the clustering module is used for clustering the pixel point set formed by the residual pixel points after the second-level feature descriptors are screened according to the distance between the positions of each pixel point, and identifying the pixel point areas meeting the parking space line corner point clustering condition in the clustering process as parking space line vertices.
The first-stage feature description module and the second-stage feature description module both comprise a module length constraint module and an angle constraint module. The first-stage feature description module and the second-stage feature description module further comprise a distance constraint module.
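As a non-limiting composition sketch of the five modules in Python (all class, attribute and method names are hypothetical and merely illustrate how the modules feed into one another):

class SlotLineVertexLocator:
    def __init__(self, segmenter, binarizer, level1_filter, level2_filter, clusterer):
        self.segmenter = segmenter          # semantic segmentation module
        self.binarizer = binarizer          # image binarization module
        self.level1_filter = level1_filter  # first-stage (small-template) feature description module
        self.level2_filter = level2_filter  # second-stage (large-template) feature description module
        self.clusterer = clusterer          # clustering module

    def locate(self, image):
        # Pipeline: segment -> binarize -> coarse filter -> fine filter -> cluster.
        mask = self.binarizer(self.segmenter(image))
        candidates = self.level1_filter(mask)
        candidates = self.level2_filter(mask, candidates)
        return self.clusterer(candidates)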
A cascade-mechanism-based parking space line vertex positioning vehicle-mounted terminal comprises a processor and a memory, wherein the memory stores program instructions, and the processor runs the program instructions to implement the steps of the above method.
A computer-readable storage medium having stored thereon a computer program, characterized by: the program when executed by a processor performs the steps of the method described above.
The above embodiments merely illustrate the principles of the present invention and its effects, and are not intended to limit the invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.

Claims (9)

1. The parking space line vertex positioning method based on the cascade mechanism is characterized by comprising the following steps of:
s01: inputting a semantic segmentation binary image;
s02: traversing pixel points on the semantic segmentation binary image, and finding out pixel points positioned at the edge of the image;
s03: calculating a feature descriptor of the pixel point by using at least one level of circular template;
s04: filtering the image level by level using the calculated at least one level of feature descriptors;
s05: clustering the pixel points, classifying the pixel points that meet the clustering condition as candidate points of the parking space line vertices, and ending the process;
the feature descriptors comprise a first-stage feature descriptor and a second-stage feature descriptor; the first-stage feature descriptor is used to filter the image, and the second-stage feature descriptor then filters the result obtained by the first-stage filtering.
2. The cascade mechanism-based parking space line vertex positioning method according to claim 1, wherein the first-stage feature descriptors and the second-stage feature descriptors comprise a modular length constraint condition and an angle constraint condition of feature vectors, and the modular length constraint condition is constrained before the angle constraint condition is constrained.
3. The cascade mechanism-based parking space line vertex positioning method according to claim 2, wherein the first-stage feature descriptors and the second-stage feature descriptors further comprise position constraint conditions, and the position constraint is applied after the module-length constraint and the angle constraint have been completed.
4. The cascade mechanism-based parking space line vertex positioning method according to claim 2, wherein the specific steps of the first-stage feature descriptor module-length constraint condition are as follows:
s031: setting a small template with pixel point matrix arrangement, and traversing the binary image by using the small template;
s032: dividing a small template into n dimensions by taking a central point as a circle center, distributing pixel points in the small template to n dimensions, calculating the number of the pixel points in each dimension, setting a dimension pixel point threshold value for each dimension by taking the number of the pixel points as an upper limit, reserving the vector if the number of the white pixel points is larger than the dimension pixel point threshold value, and filtering the vector if the number of the white pixel points is smaller than the dimension pixel point threshold value;
s033: screening out pixel points after the first-stage characteristic description sub-module length constraint;
the specific steps of the mode length constraint condition of the second-level feature descriptors are as follows:
s041: setting a large template with pixel point matrix arrangement, traversing the result obtained by the first-stage feature descriptor screening by using the large template;
s042: dividing the large template into m dimensions with the center point as the circle center, distributing the pixel points in the large template among the m dimensions, calculating the number of pixel points in each dimension, setting a dimension pixel-point threshold for each dimension with that number as the upper limit, retaining the vector if the number of white pixel points is greater than the dimension pixel-point threshold, and filtering out the vector if the number of white pixel points is smaller than the dimension pixel-point threshold;
s043: screening out pixel points after the second-level characteristic description sub-module length constraint;
the number of rows and columns of the pixel point matrix of the small template is smaller than that of the pixel point matrix of the large template.
5. The method for positioning the parking space line vertex based on the cascade mechanism according to claim 2, wherein the specific steps of the first-level feature descriptor angle constraint condition are as follows:
s034: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s035: screening out pixel points subjected to the angle constraint of the first-stage feature descriptors;
the specific step of the angle constraint of the second-level feature descriptors comprises the following steps:
s044: calculating the included angle between the vectors of the pixel points screened out by the module-length constraint; if the included angle meets the included-angle threshold range, the two vectors are retained, and if it does not, the two vectors are filtered out;
s045: and screening out pixel points subjected to the angle constraint of the second-stage feature descriptors.
6. The method for positioning the parking space line vertex based on the cascade mechanism according to claim 3, wherein the specific steps of the first-level feature descriptor distance constraint are as follows:
s036: taking the pixel points obtained by the angle constraint of the first-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than the distance threshold, reserving the pixel point;
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is smaller than the distance threshold, filtering the pixel point and jumping to the next pixel point in the pixel point set;
the specific steps of the second-level feature descriptor distance constraint are as follows:
s046: taking the pixel points obtained by the angle constraint of the second-level feature descriptors and forming them into a pixel point set, and judging the distance between the position of each pixel point in the set and the edge of the parking space line in the binary image where the pixel point is located,
if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is greater than the distance threshold, reserving the pixel point;
and if the distance between the position of the pixel point and the edge of the binary image parking space line where the pixel point is located is smaller than the distance threshold, filtering the pixel point and jumping to the next pixel point in the pixel point set.
7. The parking space line vertex positioning system based on the cascading mechanism is characterized by comprising a semantic segmentation module, an image binarization module, a first-level feature description module, a second-level feature description module and a clustering module;
the semantic segmentation module is used for segmenting pixel points in the image, so that all the pixel points in the same object are positioned in one semantic segmentation module;
the image binarization module is used for processing the original image to obtain a binarized image;
the first-stage feature description module screens the binarized image by using a small template and is used for preliminarily filtering pixel points which are not parking space line vertexes in the binarized image;
the second-level feature description module uses a large template to screen pixel points obtained after filtering of a small template, and is used for further filtering the pixel points which are not parking space line vertexes in the binarized image;
the clustering module is used for clustering the pixel point set formed by the residual pixel points screened by the second-level characteristic description module according to the distance between the positions of each pixel point, and identifying the pixel point areas meeting the parking space line corner point clustering condition in the clustering process as parking space line vertices;
the first-stage feature description module and the second-stage feature description module comprise a module length constraint module, an angle constraint module and a distance constraint module.
8. A parking space line vertex positioning terminal based on a cascading mechanism, comprising a processor and a memory, wherein the memory stores program instructions, and the processor executes the program instructions to implement the steps in the method according to any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program, characterized by: the program, when executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN201711439249.2A 2017-12-27 2017-12-27 Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium Active CN108090455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711439249.2A CN108090455B (en) 2017-12-27 2017-12-27 Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711439249.2A CN108090455B (en) 2017-12-27 2017-12-27 Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium

Publications (2)

Publication Number Publication Date
CN108090455A CN108090455A (en) 2018-05-29
CN108090455B (en) 2023-08-22

Family

ID=62179562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711439249.2A Active CN108090455B (en) 2017-12-27 2017-12-27 Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium

Country Status (1)

Country Link
CN (1) CN108090455B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543609B (en) * 2018-11-22 2022-04-12 四川长虹电器股份有限公司 Method for detecting reversing distance
CN110796063B (en) * 2019-10-24 2022-09-09 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
CN110969655B (en) * 2019-10-24 2023-08-18 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and vehicle for detecting parking space
CN111274974B (en) * 2020-01-21 2023-09-01 阿波罗智能技术(北京)有限公司 Positioning element detection method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366602A (en) * 2012-03-29 2013-10-23 施乐公司 Method of determining parking lot occupancy from digital camera images
CN104916163A (en) * 2015-06-29 2015-09-16 惠州华阳通用电子有限公司 Parking space detection method
CN104933409A (en) * 2015-06-12 2015-09-23 北京理工大学 Parking space identification method based on point and line features of panoramic image
CN107153823A (en) * 2017-05-22 2017-09-12 北京北昂科技有限公司 A kind of view-based access control model associates the lane line feature extracting method of double space
CN107424116A (en) * 2017-07-03 2017-12-01 浙江零跑科技有限公司 Position detecting method of parking based on side ring depending on camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101104609B1 (en) * 2007-10-26 2012-01-12 Mando Corporation Method and System for Recognizing Target Parking Location

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366602A (en) * 2012-03-29 2013-10-23 施乐公司 Method of determining parking lot occupancy from digital camera images
CN104933409A (en) * 2015-06-12 2015-09-23 北京理工大学 Parking space identification method based on point and line features of panoramic image
CN104916163A (en) * 2015-06-29 2015-09-16 惠州华阳通用电子有限公司 Parking space detection method
CN107153823A (en) * 2017-05-22 2017-09-12 北京北昂科技有限公司 A kind of view-based access control model associates the lane line feature extracting method of double space
CN107424116A (en) * 2017-07-03 2017-12-01 浙江零跑科技有限公司 Position detecting method of parking based on side ring depending on camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jae Kyu Suhr et al. Fully-automatic Recognition of Various Parking Slot Markings in Around View Monitor (AVM) Image Sequences. 2012 15th International IEEE Conference on Intelligent Transportation Systems, 2012, pp. 1294-1299. *

Also Published As

Publication number Publication date
CN108090455A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN108090455B (en) Cascade mechanism-based parking space line vertex positioning method, system, terminal and medium
Rezaei et al. Robust vehicle detection and distance estimation under challenging lighting conditions
CN106548127B (en) Image recognition method
Sun et al. A real-time precrash vehicle detection system
US9183447B1 (en) Object detection using candidate object alignment
CN109284664B (en) Driver assistance system and guardrail detection method
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
US8836812B2 (en) Image processing device, image processing method, and image processing program
Kuang et al. Bayes saliency-based object proposal generator for nighttime traffic images
CN111435436B (en) Perimeter anti-intrusion method and device based on target position
Hechri et al. Robust road lanes and traffic signs recognition for driver assistance system
Shah et al. OCR-based chassis-number recognition using artificial neural networks
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN111284501A (en) Apparatus and method for managing driving model based on object recognition, and vehicle driving control apparatus using the same
CN111079613A (en) Gesture recognition method and apparatus, electronic device, and storage medium
Parvin et al. Vehicle number plate detection and recognition techniques: a review
Jaiswal et al. Comparative analysis of CCTV video image processing techniques and application: a survey
CN116434160A (en) Expressway casting object detection method and device based on background model and tracking
JP2008251029A (en) Character recognition device and license plate recognition system
KR20160067631A (en) Method for recognizing vehicle plate
CN112733652A (en) Image target identification method and device, computer equipment and readable storage medium
TWI603268B (en) Image processing system and method for license plate recognition
JP2020098389A (en) Road sign recognition device and program thereof
Negri Estimating the queue length at street intersections by using a movement feature space approach
Ali et al. Real-time lane markings recognition based on seed-fill algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant