CN114511841A - Multi-sensor fusion free parking space detection method - Google Patents

Multi-sensor fusion free parking space detection method

Info

Publication number
CN114511841A
CN114511841A
Authority
CN
China
Prior art keywords
parking space
coordinates
free parking
points
idle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210401936.XA
Other languages
Chinese (zh)
Other versions
CN114511841B (en)
Inventor
朱勇
赵明来
李鸿岳
赵彩智
吴毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yutong Bus Co Ltd
Original Assignee
Shenzhen Yutong Zhilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yutong Zhilian Technology Co ltd filed Critical Shenzhen Yutong Zhilian Technology Co ltd
Priority to CN202210401936.XA priority Critical patent/CN114511841B/en
Publication of CN114511841A publication Critical patent/CN114511841A/en
Application granted granted Critical
Publication of CN114511841B publication Critical patent/CN114511841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/047 Fisheye or wide-angle transformations

Abstract

The invention relates to the technical field of advanced driver assistance, and in particular to a multi-sensor fusion free parking space detection method comprising the following steps: A. detect distance information with an ultrasonic radar and extract the initial position of a free parking space; B. acquire fisheye camera images and carry out SLAM mapping and parking space line identification; C. extract the final free parking space position through a multi-sensor fusion algorithm; D. display the identified free parking spaces on the central control screen. By fusing multiple sensors, the invention improves the accuracy and robustness of the coordinate estimation of the two tail points of the parking space line, and it adapts simultaneously to indoor garages, outdoor parking lots, and parking spaces with no or unclear painted lines. This improves user experience and satisfaction, gives the method broad commercial value, and allows it to be widely applied in semi-automatic and autonomous parking systems.

Description

Multi-sensor fusion free parking space detection method
Technical Field
The invention relates to the technical field of advanced driver assistance, and in particular to a multi-sensor fusion free parking space detection method.
Background
With economic development, living standards have gradually risen and more and more families own cars. The growing number of vehicles makes parking difficult: neighbouring cars are easily scraped during parking, and parked cars are often not aligned to a uniform standard.
Intelligent parking is therefore a future development direction. The most common scheme on the market today uses an ultrasonic radar on the side of the vehicle body to identify free parking spaces by distance measurement alone, but radar-only information is limited by many factors, for example the vehicle's driving posture not being parallel to the target space, the vehicles next to the target space being parked irregularly, or the vehicle's driving speed. Identifying free spaces purely from the visual parking space lines also has problems: even after distortion correction of distant image regions, the coordinates of the two tail points of the parking space line are usually estimated inaccurately.
Disclosure of Invention
The invention aims to remedy these shortcomings of the background art by providing a multi-sensor fusion free parking space detection method that improves the parking space recognition rate and accuracy, suits a variety of parking spaces, and can be widely applied in semi-automatic and autonomous parking systems.
To achieve this purpose, the invention provides the following technical scheme: a multi-sensor fusion free parking space detection method comprising the following steps:
A. first, detect distance information with an ultrasonic radar and extract the initial position of a free parking space;
B. acquire fisheye camera images, and carry out SLAM mapping and parking space line identification;
C. extract the final free parking space position through a multi-sensor fusion algorithm;
D. display the identified free parking spaces on the central control screen.
Preferably, step A comprises the following steps:
A1, drive the vehicle forward parallel to the parking spaces and detect obstacle distances with the side ultrasonic radar;
A2, divide the obstacle distances from step A1 into 75-85 equal-width intervals d_i and store them in an array;
A3, count the consecutive intervals in the array from step A2 whose obstacle distance value is greater than or equal to 5 m, and record the maximum run length maxLen;
A4, judge from the maximum run length maxLen of step A3 whether a free parking space exists: if maxLen corresponds to more than 2 m, the run of intervals d_i belonging to maxLen is the initial position of the free parking space.
Step B comprises the following steps:
B1, if acquisition of the initial free parking space position in step A fails, do not proceed, and continue executing step A;
B2, if the initial free parking space position was successfully acquired in step A, execute the following step B3;
B3, acquire a fisheye camera image and perform camera calibration and distortion correction to obtain a distortion-corrected image;
B4, using the camera calibration parameters from step B3, first apply a top-view transformation to the distortion-corrected image from step B3 to obtain a bird's-eye view, then identify the parking space line around the initial free parking space position from step A4 in the bird's-eye view to obtain the free parking space;
B5, using the consecutive distortion-corrected image frames from step B3, build a local map with SLAM to obtain a local map around the vehicle body.
Step C comprises the following steps:
C1, project the free parking space coordinate points extracted in step B4 onto the local map from step B5 to obtain the projected point coordinates;
C2, calculate the free parking space of the local map from step B5 using the projected point coordinates obtained in step C1;
C3, learn the fusion parameters of the free parking space coordinates from step B5 and the free parking space coordinates from step C2 with an MLP (multi-layer perceptron) network;
C4, fuse the free parking space coordinates from step B5 and from step C2 using the fusion parameters from step C3;
C5, obtain the fused free parking space coordinates.
Preferably, in step B3 the fisheye image resolution is 1280 × 720. During distortion correction, the camera is first calibrated to obtain the intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic matrix R, and the fisheye image is then corrected with the radial distortion formula:
x_u = x_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
y_u = y_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
where (x_d, y_d) is the original position of a distorted point on the camera sensor, (x_u, y_u) is its new position after distortion correction, and r is its radius from the camera sensor's centre point.
Preferably, in step B4, after the bird's-eye view is obtained, parking space line identification can be carried out as follows:
01. collect parking-space corner point samples and train with yolov3;
02. detect the corner points in the top view with the trained yolov3 model, and connect a pair of corner points (bpt1, bpt2) with a straight line as the entrance of a parking space;
03. taking a parking space on the right as an example, rotate the vector from bpt1 to bpt2 anticlockwise around bpt1 and estimate the corner point bpt3 from the parking space length; estimate corner point bpt4 in the same way;
04. connect bpt1, bpt2, bpt3 and bpt4 to obtain the parking space line.
Preferably, in step B5, the specific steps of local mapping are as follows:
B5.1, extract Harris corner points while the parking space line is being obtained, and perform visual tracking;
B5.2, obtain initial values in a loosely coupled manner, match them with the feature points from step B5.1 and triangulate; solve the poses of all frames in the sliding window and the inverse depths of the landmark points, align with the IMU pre-integration, and recover the alignment scale s, gravity g, IMU velocity v and gyroscope bias bg;
B5.3, construct the constraint equations of the IMU constraints and visual constraints, and perform back-end nonlinear optimisation with tight coupling to obtain an optimal local map;
B5.4, search the local map for the optimal free parking space.
Preferably, in step C1, the two entrance point coordinates bpt1, bpt2 and the two tail point coordinates bpt3, bpt4 from step B4 are each projected onto the local map from step B5, giving the four points bspt1, bspt2, bspt3 and bspt4;
in step C2, the local map from step B5 estimates the two tail point coordinates spt3 and spt4 from the entrance point projections bspt1 and bspt2 of step C1 and the parking space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
where length denotes the parking space length applied along the space direction, thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5;
in step C3, when the MLP (multi-layer perceptron) network learns the fusion parameters of the free parking space coordinates from step B5 and from step C2, a large amount of real free parking space coordinate data, denoted GT, is labelled in advance, and training then uses the sum-of-squared-errors loss function E:
E = Σ (GT − y)^2
where w is the weight of the perceptron, i.e. the fusion parameter, a is the free parking space coordinates from step B5, b is the free parking space coordinates from step C2, and y is their fused result;
in step C4, the fusion parameter w from step C3 fuses the two tail coordinate points of the free parking space from step B5 with those from step C2 using the formula:
y = w·a + (1 − w)·b
where a is the projections bspt3, bspt4 of the tail points bpt3, bpt4 from step B4 onto the local map of step B5, b is the tail points spt3, spt4 estimated in step C2, and the fused free parking space y gives the two tail points fpt3 and fpt4;
in step C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4.
Preferably, in step A2 each interval d_i has a length of 0.05-0.15 m.
Preferably, in step D, the finally identified free parking space rectangles are displayed on the central control screen, with at most 6 parking spaces shown on the two sides of the vehicle body.
Compared with the prior art, the invention has the following beneficial effects:
the invention tightly fuses visual perception with ultrasonic radar perception. Introducing rich visual information makes the acquired free parking space position more accurate, and the multi-sensor fusion improves the accuracy and robustness of the coordinate estimation of the two tail points of the parking space line. The method adapts simultaneously to indoor garages, outdoor parking lots, and parking spaces with no or unclear painted lines, thereby improving user experience and satisfaction, and it has broad commercial value.
Drawings
FIG. 1 is a schematic view of various types of parking spaces;
FIG. 2 is a flow chart of the steps of the present invention;
FIG. 3 is a schematic diagram of a free parking space detected by ultrasonic radar;
FIG. 4 is a schematic diagram of fisheye image distortion correction;
FIG. 5 is a schematic view of parking space line identification;
FIG. 6 is a schematic diagram of a SLAM free parking space;
FIG. 7 is a fusion schematic diagram of the parking-space-line free parking spaces and the SLAM free parking spaces.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A multi-sensor fusion free parking space detection method comprises the following steps:
A. first, detect distance information with the ultrasonic radar and extract the initial position of a free parking space. The steps for extracting the free parking space with the ultrasonic radar are as follows:
A1, drive the vehicle forward parallel to the parking spaces and detect obstacle distances with the side ultrasonic radar;
A2, divide the obstacle distances from step A1 into 80 equal-width intervals d_i, where the subscript i is the index of the interval, and store them in an array; each interval d_i spans 0.1 m;
A3, count the consecutive intervals in the array from step A2 whose obstacle distance value is greater than or equal to 5 m, and record the maximum run length maxLen;
A4, judge from the maximum run length maxLen of step A3 whether a free parking space exists: if maxLen corresponds to more than 2 m, the run of intervals d_i belonging to maxLen is the initial position of the free parking space; see FIG. 3 (an illustrative sketch of steps A1-A4 follows).
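The run-length logic of steps A1-A4 can be condensed into a short Python sketch. This is an illustrative reading of the steps above rather than the patent's implementation; the constant names (BIN_LEN_M, FREE_DIST_M, MIN_SPACE_M) and the example readings are assumptions.

```python
# Sketch of steps A1-A4: bin the side-radar distances into 80 intervals of
# 0.1 m of travel each, find the longest run of intervals whose distance
# is >= 5 m, and accept it as an initial free space when the run spans
# more than 2 m.
from typing import Optional, Tuple

BIN_LEN_M = 0.1      # length of travel covered by one interval d_i
FREE_DIST_M = 5.0    # obstacle distance treated as "no obstacle"
MIN_SPACE_M = 2.0    # minimum run length to count as a free space

def initial_free_space(d: list) -> Optional[Tuple[int, int]]:
    """Return (start, end) interval indices of the longest free run, or None."""
    best_len, best_start = 0, -1
    run_len, run_start = 0, 0
    for i, dist in enumerate(d):
        if dist >= FREE_DIST_M:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_len, best_start = run_len, run_start
        else:
            run_len = 0
    if best_len * BIN_LEN_M > MIN_SPACE_M:   # the maxLen check of step A4
        return best_start, best_start + best_len - 1
    return None

# Example: 80 bins, obstacles at ~1 m except a 2.5 m gap with clear readings.
bins = [1.0] * 80
bins[30:55] = [6.0] * 25
print(initial_free_space(bins))  # -> (30, 54)
```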
B. acquire fisheye camera images and complete SLAM mapping and parking space line identification, with the following steps:
B1, if acquisition of the initial free parking space position in step A fails, do not proceed, and continue executing step A;
B2, if the initial free parking space position was successfully acquired in step A, execute the following step B3;
B3, acquire a fisheye camera image, calibrate the camera and perform distortion correction to obtain a distortion-corrected image; see FIG. 4.
The fisheye image resolution is 1280 × 720. During distortion correction, the camera is first calibrated to obtain the intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic matrix R, and the fisheye image is then corrected with the radial distortion formula:
x_u = x_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
y_u = y_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
where (x_d, y_d) is the original position of a distorted point on the camera sensor, (x_u, y_u) is its new position after distortion correction, and r is its radius from the camera sensor's centre point.
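As an illustration of the radial model above, the following Python sketch corrects point coordinates (a full image remap would additionally invert the mapping per pixel). The intrinsic matrix and the coefficient values below are placeholders, not calibration results from the patent.

```python
# Minimal sketch of the step B3 radial model applied to point coordinates.
import numpy as np

def undistort_points(pts_px: np.ndarray, K: np.ndarray,
                     k1: float, k2: float, k3: float) -> np.ndarray:
    """pts_px: (N,2) distorted pixel coords -> (N,2) corrected pixel coords."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # normalised sensor-plane coordinates of the distorted points
    x_d = (pts_px[:, 0] - cx) / fx
    y_d = (pts_px[:, 1] - cy) / fy
    r2 = x_d**2 + y_d**2                      # squared radius from the centre
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_u, y_u = x_d * scale, y_d * scale       # the formula from step B3
    return np.stack([x_u * fx + cx, y_u * fy + cy], axis=1)

K = np.array([[400.0, 0, 640.0], [0, 400.0, 360.0], [0, 0, 1]])  # placeholder
pts = np.array([[100.0, 80.0], [1200.0, 700.0]])
print(undistort_points(pts, K, k1=-0.05, k2=0.01, k3=0.0))
```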
B4, using the camera calibration parameters from step B3, apply a top-view transformation to the distortion-corrected image from step B3 to obtain a bird's-eye view, then identify the parking space line in the bird's-eye view;
before identifying the parking space line, the distortion-corrected image is converted into the bird's-eye view, and the conversion process is as follows:
B4.1, first back-project the points on the distortion-corrected image into camera coordinates, with the back-projection formula:
P_c = K^-1 · p
where p is a homogeneous coordinate point in the pixel coordinate system, P_c is the corresponding point in the camera coordinate system, and K is the intrinsic matrix K from step B3;
B4.2, then rotate the camera-coordinate points obtained in step B4.1 with the camera extrinsic R calibrated in step B3, using the rotation transformation formula:
P_c' = R · P_c
where P_c is a point in the camera coordinate system, P_c' is that point after rotation, and R is the extrinsic matrix R from step B3;
B4.3, project the points from step B4.2 into the image coordinate system, with the projection formula:
p' = K · P_c'
where P_c' is a rotated point in the camera coordinate system, p' is a homogeneous coordinate point in the pixel coordinate system, and K is the intrinsic matrix K from step B3;
B4.4, combining steps B4.1, B4.2 and B4.3, the top-view transformation formula is obtained as:
p' = K · R · K^-1 · p
where p is a homogeneous coordinate point on the distortion-corrected image, p' is the corresponding homogeneous coordinate point on the top-view image, K is the intrinsic matrix K from step B3, and R is the extrinsic matrix R from step B3.
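The combined formula above is a single homography, so the whole transform can be sketched in a few lines, assuming OpenCV and NumPy; the K and R values below are illustrative placeholders, not the patent's calibration.

```python
# Sketch of B4.4: the bird's-eye view via the homography H = K R K^-1.
import cv2
import numpy as np

def birds_eye(img: np.ndarray, K: np.ndarray, R: np.ndarray) -> np.ndarray:
    H = K @ R @ np.linalg.inv(K)     # p' = K R K^-1 p for homogeneous pixels
    return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

K = np.array([[400.0, 0, 640.0], [0, 400.0, 360.0], [0, 0, 1]])
# Example extrinsic: pitch the view toward the ground plane.
pitch = np.deg2rad(60.0)
R = np.array([[1, 0, 0],
              [0, np.cos(pitch), -np.sin(pitch)],
              [0, np.sin(pitch),  np.cos(pitch)]])
undistorted = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in image
top_view = birds_eye(undistorted, K, R)
```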
After the bird's-eye view is obtained, parking space line identification can be carried out, referring to FIG. 5, with the following specific steps (a geometry sketch follows the list):
01. collect parking-space corner point samples and train with yolov3;
02. detect the corner points in the top view with the trained yolov3 model, and connect a pair of corner points (bpt1, bpt2) with a straight line as the entrance of a parking space;
03. taking a parking space on the right as an example, rotate the vector from bpt1 to bpt2 anticlockwise around bpt1 and estimate the corner point bpt3 from the parking space length; estimate corner point bpt4 in the same way;
04. connect bpt1, bpt2, bpt3 and bpt4 to obtain the parking space line;
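As referenced above, the corner geometry of steps 02-04 can be illustrated as follows; the exact 90-degree anticlockwise rotation, the corner assignment and the space length are assumptions for a perpendicular space on the right, not values fixed by the patent.

```python
# Geometry sketch: from one detected entrance pair (bpt1, bpt2), rotate the
# entrance vector and scale it to the space length to estimate far corners.
import numpy as np

def estimate_far_corners(bpt1: np.ndarray, bpt2: np.ndarray,
                         space_len: float):
    v = bpt2 - bpt1
    rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])   # anticlockwise 90 degrees
    depth = rot90 @ (v / np.linalg.norm(v)) * space_len
    bpt4 = bpt1 + depth        # far corner adjacent to bpt1
    bpt3 = bpt2 + depth        # far corner adjacent to bpt2
    return bpt3, bpt4

bpt1, bpt2 = np.array([2.0, 0.0]), np.array([4.5, 0.0])
bpt3, bpt4 = estimate_far_corners(bpt1, bpt2, space_len=5.3)
print(bpt3, bpt4)  # connecting bpt1-bpt2-bpt3-bpt4 outlines the space
```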
B5, using the consecutive distortion-corrected image frames from step B3, begin building a local map with SLAM and extract free parking spaces from it; see FIG. 6.
Local mapping uses SLAM based on VIO (visual-inertial odometry): monocular-camera SLAM suffers from scale ambiguity and cannot estimate scale information, while VIO compensates for this shortcoming with IMU sensor information and can estimate scale accurately. The specific local mapping process is as follows:
B5.1, extract Harris corner points and perform visual tracking;
B5.2, obtain initial values in a loosely coupled manner, match them with the feature points from step B5.1 and triangulate; solve the poses of all frames in the sliding window and the inverse depths of the landmark points, align with the IMU pre-integration, and recover the alignment scale s, gravity g, IMU velocity v, gyroscope bias bg, etc.;
B5.3, construct the constraint equations of the IMU constraints and visual constraints, and perform back-end nonlinear optimisation with tight coupling to obtain an optimal local map;
B5.4, search the local map for the optimal free parking space.
C. extract the final free parking space position through the multi-sensor fusion algorithm, with the following steps:
C1, project the two entrance point coordinates bpt1, bpt2 and the two tail point coordinates bpt3, bpt4 from step B4 onto the local map from step B5, giving the four points bspt1, bspt2, bspt3 and bspt4;
C2, in the local map from step B5, use the entrance point projections bspt1 and bspt2 from step C1 to estimate the two tail point coordinates spt3 and spt4 from the parking space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
where length denotes the parking space length applied along the space direction, thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5 (a projection sketch follows);
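As referenced above, here is a minimal sketch of the C1 projection and the C2 tail-point estimate. It assumes a 2D SLAM pose (rotation R_wb plus translation t_wb) and reads "length" as a vector of parking-space length pointing into the space; all numeric values are illustrative.

```python
# Sketch of C1/C2: project bird's-eye space points into the local map frame
# with the current pose, then estimate tail points from the space length.
import numpy as np

def to_map(pts_veh: np.ndarray, R_wb: np.ndarray, t_wb: np.ndarray) -> np.ndarray:
    """pts_veh: (N,2) points in the vehicle frame -> (N,2) map-frame points."""
    return pts_veh @ R_wb.T + t_wb

yaw = np.deg2rad(15.0)
R_wb = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
t_wb = np.array([10.0, 5.0])

corners_veh = np.array([[2.0, 1.5], [4.5, 1.5], [4.5, 6.8], [2.0, 6.8]])
bspt1, bspt2, bspt3, bspt4 = to_map(corners_veh, R_wb, t_wb)  # step C1

# Step C2: spt3 = bspt1 + length, spt4 = bspt2 + length, with "length"
# interpreted as a parking-space-length vector into the space (assumption).
into_space = np.array([[0.0, -1.0], [1.0, 0.0]]) @ (bspt2 - bspt1)
length_vec = into_space / np.linalg.norm(into_space) * 5.3
spt3, spt4 = bspt1 + length_vec, bspt2 + length_vec
```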
C3, when learning the fusion parameters of the free parking space coordinates from step B5 and from step C2 with the MLP (multi-layer perceptron) network, label a large amount of real free parking space coordinate data in advance, denoted GT, and then train with the sum-of-squared-errors loss function E:
E = Σ (GT − y)^2
where w is the weight of the perceptron, i.e. the fusion parameter, a is the free parking space coordinates from step B5, b is the free parking space coordinates from step C2, and y is their fused result;
C4, fuse the two tail coordinate points of the free parking space from step B5 with those from step C2 using the fusion parameter w from step C3, with the formula:
y = w·a + (1 − w)·b
where a is the projections bspt3, bspt4 of the tail points bpt3, bpt4 from step B4 onto the local map of step B5, b is the tail points spt3, spt4 estimated in step C2, and the fused free parking space y gives the two tail points fpt3 and fpt4;
C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4, as shown in FIG. 7 (a training-and-fusion sketch follows).
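As referenced above, the following sketch of steps C3-C5 learns a single scalar fusion weight w by minimising E = Σ(GT − y)^2 with y = w·a + (1 − w)·b, then fuses the tail points. A scalar weight and synthetic training data stand in for the patent's MLP, which may be richer; treat both as assumptions.

```python
# Sketch of C3-C5: fit the fusion weight w by gradient descent, then fuse.
import numpy as np

def train_fusion_weight(a: np.ndarray, b: np.ndarray, gt: np.ndarray,
                        lr: float = 0.01, iters: int = 500) -> float:
    """a, b, gt: (N,2) tail-point coordinates (projected, estimated, labels)."""
    w = 0.5
    for _ in range(iters):
        y = w * a + (1.0 - w) * b
        grad = np.sum(2.0 * (y - gt) * (a - b))   # dE/dw
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
gt = rng.uniform(0, 10, size=(100, 2))           # labelled GT coordinates
a = gt + rng.normal(0, 0.10, gt.shape)           # projected points bspt3/bspt4
b = gt + rng.normal(0, 0.30, gt.shape)           # estimated points spt3/spt4
w = train_fusion_weight(a, b, gt)

# C4/C5: fuse the tail points; the entrance points stay bspt1, bspt2.
bspt34 = np.array([[3.0, 8.0], [5.5, 8.2]])
spt34 = np.array([[3.2, 7.9], [5.4, 8.4]])
fpt3, fpt4 = w * bspt34 + (1.0 - w) * spt34
print(w, fpt3, fpt4)
```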
D. display the finally identified free parking space rectangles on the central control screen, showing at most 6 parking spaces on the two sides of the vehicle body for the user to choose from.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (8)

1. A multi-sensor fusion free parking space detection method, characterized in that it comprises the following steps:
A. first, detecting distance information with an ultrasonic radar and extracting the initial position of a free parking space;
B. acquiring fisheye camera images, and carrying out SLAM mapping and parking space line identification;
C. extracting the final free parking space position through a multi-sensor fusion algorithm;
D. displaying the identified free parking spaces on a central control screen.
2. The multi-sensor fusion free parking space detection method according to claim 1, characterized in that step A comprises the following steps:
A1, driving the vehicle forward parallel to the parking spaces and detecting obstacle distances with a side ultrasonic radar;
A2, dividing the obstacle distances from step A1 into 75-85 equal-width intervals d_i and storing them in an array;
A3, counting the consecutive intervals in the array from step A2 whose obstacle distance value is greater than or equal to 5 m, and recording the maximum run length maxLen;
A4, judging from the maximum run length maxLen of step A3 whether a free parking space exists: if maxLen corresponds to more than 2 m, the run of intervals d_i belonging to maxLen is the initial position of the free parking space;
step B comprises the following steps:
B1, if acquisition of the initial free parking space position in step A fails, not proceeding, and continuing to execute step A;
B2, if the initial free parking space position was successfully acquired in step A, executing the following step B3;
B3, acquiring a fisheye camera image, and performing camera calibration and distortion correction to obtain a distortion-corrected image;
B4, using the camera calibration parameters from step B3, first applying a top-view transformation to the distortion-corrected image from step B3 to obtain a bird's-eye view, then identifying the parking space line around the initial free parking space position from step A4 in the bird's-eye view to obtain the free parking space;
B5, using the consecutive distortion-corrected image frames from step B3, building a local map with SLAM to obtain a local map around the vehicle body;
step C comprises the following steps:
C1, projecting the free parking space coordinate points extracted in step B4 onto the local map from step B5 to obtain the projected point coordinates;
C2, calculating the free parking space of the local map from step B5 using the projected point coordinates obtained in step C1;
C3, learning the fusion parameters of the free parking space coordinates from step B5 and the free parking space coordinates from step C2 with an MLP (multi-layer perceptron) network;
C4, fusing the free parking space coordinates from step B5 and from step C2 using the fusion parameters from step C3;
C5, obtaining the fused free parking space coordinates.
3. The multi-sensor fusion free parking space detection method according to claim 2, characterized in that: in step B3 the fisheye image resolution is 1280 × 720; during distortion correction, the camera is first calibrated to obtain the intrinsic matrix K, the radial distortion coefficients k1, k2, k3 and the extrinsic matrix R, and the fisheye image is then corrected with the radial distortion formula:
x_u = x_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
y_u = y_d (1 + k1·r^2 + k2·r^4 + k3·r^6)
where (x_d, y_d) is the original position of a distorted point on the camera sensor, (x_u, y_u) is its new position after distortion correction, and r is its radius from the camera sensor's centre point.
4. The multi-sensor fusion free parking space detection method according to claim 2, characterized in that: in step B4, after the bird's-eye view is obtained, the parking space line can be identified as follows:
01. collecting parking-space corner point samples and training with yolov3;
02. detecting the corner points in the top view with the trained yolov3 model, and connecting a pair of corner points (bpt1, bpt2) with a straight line as the entrance of a parking space;
03. taking a parking space on the right as an example, rotating the vector from bpt1 to bpt2 anticlockwise around bpt1 and estimating the corner point bpt3 from the parking space length, with corner point bpt4 estimated in the same way;
04. connecting bpt1, bpt2, bpt3 and bpt4 to obtain the parking space line.
5. The multi-sensor fusion free parking space detection method according to claim 4, characterized in that in step B5 the specific steps of local mapping are as follows:
B5.1, extracting Harris corner points while the parking space line is being obtained, and performing visual tracking;
B5.2, obtaining initial values in a loosely coupled manner, matching them with the feature points from step B5.1 and triangulating; solving the poses of all frames in the sliding window and the inverse depths of the landmark points, aligning with the IMU pre-integration, and recovering the alignment scale s, gravity g, IMU velocity v and gyroscope bias bg;
B5.3, constructing the constraint equations of the IMU constraints and visual constraints, and performing back-end nonlinear optimisation with tight coupling to obtain an optimal local map;
B5.4, searching the local map for the optimal free parking space.
6. The multi-sensor fusion free parking space detection method according to claim 5, characterized in that: in step C1, the two entrance point coordinates bpt1, bpt2 and the two tail point coordinates bpt3, bpt4 from step B4 are each projected onto the local map from step B5, giving the four points bspt1, bspt2, bspt3 and bspt4;
in step C2, in the local map from step B5, the entrance point projections bspt1 and bspt2 from step C1 are used to estimate the two tail point coordinates spt3 and spt4 from the parking space length, with the calculation formula:
spt3 = bspt1 + length;
spt4 = bspt2 + length;
where length denotes the parking space length applied along the space direction, thereby deriving the free parking space (bspt1, bspt2, spt3, spt4) in the local map of step B5;
in step C3, when the MLP (multi-layer perceptron) network learns the fusion parameters of the free parking space coordinates from step B5 and from step C2, a large amount of real free parking space coordinate data, denoted GT, is labelled in advance, and training then uses the sum-of-squared-errors loss function E:
E = Σ (GT − y)^2
where w is the weight of the perceptron, i.e. the fusion parameter, a is the free parking space coordinates from step B5, b is the free parking space coordinates from step C2, and y is their fused result;
in step C4, the fusion parameter w from step C3 fuses the two tail coordinate points of the free parking space from step B5 with those from step C2 using the formula:
y = w·a + (1 − w)·b
where a is the projections bspt3, bspt4 of the tail points bpt3, bpt4 from step B4 onto the local map of step B5, b is the tail points spt3, spt4 estimated in step C2, and the fused free parking space y gives the two tail points fpt3 and fpt4;
in step C5, according to the fusion result of step C4, the fused free parking space coordinates are bspt1, bspt2, fpt3 and fpt4.
7. The multi-sensor fusion free parking space detection method according to claim 2, characterized in that: in step A2, each interval d_i has a length of 0.05-0.15 m.
8. The multi-sensor fusion free parking space detection method according to claim 1, characterized in that: in step D, the finally identified free parking space rectangles are displayed on a central control screen, with at most 6 parking spaces shown on the two sides of the vehicle body.
CN202210401936.XA 2022-04-18 2022-04-18 Multi-sensor fusion idle parking space detection method Active CN114511841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210401936.XA CN114511841B (en) 2022-04-18 2022-04-18 Multi-sensor fusion idle parking space detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210401936.XA CN114511841B (en) 2022-04-18 2022-04-18 Multi-sensor fusion idle parking space detection method

Publications (2)

Publication Number Publication Date
CN114511841A true CN114511841A (en) 2022-05-17
CN114511841B CN114511841B (en) 2022-07-05

Family

ID=81554914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210401936.XA Active CN114511841B (en) 2022-04-18 2022-04-18 Multi-sensor fusion idle parking space detection method

Country Status (1)

Country Link
CN (1) CN114511841B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010044219A1 (en) * 2010-11-22 2012-05-24 Robert Bosch Gmbh Method for detecting the environment of a vehicle
US20160021288A1 (en) * 2014-07-18 2016-01-21 Seeways Technology Inc. Vehicle-reversing display system capable of automatically switching multiple field-of-view modes and vehicle-reversing image capture device
CN110775052A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Automatic parking method based on fusion of vision and ultrasonic perception
CN111845723A (en) * 2020-08-05 2020-10-30 北京四维智联科技有限公司 Full-automatic parking method and system
CN111942372A (en) * 2020-07-27 2020-11-17 广州汽车集团股份有限公司 Automatic parking method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010044219A1 (en) * 2010-11-22 2012-05-24 Robert Bosch Gmbh Method for detecting the environment of a vehicle
US20160021288A1 (en) * 2014-07-18 2016-01-21 Seeways Technology Inc. Vehicle-reversing display system capable of automatically switching multiple field-of-view modes and vehicle-reversing image capture device
CN110775052A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Automatic parking method based on fusion of vision and ultrasonic perception
CN111942372A (en) * 2020-07-27 2020-11-17 广州汽车集团股份有限公司 Automatic parking method and system
CN111845723A (en) * 2020-08-05 2020-10-30 北京四维智联科技有限公司 Full-automatic parking method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Haobin et al.: "Method for high-precision parking space identification by an automatic parking system based on multi-sensor data fusion", Journal of Chongqing University of Technology (Natural Science) *

Also Published As

Publication number Publication date
CN114511841B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN104848851B (en) Intelligent Mobile Robot and its method based on Fusion composition
CN103411553B (en) The quick calibrating method of multi-linear structured light vision sensors
CN106997688B (en) Parking lot parking space detection method based on multi-sensor information fusion
CN112417926B (en) Parking space identification method and device, computer equipment and readable storage medium
CN111169468B (en) Automatic parking system and method
CN111912416B (en) Method, device and equipment for positioning equipment
CN112667837A (en) Automatic image data labeling method and device
CN113903011B (en) Semantic map construction and positioning method suitable for indoor parking lot
CN109471096B (en) Multi-sensor target matching method and device and automobile
CN105511462B (en) A kind of AGV air navigation aids of view-based access control model
CN106651953A (en) Vehicle position and gesture estimation method based on traffic sign
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN111275960A (en) Traffic road condition analysis method, system and camera
CN113220818B (en) Automatic mapping and high-precision positioning method for parking lot
CN114755662A (en) Calibration method and device for laser radar and GPS with road-vehicle fusion perception
CN107607091A (en) A kind of method for measuring unmanned plane during flying flight path
CN110033492B (en) Camera calibration method and terminal
WO2020181426A1 (en) Lane line detection method and device, mobile platform, and storage medium
CN114964236A (en) Mapping and vehicle positioning system and method for underground parking lot environment
CN115761007A (en) Real-time binocular camera self-calibration method
CN111950440A (en) Method, device and storage medium for identifying and positioning door
CN113947714B (en) Multi-mode collaborative optimization method and system for video monitoring and remote sensing
CN110956067A (en) Construction method and device for eyelid curve of human eye
CN105590087B (en) A kind of roads recognition method and device
CN114511841B (en) Multi-sensor fusion idle parking space detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231220

Address after: No. 6, Yutong Road, Guancheng Hui District, Zhengzhou, Henan 450061

Patentee after: Yutong Bus Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: SHENZHEN YUTONG ZHILIAN TECHNOLOGY Co.,Ltd.
