CN116524457A - Parking space identification method, system, device, electronic equipment and readable storage medium - Google Patents


Info

Publication number: CN116524457A (granted as CN116524457B)
Application number: CN202310235372.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 朱松, 陶维
Applicant and current assignee: Imotion Automotive Technology Suzhou Co Ltd
Legal status: Active (application granted)

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00: Scenes; scene-specific elements
                    • G06V 20/50: Context or environment of the image
                        • G06V 20/56: Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
                            • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
                                • G06V 20/586: of parking space
                • G06V 10/00: Arrangements for image or video recognition or understanding
                    • G06V 10/20: Image preprocessing
                        • G06V 10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
                    • G06V 10/70: Arrangements using pattern recognition or machine learning
                        • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
                            • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
                        • G06V 10/77: Processing image or video features in feature spaces; data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
                            • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T 10/00: Road transport of goods or passengers
                    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
                        • Y02T 10/40: Engine management systems


Abstract

The application discloses a parking space identification method, system, device, electronic device, and computer-readable storage medium. In the disclosed scheme, parking-space line segments are first identified from a vehicle environment image. For each line segment, position prediction is performed based on information such as its initial position, current position, and change route, yielding a predicted position. The accuracy of the identification is then judged by matching the current position against the predicted position, so that only line segments that pass the match are retained. For each retained line segment, the predicted position and the current position are fused into a more accurate line segment, from which the parking space is generated. Since the parking spaces are generated automatically from screened and fused line segments, the accuracy of the identification result is effectively ensured, and the vehicle can in turn be parked in the space in a more accurate posture.

Description

Parking space identification method, system, device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular to a parking space recognition method, as well as a parking space recognition system and device, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of technology, automatic parking for vehicles is maturing, and in an automatic parking system, parking space identification is a key step. Current automatic parking products offer rich and varied parking functions through differing schemes, but most share the same problem: because parking space identification is inaccurate, the finally parked vehicle sits in the space with its posture skewed toward one side.
Therefore, how to effectively improve the accuracy of the parking space recognition result, and thereby ensure that the vehicle is parked in the space in a more accurate posture, is a problem to be solved by those skilled in the art.
Disclosure of Invention
The purpose of the application is to provide a parking space recognition method that can effectively improve the accuracy of the parking space recognition result and thereby ensure that the vehicle is parked in the space in a more accurate posture. Further objects of the application are to provide a parking space recognition device, a system, an electronic device, and a computer-readable storage medium, all of which share the above benefit.
In a first aspect, the present application provides a parking space identification method, including:
acquiring a vehicle environment image and identifying first parking-space line segments in the vehicle environment image;
for each first parking-space line segment, determining an initial position of the segment at the identification time and a current position of the segment at the current time, so as to obtain a change route from the identification time to the current time;
performing position prediction according to the initial position and the change route to obtain a predicted position at the current time;
retaining, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments;
for each second parking-space line segment, fusing the current position and the predicted position of the segment to obtain a fused position;
and generating a parking space according to the fused position of each second parking-space line segment.
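Taken together, the steps of the first aspect can be sketched as follows. This outline is illustrative only: the endpoint representation of a position, the matching tolerance, and the fusion weight are assumptions, since the application does not fix them here.

```python
import math

def predict(initial, route):
    """Apply the change route (simplified here to a translation dx, dy,
    an assumption) to the segment's initial endpoints."""
    dx, dy = route
    return [(x + dx, y + dy) for x, y in initial]

def matches(current, predicted, tol=0.3):
    """Current and predicted positions match when every endpoint pair is
    within tol metres (tolerance value assumed)."""
    return all(math.dist(c, p) <= tol for c, p in zip(current, predicted))

def fuse(current, predicted, w_cur=0.7):
    """Weighted average of matched endpoints; the current position gets the
    larger weight, and the two weights sum to 1."""
    w_pred = 1.0 - w_cur
    return [(w_cur * cx + w_pred * px, w_cur * cy + w_pred * py)
            for (cx, cy), (px, py) in zip(current, predicted)]

def identify(segments):
    """segments: dicts with 'initial', 'current', 'route'. Returns the fused
    positions of the segments that pass matching (the second segments)."""
    fused = []
    for seg in segments:
        pred = predict(seg["initial"], seg["route"])
        if matches(seg["current"], pred):
            fused.append(fuse(seg["current"], pred))
    return fused
```

Segments that fail the match are simply dropped, which is the screening described in the fourth step.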
Optionally, fusing, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position includes:
determining first line-segment parameters of the second parking-space line segment at the current position and second line-segment parameters at the predicted position, respectively; the line-segment parameters include an angle between the second parking-space line segment and the vehicle rear-axle-center coordinate system, a distance between the segment and the vehicle rear-axle center point, and the length of the segment;
smoothing the first line-segment parameters and the second line-segment parameters according to preset weights to obtain fused line-segment parameters;
and determining the fused position of the second parking-space line segment according to the fused line-segment parameters.
Optionally, before retaining, among all the first parking-space line segments, those whose current position matches the predicted position to obtain second parking-space line segments, the method further includes:
acquiring the number of matching passes of the first parking-space line segment;
judging whether the number of matching passes reaches a preset number;
and if so, executing the step of retaining, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments.
Optionally, before determining, for each first parking-space line segment, the initial position of the segment at the identification time and the current position at the current time, the method further includes:
screening all the first parking-space line segments according to preset screening indexes to obtain screened first parking-space line segments; the preset screening indexes include one or more of segment length, segment definition, segment distance, and region of interest.
Optionally, after fusing, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position, the method further includes:
adjusting all the second parking-space line segments according to preset adjustment rules to obtain adjusted second parking-space line segments; the preset adjustment rules include one or more of segment deletion, segment merging, and segment extension.
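As an illustration of the "segment merging" rule above, two nearly collinear fused segments whose facing endpoints almost touch can be replaced by one spanning segment. The angle and gap tolerances below are assumptions; the application does not specify how merging is decided.

```python
import math

def merge_collinear(seg_a, seg_b, angle_tol=0.05, gap_tol=0.3):
    """Merge two nearly parallel, nearly touching segments into one, or
    return None when this preset rule does not apply."""
    def angle(seg):
        (x0, y0), (x1, y1) = seg
        return math.atan2(y1 - y0, x1 - x0)

    if abs(angle(seg_a) - angle(seg_b)) > angle_tol:
        return None
    # Gap between the segments: closest pair of endpoints across them.
    if min(math.dist(a, b) for a in seg_a for b in seg_b) > gap_tol:
        return None
    # The merged segment spans the two endpoints that lie farthest apart.
    pts = list(seg_a) + list(seg_b)
    return max(((a, b) for a in pts for b in pts),
               key=lambda ab: math.dist(*ab))
```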
Optionally, after generating the parking space according to the fused position of each second parking-space line segment, the method further includes:
performing obstacle identification on each parking space to obtain parkable spaces free of obstacles; the obstacle identification includes ultrasonic identification and/or visual identification of obstacles;
and outputting each parkable parking space.
Optionally, after outputting each parkable parking space, the method further includes:
determining a target parking space according to a selection instruction;
determining a parking route according to the current pose of the vehicle and the target parking space;
and controlling the vehicle to drive into the target parking space according to the parking route.
Optionally, controlling the vehicle to drive into the target parking space according to the parking route includes:
acquiring distance information between the target parking space and the vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring, in real time, ultrasonic sensing signals about obstacles on the path while the vehicle is moving;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signals to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time corrected parking space according to the real-time corrected route.
In a second aspect, the application further discloses a parking space recognition system, including:
a camera device for acquiring a vehicle environment image;
and a controller for executing the steps of any of the above parking space identification methods according to the vehicle environment image.
Optionally, the camera devices are super-wide fisheye lenses arranged at the vehicle head, the vehicle tail, and the left and right rear-view mirror positions.
Optionally, the parking space recognition system further includes:
ultrasonic probes for acquiring ultrasonic detection signals about obstacles;
the ultrasonic probes are arranged at the vehicle head, the vehicle tail, and the left and right sides of the vehicle.
In a third aspect, the application further discloses a parking space recognition device, including:
an identification module for acquiring a vehicle environment image and identifying first parking-space line segments in the vehicle environment image;
a determining module for determining, for each first parking-space line segment, the initial position of the segment at the identification time and the current position at the current time, so as to obtain a change route from the identification time to the current time;
a prediction module for performing position prediction according to the initial position and the change route to obtain a predicted position at the current time;
a retaining module for retaining, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments;
a fusion module for fusing, for each second parking-space line segment, the current position and the predicted position of the segment to obtain a fused position;
and a generating module for generating a parking space according to the fused position of each second parking-space line segment.
In a fourth aspect, the present application also discloses an electronic device, including:
a memory for storing a computer program;
and a processor for implementing the steps of any of the above parking space identification methods when executing the computer program.
In a fifth aspect, the present application also discloses a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of any of the parking space identification methods described above.
The application provides a parking space identification method including: acquiring a vehicle environment image and identifying first parking-space line segments in the vehicle environment image; for each first parking-space line segment, determining an initial position of the segment at the identification time and a current position at the current time, so as to obtain a change route from the identification time to the current time; performing position prediction according to the initial position and the change route to obtain a predicted position at the current time; retaining, among all the first parking-space line segments, those whose current position matches the predicted position, to obtain second parking-space line segments; for each second parking-space line segment, fusing the current position and the predicted position of the segment to obtain a fused position; and generating a parking space according to the fused position of each second parking-space line segment.
In this scheme, after the parking-space line segments are identified from the vehicle environment image, position prediction is performed based on each segment's initial position, current position, change route, and similar information to obtain a predicted position. The accuracy of the identification is then judged by matching the current position against the predicted position, so that only segments that pass the match are retained. For each retained segment, the predicted position is fused with the current position to obtain a more accurate segment, from which the parking space is generated. Since the parking spaces are generated automatically from screened and fused line segments, the accuracy of the identification result is effectively ensured, and the vehicle can in turn be parked in the space in a more accurate posture.
The parking space recognition device, system, electronic device, and computer-readable storage medium provided by the application have the same technical effects, which are not repeated here.
Drawings
To illustrate the prior art and the technical solutions in the embodiments of the present application more clearly, the drawings used in their description are briefly introduced below. The following drawings relate only to some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without inventive effort, and such derived drawings also fall within the protection scope of the present application.
Fig. 1 is a schematic flow chart of a parking space recognition method provided by the application;
fig. 2 is a schematic structural diagram of a parking space recognition device provided by the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The core of the application is to provide a parking space recognition method that can effectively improve the accuracy of the parking space recognition result and thereby ensure that the vehicle is parked in the space in a more accurate posture. Further cores of the application are a parking space recognition system, a device, an electronic device, and a computer-readable storage medium, all of which share the above benefit.
To describe the technical solutions in the embodiments of the present application clearly and completely, they are described below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by those of ordinary skill in the art without inventive effort fall within the protection scope of the present application.
The embodiment of the application provides a parking space identification method.
Referring to fig. 1, fig. 1 is a flow chart of a parking space recognition method provided in the present application, where the parking space recognition method may include the following steps S101 to S106.
S101: a vehicle environment image is acquired and a first vehicle-location line segment in the vehicle environment image is identified.
The method aims at achieving acquisition of a vehicle environment image and identification processing of parking space line segments in the vehicle environment image, and obtaining a first vehicle space line segment in the vehicle environment image. The vehicle environment image can be acquired by the image pickup device arranged on the vehicle to be parked, of course, the use type of the image pickup device and the installation position of the image pickup device on the vehicle do not affect the implementation of the technical scheme, and a plurality of 195-degree super-fisheye lenses can be used for acquiring the most comprehensive, clear and accurate vehicle environment image and are respectively arranged at the head, the tail and the left and right mirror positions of the vehicle so as to acquire the vehicle environment image.
Further, after the vehicle environment images are acquired, line segment identification (the parking space is formed by combining line segments) can be performed on each vehicle environment image, and each first vehicle line segment is determined. In one possible implementation manner, the first vehicle line segment recognition process may be implemented by using a line segment recognition model based on a deep learning network, specifically, a deep learning network model may be created in advance, then a large amount of data is collected for labeling, and training is performed on the model, where a vehicle line is labeled, and when training is performed, a plurality of points in the line segment are extracted as standards, so that a vehicle line segment recognition model is obtained through training, and when using the vehicle line segment recognition model to perform line segment recognition, a group of points similar to the labeled line segment may be first recognized, and then line segments, that is, the first vehicle line segment, are clustered.
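The "cluster points into a line segment" step can be sketched as a principal-axis fit over one cluster of detected marking points: fit the dominant direction of the cluster and take the extreme projections as the endpoints. This is only an illustration; the application does not specify the clustering or fitting method.

```python
import math

def points_to_segment(points):
    """Fit the principal axis of a cluster of detected marking points and
    return the two extreme projections along it as segment endpoints."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    # Closed-form principal-axis angle of a 2D point cloud.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    ux, uy = math.cos(theta), math.sin(theta)
    # Scalar position of each point along the axis.
    t = [(x - cx) * ux + (y - cy) * uy for x, y in points]
    return ((cx + min(t) * ux, cy + min(t) * uy),
            (cx + max(t) * ux, cy + max(t) * uy))
```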
S102: for each first vehicle position line segment, determining the initial position of the first vehicle position line segment at the identification time and the current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time.
The method aims at acquiring various relevant information of the first vehicle position line segment, and mainly comprises an initial position of the first vehicle position line segment at an initial identification time, a current position at a current time and a change route from the initial identification time to the current time. Of course, the number of first vehicle line segments identified in the vehicle environment image is not unique.
It will be appreciated that the vehicle may be in a driving state all the time during parking, and based on this, for each first vehicle position line segment, the initial position at the identification time refers to the position information of the first vehicle position line segment compared with the position of the vehicle itself at the identification time, the current position at the current time refers to the position information of the first vehicle position line segment compared with the position of the vehicle itself at the current time, and the change route is also the change route compared with the vehicle itself. The time length from the identification time to the current time is a preset calculation period.
It should be noted that, the first vehicle position line segment obtained based on the vehicle environment image is a vehicle position line segment under the image coordinate system, in order to realize the vehicle position identification and automatic parking under the world coordinate system, after the first vehicle position line segment is obtained, the first vehicle position line segment may be first subjected to coordinate system conversion, and then converted from the image coordinate system to the world coordinate system, and then the above various information of the first vehicle position line segment under the world coordinate system is obtained. The coordinate system conversion can be realized based on a matrix conversion relation between an image coordinate system and a world coordinate system, and specifically, internal and external parameters of the image pickup device can be acquired, wherein before the image pickup device is installed on a vehicle, the image pickup device can be placed in a calibration box for internal parameter calibration, and internal parameter data comprise, but are not limited to, deviation of an optical center, distortion parameters and the like; after the image pickup apparatus is mounted on the vehicle, the image pickup apparatus is calibrated by the factory production line, and the external parameter, that is, the posture of the image pickup apparatus is compared with that of the vehicle, and then the matrix conversion relation is established based on the internal and external parameters.
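For the planar parking-lot ground, the image-to-world matrix transformation reduces to a 3x3 homography. A minimal sketch of applying such a matrix follows; the matrix values in the test are made up, and in practice the matrix would be derived offline from the calibrated intrinsic and extrinsic parameters.

```python
def image_to_world(h, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates through a 3x3
    row-major homography h, with the usual perspective division."""
    x = h[0][0] * u + h[0][1] * v + h[0][2]
    y = h[1][0] * u + h[1][1] * v + h[1][2]
    w = h[2][0] * u + h[2][1] * v + h[2][2]
    return x / w, y / w
```

Fisheye images would additionally need undistortion with the calibrated distortion parameters before this planar mapping applies.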
S103: and carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current moment.
This step aims at achieving a position prediction, i.e. a predicted position of the first vehicle-position line segment at the current moment is predicted based on its initial position and the course of the change. The predicted position is used for matching with the current position to determine the accuracy of the first vehicle-position line segment recognition result, when the matching is passed, namely, the matching degree of the predicted position and the current position reaches a preset threshold value, the first vehicle-position line segment recognition is determined to be accurate, and when the matching is not passed, namely, the matching degree of the predicted position and the current position does not reach the preset threshold value, the first vehicle-position line segment recognition is determined to be inaccurate. The preset threshold value is set by a technician according to actual requirements, which is not limited in the application.
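One possible matching degree (purely illustrative, since the application does not fix the measure) combines endpoint displacement with the ratio of the two segment lengths:

```python
import math

def match_degree(current, predicted):
    """Similarity in [0, 1] between the current and predicted positions of a
    segment, each given as a pair of endpoints in metres."""
    displacement = max(math.dist(c, p) for c, p in zip(current, predicted))
    len_cur, len_pred = math.dist(*current), math.dist(*predicted)
    return (max(0.0, 1.0 - displacement)
            * min(len_cur, len_pred) / max(len_cur, len_pred))

def match_passes(current, predicted, threshold=0.8):
    """Pass when the degree reaches the preset threshold (value assumed)."""
    return match_degree(current, predicted) >= threshold
```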
S104: and reserving the first vehicle position line segments with the current positions matched with the predicted positions in all the first vehicle position line segments to obtain second vehicle position line segments.
The step aims at realizing screening of the first vehicle position line segments, reserving the first vehicle position line segments with high accuracy, deleting the first vehicle position line segments with low accuracy, and finally reserving the first vehicle position line segments as the second vehicle position line segments. The first vehicle line segment with high accuracy, namely the current position, is matched with the predicted position, and the first vehicle line segment with low accuracy, namely the current position, is not matched with the predicted position.
S105: and fusing the current position and the predicted position of each second vehicle bit line segment to obtain a fused position.
The step aims at realizing fusion processing of the position information so as to obtain a second vehicle bit line segment with more accurate position information. Specifically, for each of the second vehicle bit line segments that are retained, the current position and the predicted position of the second vehicle bit line segment may be fused to obtain a fused position, where the fused position is the position information with higher accuracy.
In one possible implementation, fusing, for each second parking-space line segment, the current position and the predicted position to obtain the fused position may include:
determining first line-segment parameters of the second parking-space line segment at the current position and second line-segment parameters at the predicted position, respectively; the line-segment parameters include an angle between the segment and the vehicle rear-axle-center coordinate system, a distance between the segment and the vehicle rear-axle center point, and the length of the segment;
smoothing the first line-segment parameters and the second line-segment parameters according to preset weights to obtain fused line-segment parameters;
and determining the fused position of the second parking-space line segment according to the fused line-segment parameters.
This embodiment provides an implementation of the position-information fusion, namely smoothing of the line-segment parameters. Specifically, for each second parking-space line segment, the parameter information at its current position and at its predicted position is obtained, i.e., the first line-segment parameters corresponding to the current position and the second line-segment parameters corresponding to the predicted position. The specific content of the parameters is not fixed and may include, but is not limited to, the angle between the segment and the vehicle rear-axle-center coordinate system, the distance between the segment and the rear-axle center point, and the length of the segment. The first and second parameters are then smoothed with preset weights to obtain the fused line-segment parameters. The specific weight values can be set according to the actual situation; for example, because the current position is an actually measured position and therefore more reliable than the predicted position, the weight of the current position can be set larger than that of the predicted position, with the two weights summing to 1, and the fused parameters computed accordingly. Finally, the fused position of the second parking-space line segment is determined from the fused parameters; this is simply the inverse of the earlier step of determining the first and second line-segment parameters from the two positions.
In this way the fused position of the second parking-space line segment is determined, providing a further accuracy guarantee for the parking-space identification result.
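The weighted smoothing described above amounts to a convex combination of the two parameter sets. A minimal sketch, with w_cur=0.6 as an assumed preset weight:

```python
def fuse_parameters(cur, pred, w_cur=0.6):
    """Smooth the first (current-position) and second (predicted-position)
    line-segment parameters: angle to the rear-axle-center coordinate
    system, distance to the rear-axle center point, and segment length.
    The two weights sum to 1, with the current position weighted higher."""
    w_pred = 1.0 - w_cur
    return {key: w_cur * cur[key] + w_pred * pred[key]
            for key in ("angle", "distance", "length")}
```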
S106: and generating a parking space according to the fusion position of each second vehicle line segment.
The step aims at realizing automatic generation of parking spaces. Specifically, after obtaining relatively accurate fusion positions of the second vehicle line segments, the second vehicle line segments can be used for generating parking spaces based on the fusion positions, and of course, the number of the parking spaces may not be unique, so that the parking spaces can be selected, and an automatic parking function is realized.
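How the fused segments are combined into a space is not detailed in the application. As one hypothetical construction, two fused side-line segments can be paired into a quadrilateral by matching their nearer endpoints:

```python
import math

def build_space(side_a, side_b):
    """Form a candidate parking-space quadrilateral from two fused side-line
    segments, returning its four corners in order around the space."""
    a0, a1 = side_a
    b0, b1 = side_b
    # Orient side_b so that corresponding endpoints become adjacent corners.
    if (math.dist(a0, b0) + math.dist(a1, b1)
            > math.dist(a0, b1) + math.dist(a1, b0)):
        b0, b1 = b1, b0
    return [a0, a1, b1, b0]
```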
Therefore, according to the parking space identification method provided by the embodiment of the application, after the parking space line segments are obtained by recognition from the vehicle environment image, position prediction is performed based on information such as the initial position, the current position and the change route of each segment to obtain a predicted position. The accuracy of the recognition is then assessed through the matching result between the current position and the predicted position, so that only the segments that pass the matching are retained; the retained segments are further refined by fusing the predicted position with the current position, and the parking spaces are generated from them. Obviously, because the parking spaces are generated automatically from segments that have been screened and fused in this way, the accuracy of the parking space identification result can be effectively guaranteed, which in turn ensures that the vehicle can be parked into the space in a more accurate posture.
Based on the above embodiments:
in an embodiment of the present application, before reserving, among all the first vehicle line segments, the first vehicle line segments whose current position matches the predicted position to obtain the second vehicle line segments, the method may further include:
acquiring the number of matching passes of the first vehicle line segment;
judging whether the number of matching passes reaches a preset number of times;
if yes, executing the step of reserving, among all the first vehicle line segments, the first vehicle line segments whose current position matches the predicted position, to obtain the second vehicle line segments.
In order to further improve the accuracy of parking space identification, the matching between the current position and the predicted position may be required to succeed multiple times, the first vehicle line segment being reserved only after it has passed the matching several times. Therefore, after each position matching is completed, the number of matching passes of the first vehicle line segment can be counted, and whether that number reaches the preset number of times can be judged, the segment being reserved once the condition is met. Of course, the specific value of the preset number of times does not affect the implementation of the technical scheme; it is set by the technician according to actual requirements, and this application does not limit it.
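The counting logic above can be sketched in a few lines. The threshold of 3 and the id-keyed dictionary interface are assumptions for illustration; the text leaves the preset number of times to the technician.

```python
def passes_matching_threshold(match_counts: dict, segment_id: int,
                              matched: bool, required: int = 3) -> bool:
    """Count how many times a first vehicle line segment has passed the
    current-vs-predicted position matching, and report whether the preset
    number of times (assumed here to be 3) has been reached."""
    if matched:
        match_counts[segment_id] = match_counts.get(segment_id, 0) + 1
    return match_counts.get(segment_id, 0) >= required
```

Only segments for which this returns true are promoted to second vehicle line segments.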
In an embodiment of the present application, before determining, for each first vehicle line segment, the initial position of the first vehicle line segment at the identification time and its current position at the current time so as to obtain the change route from the identification time to the current time, the method may further include:
screening all the first vehicle line segments according to a preset screening index to obtain the screened first vehicle line segments; the preset screening index includes one or more of line segment length, line segment clarity, line segment distance and region of interest.
It can be understood that the first vehicle line segments obtained through deep learning may contain many false detections. Therefore, in order to effectively improve both the efficiency of parking space identification and the accuracy of the identification result, after the first vehicle line segments in the vehicle environment image are obtained, all of them can be screened according to a preset screening index, and those that do not meet the index threshold are removed. The preset screening index may include, but is not limited to, line segment length, line segment clarity, distance between line segments, region of interest, and the like. For example, first vehicle line segments that lie beyond the region of interest, are too short, or have too low a clarity may be rejected first.
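A minimal screening step might look as follows. The segment representation ((x1, y1), (x2, y2), clarity), the rectangular region of interest, and all threshold values are assumptions chosen purely for illustration.

```python
import math

def screen_segments(segments, min_length=0.3, min_clarity=0.5,
                    roi=(-10.0, 10.0, -10.0, 10.0)):
    """Remove first vehicle line segments that fail the preset screening
    index: endpoints outside the region of interest, too short a length,
    or too low a clarity score."""
    x_min, x_max, y_min, y_max = roi
    kept = []
    for seg in segments:
        (x1, y1), (x2, y2), clarity = seg
        inside = all(x_min <= x <= x_max and y_min <= y <= y_max
                     for x, y in ((x1, y1), (x2, y2)))
        long_enough = math.hypot(x2 - x1, y2 - y1) >= min_length
        if inside and long_enough and clarity >= min_clarity:
            kept.append(seg)
    return kept
```

Segments surviving this filter proceed to the position prediction and matching stages.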
In an embodiment of the present application, after fusing, for each second vehicle line segment, the current position and the predicted position of the second vehicle line segment to obtain the fusion position, the method may further include:
adjusting all the second vehicle line segments according to a preset adjustment rule to obtain the adjusted second vehicle line segments; the preset adjustment rule includes one or more of line segment deletion, line segment merging and line segment extension.
The parking space identification method provided by the embodiment of the application may also adjust and correct the second vehicle line segments, so as to further improve the accuracy of the parking space identification result. Specifically, after the fusion positions of the second vehicle line segments are obtained, each second vehicle line segment may be adjusted and corrected according to a preset adjustment rule, where the preset adjustment rule may include, but is not limited to, processing rules such as line segment deletion, line segment merging and line segment extension. For example, a second vehicle line segment more than a certain distance (e.g., 15 meters) from the vehicle rear-axle center or at more than a certain angle (e.g., 175 degrees) to the current vehicle may be deleted; second vehicle line segments in a parallel relationship that are closer to each other than a certain distance (e.g., 0.5 meters) may be merged; and second vehicle line segments that would intersect within a certain distance (e.g., 0.5 meters) may be extended so as to form the intersection; and so on.
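The deletion and merge rules from the example thresholds above can be expressed as simple predicates. The function names and the scalar inputs (precomputed distance and angle) are assumptions; only the 15 m, 175-degree and 0.5 m figures come from the example in the text.

```python
def keep_after_adjustment(dist_to_rear_axle_m: float,
                          angle_to_vehicle_deg: float,
                          max_dist: float = 15.0,
                          max_angle: float = 175.0) -> bool:
    """Deletion rule: drop a second vehicle line segment farther than 15 m
    from the rear-axle center or at more than 175 degrees to the vehicle."""
    return dist_to_rear_axle_m <= max_dist and abs(angle_to_vehicle_deg) <= max_angle

def should_merge(parallel_gap_m: float, max_gap: float = 0.5) -> bool:
    """Merge rule: two mutually parallel second vehicle line segments whose
    gap is below 0.5 m are combined into one."""
    return parallel_gap_m < max_gap
```

The extension rule would similarly test whether two segments' supporting lines intersect within 0.5 m of an endpoint before extending them to the intersection.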
In an embodiment of the present application, after generating the parking spaces according to the fusion position of each second vehicle line segment, the method may further include:
performing obstacle recognition on each parking space to obtain the parkable parking spaces without obstacles; wherein the obstacle recognition comprises ultrasonic recognition and/or visual recognition of obstacles;
outputting each parkable parking space.
It will be appreciated that some identified parking spaces may be impossible to park in because an obstacle (e.g., a pedestrian, an animal, a vehicle) is present in the space. Therefore, after the parking spaces are identified, obstacle recognition may be performed on each of them to obtain the parkable parking spaces that contain no obstacle; of course, the number of parkable parking spaces identified is likewise not unique. The obstacle recognition can be realized by ultrasonic recognition and/or visual recognition technology: visual recognition relies on the image pickup equipment, while ultrasonic recognition relies on ultrasonic probes. Likewise, to ensure the accuracy of the recognition result, a plurality of ultrasonic probes can be arranged on the vehicle, distributed at the head, the tail and the left and right sides, for example two at each of these positions, so that substantially the whole area around the vehicle is covered.
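The selection of parkable spaces reduces to set subtraction once each detector has flagged the occupied spaces. The id-based interface below is an assumption used purely for illustration; the text does not specify how spaces or detections are represented.

```python
def parkable_spaces(space_ids, ultrasonic_obstacles, visual_obstacles):
    """Keep only the parking spaces in which neither the ultrasonic probes
    nor the visual pipeline detected an obstacle."""
    blocked = set(ultrasonic_obstacles) | set(visual_obstacles)
    return [s for s in space_ids if s not in blocked]
```

Using either detector alone corresponds to passing an empty set for the other, matching the "and/or" in the text.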
In an embodiment of the present application, after outputting each parking space, the method may further include:
determining a target parking space according to the selection instruction;
determining a parking route according to the current pose of the vehicle and a target parking space;
and controlling the vehicle to drive into the target parking space according to the parking route.
As described above, the number of parkable parking spaces among all the identified parking spaces may not be unique. In order to achieve automatic parking, the user therefore selects a target parkable parking space from all the parkable parking spaces; after the user confirms the target parkable parking space, the parking route of the vehicle can be planned in combination with the current pose of the vehicle, and the vehicle is then controlled to park into the target parkable parking space along the planned route. Of course, multiple parking routes may be generated to provide references for the parking process and enable more accurate automatic parking. The user's selection of the target parkable parking space can be made, for example, through a visual large screen arranged in the vehicle.
In an embodiment of the present application, the controlling the vehicle to drive into the target parkable parking space according to the parking route may include:
Acquiring distance information between a target parking space and a vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
It can be understood that the precision of a visually identified parking space line segment is related to the position of the segment relative to the vehicle: the closer the segment is to the camera, the higher the precision. In addition, a certain motion error accumulates between the start of parking and the moment the vehicle stops, and obstacles may suddenly appear in the field of view during parking. Based on this, in order to further ensure the accuracy of the parking result, the target parkable parking space and the parking route can be corrected in real time during the parking process.
Specifically, the distance information between the target parkable parking space and the vehicle, including but not limited to the distance length and the relative pose angle, can be acquired in real time, and the target parkable parking space is then corrected according to this distance information to obtain the real-time corrected parking space. Meanwhile, the ultrasonic probes detect path obstacles in real time to obtain ultrasonic sensing signals, so that the parking route can be corrected in real time using the real-time corrected parking space and the ultrasonic sensing signals, yielding the real-time corrected route and realizing automatic parking of the vehicle. It should be noted that the above process is performed in real time throughout the parking process.
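One real-time correction step for the target pose can be sketched as a simple blend of the stored pose with the freshly measured one. The (x, y, yaw) tuple representation and the constant blending gain are assumptions for illustration; the text only requires that the target be corrected continuously from the incoming distance measurements.

```python
def correct_target_pose(stored_pose, measured_pose, gain=0.3):
    """Blend the stored target parkable-space pose (x, y, yaw) with a pose
    freshly derived from the measured distance length and relative pose
    angle. The gain is an assumed tuning value; in practice it could grow
    as the vehicle approaches, since near-range measurements are more
    precise."""
    return tuple(s + gain * (m - s) for s, m in zip(stored_pose, measured_pose))
```

Calling this each control cycle, together with re-planning around ultrasonic obstacle hits, yields the real-time corrected parking space and route described above.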
The embodiment of the application provides a parking space recognition system.
The parking space recognition system provided by the embodiment of the application may include:
an image pickup apparatus for acquiring a vehicle environment image;
and the controller is used for executing the steps of any parking space identification method according to the vehicle environment image.
In an embodiment of the present application, the image capturing apparatus may be a super-fisheye lens, mounted at the head, the tail, and the left and right mirror positions of the vehicle.
In one embodiment of the present application, the parking space identification system may further include:
an ultrasonic probe for acquiring an ultrasonic detection signal about an obstacle;
the ultrasonic probes are arranged at the head, the tail, and the left and right sides of the vehicle.
For an introduction to the system provided in the embodiment of the present application, reference is made to the above method embodiment; details are not repeated herein.
The embodiment of the application provides a parking space recognition device.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a parking space recognition device provided in the present application, where the parking space recognition device may include:
an identification module 1, configured to acquire a vehicle environment image and identify a first vehicle line segment in the vehicle environment image;
a determining module 2, configured to determine, for each first vehicle-position line segment, an initial position of the first vehicle-position line segment at the identification time and a current position of the first vehicle-position line segment at the current time, so as to obtain a change route from the identification time to the current time;
The prediction module 3 is used for carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current moment;
a reserving module 4, configured to reserve, among all the first vehicle line segments, a first vehicle line segment whose current position matches with the predicted position, to obtain a second vehicle line segment;
the fusion module 5 is used for fusing the current position and the predicted position of each second vehicle line segment to obtain a fused position;
and the generating module 6 is used for generating a parking space according to the fusion position of each second vehicle line segment.
Therefore, after obtaining the parking space line segments by recognition from the vehicle environment image, the parking space recognition device provided by the embodiment of the application performs position prediction based on information such as the initial position, the current position and the change route of each segment to obtain a predicted position, and then assesses the accuracy of the recognition through the matching result between the current position and the predicted position, so that only the segments that pass the matching are retained. The retained segments are further refined by fusing the predicted position with the current position, and the parking spaces are generated from them. Obviously, because the parking spaces are generated automatically from segments that have been screened and fused, the accuracy of the parking space identification result can be effectively guaranteed, which in turn ensures that the vehicle can be parked into the space in a more accurate posture.
In one embodiment of the present application, the above-mentioned fusion module 5 may be specifically configured to determine a first line segment parameter at the current position and a second line segment parameter at the predicted position of the second vehicle line segment, respectively; the line segment parameters comprise the angle between the second vehicle line segment and the vehicle rear-axle center-line coordinate system, the distance between the second vehicle line segment and the vehicle rear-axle center point, and the length of the second vehicle line segment; smooth the first line segment parameter and the second line segment parameter according to preset weights to obtain a fused line segment parameter; and determine the fusion position of the second vehicle line segment according to the fused line segment parameter.
In an embodiment of the present application, the parking space identification device may further include a matching statistics module, configured to, before reserving among all the first vehicle line segments those whose current position matches the predicted position to obtain the second vehicle line segments: acquire the number of matching passes of the first vehicle line segment; judge whether the number of matching passes reaches a preset number of times; and if yes, execute the step of reserving, among all the first vehicle line segments, the first vehicle line segments whose current position matches the predicted position, to obtain the second vehicle line segments.
In an embodiment of the present application, the parking space identification device may further include a screening module, configured to, before determining for each first vehicle line segment the initial position at the identification time and the current position at the current time so as to obtain the change route from the identification time to the current time, screen all the first vehicle line segments according to a preset screening index to obtain the screened first vehicle line segments; the preset screening index includes one or more of line segment length, line segment clarity, line segment distance and region of interest.
In an embodiment of the present application, the parking space identification device may further include an adjusting module, configured to, after fusing for each second vehicle line segment the current position and the predicted position to obtain the fusion position, adjust all the second vehicle line segments according to a preset adjustment rule to obtain the adjusted second vehicle line segments; the preset adjustment rule includes one or more of line segment deletion, line segment merging and line segment extension.
In an embodiment of the present application, the parking space recognition device may further include an obstacle recognition module, configured to, after the parking spaces are generated according to the fusion position of each second vehicle line segment, perform obstacle recognition on each parking space to obtain the parkable parking spaces without obstacles, wherein the obstacle recognition comprises ultrasonic recognition and/or visual recognition of obstacles; and to output each parkable parking space.
In an embodiment of the present application, the parking space identification device may further include a parking module, configured to, after each parkable parking space is output, determine a target parkable parking space according to the selection instruction; determine a parking route according to the current pose of the vehicle and the target parkable parking space; and control the vehicle to drive into the target parkable parking space according to the parking route.
In an embodiment of the present application, the parking module may be specifically configured to obtain, in real time, distance information between a target parking space and a vehicle; correcting the target parking space according to the distance information to obtain a real-time corrected parking space; acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time; correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route; and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
For the description of the apparatus provided in the embodiment of the present application, reference is made to the above method embodiment, and the description is omitted herein.
The embodiment of the application provides electronic equipment.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in the present application, where the electronic device may include:
a memory for storing a computer program;
and the processor is used for realizing the steps of any parking space identification method when executing the computer program.
As shown in fig. 3, which is a schematic diagram of a composition structure of an electronic device, the electronic device may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all complete communication with each other through a communication bus 13.
In the present embodiment, the processor 10 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), another programmable logic device, or the like.
The processor 10 may call a program stored in the memory 11, and in particular, the processor 10 may perform operations in an embodiment of the parking space recognition method.
The memory 11 is used for storing one or more programs, and the programs may include program codes, where the program codes include computer operation instructions, and in this embodiment, at least the programs for implementing the following functions are stored in the memory 11:
acquiring a vehicle environment image and identifying a first vehicle line segment in the vehicle environment image;
for each first vehicle position line segment, determining the initial position of the first vehicle position line segment at the identification time and the current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time;
position prediction is carried out according to the initial position and the change route, and a predicted position at the current moment is obtained;
reserving a first vehicle line segment with the current position matched with the predicted position in all the first vehicle line segments to obtain a second vehicle line segment;
For each second vehicle line segment, fusing the current position and the predicted position of the second vehicle line segment to obtain a fused position;
and generating a parking space according to the fusion position of each second vehicle line segment.
In one possible implementation, the memory 11 may include a storage program area and a storage data area, where the storage program area may store an operating system, and at least one application program required for functions, etc.; the storage data area may store data created during use.
In addition, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for interfacing with other devices or systems.
Of course, it should be noted that the structure shown in fig. 3 is not limited to the electronic device in the embodiment of the present application, and the electronic device may include more or fewer components than those shown in fig. 3 or may combine some components in practical applications.
Embodiments of the present application provide a computer-readable storage medium.
The computer readable storage medium provided in the embodiments of the present application stores a computer program, where the computer program when executed by a processor may implement the steps of any one of the parking space recognition methods described above.
The computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
For the introduction of the computer readable storage medium provided in the embodiments of the present application, reference is made to the above method embodiments, and the description is omitted herein.
In this description, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for the identical or similar parts among the embodiments, reference may be made between them. Since the device disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The technical solutions provided by the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core ideas. It should be noted that those of ordinary skill in the art can make various improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the scope of protection of the present application.

Claims (14)

1. A parking space recognition method, characterized by comprising the following steps:
acquiring a vehicle environment image and identifying a first vehicle line segment in the vehicle environment image;
for each first vehicle position line segment, determining an initial position of the first vehicle position line segment at the identification time and a current position of the first vehicle position line segment at the current time to obtain a change route from the identification time to the current time;
Position prediction is carried out according to the initial position and the change route, and a predicted position of the current moment is obtained;
reserving a first vehicle position line segment of which the current position is matched with the predicted position in all the first vehicle position line segments to obtain a second vehicle position line segment;
for each second vehicle line segment, fusing the current position and the predicted position of the second vehicle line segment to obtain a fused position;
and generating a parking space according to the fusion position of each second vehicle space line segment.
2. The parking space recognition method according to claim 1, wherein for each of the second vehicle line segments, fusing the current position and the predicted position of the second vehicle line segment to obtain a fused position comprises:
determining a first line segment parameter at the current position and a second line segment parameter at the predicted position, respectively, for the second vehicle line segment; wherein the line segment parameters comprise an angle between the second vehicle line segment and a vehicle rear-axle center-line coordinate system, a distance between the second vehicle line segment and the vehicle rear-axle center point, and a line segment length of the second vehicle line segment;
Smoothing the first line segment parameter and the second line segment parameter according to preset weights to obtain a fused line segment parameter;
and determining the fusion position of the second vehicle line segment according to the fusion line segment parameters.
3. The parking space recognition method according to claim 1, wherein the step of reserving the first vehicle position line segment with the current position matched with the predicted position in all the first vehicle position line segments, before obtaining the second vehicle position line segment, further comprises:
acquiring the matching passing times of the first vehicle position line segment;
judging whether the matching passing times reach preset times or not;
if yes, executing the step of reserving, among all the first vehicle line segments, the first vehicle line segments whose current position matches the predicted position, to obtain the second vehicle line segments.
4. The parking space recognition method according to claim 1, wherein before determining, for each of the first vehicle line segments, the initial position of the first vehicle line segment at the identification time and the current position at the current time to obtain the change route from the identification time to the current time, the method further comprises:
screening all the first vehicle line segments according to a preset screening index to obtain the screened first vehicle line segments; the preset screening index comprises one or more of line segment length, line segment clarity, line segment distance and region of interest.
5. The parking space recognition method according to claim 1, wherein after fusing, for each of the second vehicle line segments, the current position and the predicted position of the second vehicle line segment to obtain the fused position, the method further comprises:
adjusting all the second vehicle line segments according to a preset adjustment rule to obtain adjusted second vehicle line segments; the preset adjustment rules comprise one or more of line segment deletion, line segment merging and line segment extension.
6. The parking space recognition method according to claim 1, wherein after generating the parking space according to the fusion position of each of the second vehicle line segments, further comprises:
identifying obstacles for each parking space to obtain a parking space without the obstacles; wherein the obstacle identification comprises an ultrasonic identification and/or a visual identification of the obstacle;
and outputting each parking space capable of being parked.
7. The method of claim 6, wherein after outputting each of the parkable parking spaces, further comprising:
determining a target parking space according to the selection instruction;
determining a parking route according to the current pose of the vehicle and the target parking space;
And controlling the vehicle to drive into the target parking space according to the parking route.
8. The parking space recognition method according to claim 7, wherein the control vehicle driving into the target parkable parking space in accordance with the parking route includes:
acquiring distance information between the target parking space and the vehicle in real time;
correcting the target parking space according to the distance information to obtain a real-time corrected parking space;
acquiring ultrasonic sensing signals about path obstacles in the running process of the vehicle in real time;
correcting the parking route according to the real-time corrected parking space and the ultrasonic sensing signal to obtain a real-time corrected route;
and controlling the vehicle to drive into the real-time correction parking space according to the real-time correction route.
9. A parking space recognition system, comprising:
an image pickup apparatus for acquiring a vehicle environment image;
a controller for executing the steps of the parking space recognition method according to any one of claims 1 to 8 based on the vehicle environment image.
10. The parking space recognition system according to claim 9, wherein the image pickup device is a super-fisheye lens mounted at the head, the tail, and the left and right mirror positions of the vehicle.
11. The parking space identification system of claim 9, further comprising:
an ultrasonic probe for acquiring an ultrasonic detection signal about an obstacle;
the ultrasonic probes are arranged at the vehicle head, the vehicle tail, and the left and right sides of the vehicle.
12. A parking space recognition device, comprising:
the vehicle environment recognition module is used for acquiring a vehicle environment image and identifying a first vehicle position line segment in the vehicle environment image;
the determining module is used for determining, for each first vehicle position line segment, an initial position of the first vehicle position line segment at the identification time and a current position of the first vehicle position line segment at the current time, so as to obtain a change route from the identification time to the current time;
the prediction module is used for carrying out position prediction according to the initial position and the change route to obtain a predicted position at the current time;
the reservation module is used for retaining, among all the first vehicle position line segments, each first vehicle position line segment whose current position matches the predicted position, so as to obtain a second vehicle position line segment;
the fusion module is used for fusing the current position and the predicted position of each second vehicle position line segment to obtain a fusion position;
and the generating module is used for generating a parking space according to the fusion position of each second vehicle position line segment.
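The predict-match-fuse pipeline that the determining, prediction, reservation, and fusion modules describe might be read as follows. This is only an illustrative sketch: it assumes the change route reduces to a planar displacement `(dx, dy)`, that matching is tested by endpoint distance, and that fusion is a weighted average; the class, function names, and threshold are all assumptions:

```python
from dataclasses import dataclass
import math

@dataclass
class Segment:
    x1: float; y1: float; x2: float; y2: float

def predict(initial: Segment, dx: float, dy: float) -> Segment:
    """Shift the segment seen at identification time by the vehicle's
    motion (the 'change route' up to the current time)."""
    return Segment(initial.x1 + dx, initial.y1 + dy,
                   initial.x2 + dx, initial.y2 + dy)

def endpoint_error(a: Segment, b: Segment) -> float:
    """Worst-case endpoint distance between two segments."""
    return max(math.hypot(a.x1 - b.x1, a.y1 - b.y1),
               math.hypot(a.x2 - b.x2, a.y2 - b.y2))

def fuse(current: Segment, predicted: Segment, w: float = 0.5) -> Segment:
    """Weighted average of the current and predicted endpoints."""
    return Segment(w * current.x1 + (1 - w) * predicted.x1,
                   w * current.y1 + (1 - w) * predicted.y1,
                   w * current.x2 + (1 - w) * predicted.x2,
                   w * current.y2 + (1 - w) * predicted.y2)

def filter_and_fuse(tracks, threshold=0.3):
    """Keep segments whose detection matches the prediction, fuse them.
    Each track is (initial_segment, current_segment, (dx, dy))."""
    fused = []
    for initial, current, (dx, dy) in tracks:
        pred = predict(initial, dx, dy)
        if endpoint_error(current, pred) <= threshold:
            fused.append(fuse(current, pred))
    return fused
```

Detections that disagree with the odometry-based prediction by more than the threshold are dropped, which is one plausible way to realize the "retain matching segments" step before fusion.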
13. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the parking space identification method according to any one of claims 1 to 8 when executing the computer program.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the parking space identification method according to any one of claims 1 to 8.
CN202310235372.1A 2023-03-13 2023-03-13 Parking space identification method, system, device, electronic equipment and readable storage medium Active CN116524457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310235372.1A CN116524457B (en) 2023-03-13 2023-03-13 Parking space identification method, system, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN116524457A true CN116524457A (en) 2023-08-01
CN116524457B CN116524457B (en) 2023-09-05

Family

ID=87405377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310235372.1A Active CN116524457B (en) 2023-03-13 2023-03-13 Parking space identification method, system, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116524457B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118097999A (en) * 2024-04-29 2024-05-28 知行汽车科技(苏州)股份有限公司 Parking space identification method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7142157B2 (en) * 2004-09-14 2006-11-28 Sirf Technology, Inc. Determining position without use of broadcast ephemeris information
CN114511632A (en) * 2022-01-10 2022-05-17 北京经纬恒润科技股份有限公司 Construction method and device of parking space map

Also Published As

Publication number Publication date
CN116524457B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN109813328B (en) Driving path planning method and device and vehicle
CN109800658B (en) Parking space type online identification and positioning system and method based on neural network
CN112417926B (en) Parking space identification method and device, computer equipment and readable storage medium
CN111516673B (en) Lane line fusion system and method based on intelligent camera and high-precision map positioning
CN111862157A (en) Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN116524457B (en) Parking space identification method, system, device, electronic equipment and readable storage medium
US11249174B1 (en) Automatic calibration method and system for spatial position of laser radar and camera sensor
JP2022073894A (en) Driving scene classification method, device, apparatus, and readable storage medium
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
CN111376895A (en) Around-looking parking sensing method and device, automatic parking system and vehicle
CN112753038B (en) Method and device for identifying lane change trend of vehicle
CN112000226B (en) Human eye sight estimation method, device and sight estimation system
CN108107897B (en) Real-time sensor control method and device
US20210237737A1 (en) Method for Determining a Lane Change Indication of a Vehicle
CN115861186A (en) Electric power tower detection method and device based on deep learning and unmanned aerial vehicle equipment
CN109886198B (en) Information processing method, device and storage medium
CN113534805B (en) Robot recharging control method, device and storage medium
CN114475593A (en) Travel track prediction method, vehicle, and computer-readable storage medium
CN113954836B (en) Sectional navigation channel changing method and system, computer equipment and storage medium thereof
CN116534059B (en) Adaptive perception path decision method, device, computer equipment and storage medium
CN108363387B (en) Sensor control method and device
EP4386324A1 (en) Method and apparatus for identifying road information, electronic device, vehicle, and medium
CN114863096B (en) Semantic map construction and positioning method and device for indoor parking lot
CN111857113B (en) Positioning method and positioning device for movable equipment
CN113942503A (en) Lane keeping method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant