WO2022266854A1 - Parking space detection method and device - Google Patents


Info

Publication number
WO2022266854A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
information
coordinates
space point
parking
Prior art date
Application number
PCT/CN2021/101613
Other languages
English (en)
French (fr)
Inventor
Bai Yucai
Ren Jianle
Zhang Qiang
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/101613
Publication of WO2022266854A1


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas

Definitions

  • the present application relates to the technical field of intelligent driving, in particular to a parking space detection method and device.
  • Automatic parking refers to automatically controlling the vehicle to park in a parking space without manual control, thereby reducing the user's parking operations.
  • parking space detection is required.
  • the parking space detection includes searching for a parking space within the sensing range (that is, the coverage area of the sensor), and continuously providing the distance information of a specific parking space during the parking process.
  • the existing parking space detection scheme mainly determines the parking space by its four parking space points (also called corner points).
  • the present application provides a parking space detection method and device, which are used to improve the accuracy of the detected parking space information as much as possible.
  • the present application provides a parking space detection method, which may include: acquiring image information of the parking space; determining the first parking space point information according to the image information of the parking space; determining the first parking space frame information according to the image information of the parking space; and then determining the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the method can be executed by a processor on the ego vehicle, or by a cloud server or the like.
  • the parking space information is determined from both the first parking space point information and the first parking space frame information, so that the two different types of feature information complement each other, and more accurate parking space information can be obtained from local to global.
  • the acquired image information of the parking space is fed into two branches, one of which is used to determine the first parking space point information, and the other of which is used to determine the first parking space frame information.
  • in addition, a relatively generalized network structure (or network model) can be adopted, which reduces the demand for the amount of original data at the same detection accuracy.
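The two-branch flow described above can be sketched as follows; all function names, return fields, and the dummy detections are illustrative assumptions, not details from the disclosure:

```python
def point_branch(image):
    # Hypothetical branch 1: predicts parking space point (corner) candidates
    # from the image information; fixed dummy output for illustration.
    return [{"xy": (0.0, 0.0), "conf": 0.9}, {"xy": (2.0, 0.0), "conf": 0.8}]

def frame_branch(image):
    # Hypothetical branch 2: predicts parking space frames
    # (center coordinates, size, inclination angle).
    return [{"center": (1.0, 2.5), "size": (2.0, 5.0), "angle": 90.0}]

def detect_parking_space(image):
    # The same image information feeds both branches; their outputs are
    # then fused into the final parking space information.
    points = point_branch(image)
    frames = frame_branch(image)
    return {"points": points, "frames": frames}

result = detect_parking_space(image=None)
print(len(result["points"]), len(result["frames"]))  # 2 1
```

Because both branches share one input, one branch can still produce usable output when the other fails, which is the complementarity the description relies on.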
  • the parking space information may be parking space frame information.
  • the parking space information of the parking space includes but is not limited to the coordinates of the parking space points, the center coordinates of the parking space frame, the size (such as length and width) of the parking space frame, the inclination angle of the parking space frame, and the like.
  • n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame can be determined according to the first parking space point information and the first parking space frame information, where n and m are both positive integers; the parking space information can then be determined according to the n pieces of first parking space point information and the m pieces of second parking space point information.
  • the parking space information of a single parking space can be obtained based on one parking space frame, and the parking space information corresponding to multiple parking spaces can be further obtained according to the parking space detection method provided in the present application, for example, the parking space information of a parking lot.
  • the first parking space point information includes the coordinates of the first parking space point
  • the second parking space point information includes the coordinates of the second parking space point
  • the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, the parking space information is determined according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k ≤ 2, the parking space information is determined according to the n pieces of first parking space point information.
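The k-based selection rule above can be sketched as follows; the coincidence test (an absolute coordinate tolerance) and all names are assumptions, since the text only says the coordinates "coincide":

```python
def count_coincident(first_pts, second_pts, tol=0.5):
    """Count second points that coincide with some first point (within tol)."""
    k = 0
    for sx, sy in second_pts:
        if any(abs(sx - fx) <= tol and abs(sy - fy) <= tol for fx, fy in first_pts):
            k += 1
    return k

def select_source(first_pts, second_pts, tol=0.5):
    k = count_coincident(first_pts, second_pts, tol)
    # k >= 3: enough agreement, use the second (and/or first) point information;
    # k <= 2: fall back to the first parking space point information only.
    return "second_and_or_first" if k >= 3 else "first_only"

first = [(0, 0), (2, 0), (2, 5), (0, 5)]
second = [(0.1, 0.1), (2.0, 0.2), (2.1, 5.0), (9, 9)]
print(select_source(first, second))  # second_and_or_first (k = 3)
```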
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points. Based on this, four situations for determining the parking space information are shown below by way of example.
  • the parking space information is obtained by weighting the coordinates of the first parking space point and the corresponding second parking space point coordinates, which helps to improve the accuracy of the parking space information.
  • the coordinates of the weighted-average parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
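Written out as code, the weighted-average rule above looks like this (note the divisor is 2, not the sum of the confidences, following the formula as stated; the argument layout is an assumption):

```python
def fuse_point(p1, c1, p2, c2):
    # Per-coordinate weighted average of a first parking space point p1
    # (confidence c1) and its corresponding second parking space point p2
    # (confidence c2), divided by 2 as in the stated formula.
    return tuple((a * c1 + b * c2) / 2 for a, b in zip(p1, p2))

# Example: first point (2, 2) with confidence 0.8, second point (4, 4)
# with confidence 0.6.
print(fuse_point((2.0, 2.0), 0.8, (4.0, 4.0), 0.6))  # (2.0, 2.0)
```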
  • determining the parking space information by using the first parking space point information with higher accuracy helps to improve the accuracy of the parking space information.
  • Case 3: if k ≤ 2, it is determined that at most two of the coordinates of the n first parking space points coincide with the coordinates of the m second parking space points, or that no coinciding parking space point coordinates exist.
  • if the mean value of the first confidence levels is greater than the threshold value, the parking space information is determined according to the n pieces of first parking space point information.
  • the parking space information of the parking space can thus be determined based on the parking space point information with the higher confidence level, thereby improving the accuracy of the parking space information.
  • the parking space frame transition information may be determined according to the image information of the parking space
  • the first parking space frame information may be determined according to the parking space frame transition information and the first parking space point information.
  • the first parking space frame information is determined in combination with the obtained first parking space point information; that is, the first parking space point information is used as part of the information for determining the first parking space frame information, thereby improving the accuracy of the first parking space frame information.
  • the third parking space point information can be determined according to the first parking space frame information; if it is determined that first parking space point information exists within the preset range of the third parking space point information, the third parking space point information is replaced with the first parking space point information to obtain the second parking space point information.
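A sketch of the replacement step above: third parking space points are derived from the first parking space frame, and any detected first parking space point lying within a preset range of a third point replaces it. The Euclidean-distance threshold is an assumed implementation detail; the patent only says "preset range":

```python
import math

def refine_points(third_pts, first_pts, preset_range=0.5):
    second_pts = []
    for tp in third_pts:
        replacement = tp
        for fp in first_pts:
            if math.dist(tp, fp) <= preset_range:
                replacement = fp  # use the detected first point instead
                break
        second_pts.append(replacement)
    return second_pts

third = [(0.0, 0.0), (2.0, 0.0)]  # corners implied by the frame
first = [(0.1, 0.1)]              # detected near the first corner only
print(refine_points(third, first))  # [(0.1, 0.1), (2.0, 0.0)]
```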
  • the first parking space frame information includes the center coordinates of the first parking space frame; the second parking space frame information is determined according to the first parking space point information, wherein the second parking space frame information includes the center coordinates of the second parking space frame; and the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame are determined according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame.
  • the first parking space point information and the second parking space point information belonging to the same parking space are determined according to the distance between the center of the second parking space frame and the center of the first parking space frame.
  • the determination process is relatively simple and the accuracy is high.
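A sketch of the association step above: a second frame center is computed from the first parking space points, and points and frames whose centers are close are taken to belong to the same parking space frame. The pairing threshold is an assumption:

```python
import math

def center_of(points):
    # Center of a group of parking space points (mean of coordinates).
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def same_frame(point_group, frame_center, threshold=1.0):
    # Associate by the distance between the two frame centers.
    return math.dist(center_of(point_group), frame_center) <= threshold

corners = [(0, 0), (2, 0), (2, 5), (0, 5)]  # first parking space points
frame_center = (1.0, 2.4)                   # center of the first frame
print(same_frame(corners, frame_center))    # True (centers ~0.1 apart)
```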
  • the image information of the parking space may be image information of a panoramic image of the parking space.
  • the image information described in this application may be information contained in images or videos.
  • the generalized model can be used to extract the image information of the panoramic image, so that the demand for the amount of original data can be reduced under the same detection accuracy.
  • the present application provides a parking space detection method, the method comprising: acquiring image information of the parking space; determining the first parking space point information according to the image information of the parking space; determining the first parking space frame information according to the image information of the parking space and the first parking space point information; and determining the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the accuracy of the first parking space frame information can be improved, and thus the accuracy of the parking space information can be improved.
  • the parking space information of the parking space is determined by the first parking space point information and the first parking space frame information, so even if one type of feature information is unavailable (for example, a parking space point is occluded, or the parking space frame is not completely captured, etc.), the parking space information can still be determined. In other words, when a certain type of feature information fails, the parking space information of the parking space can still be identified based on this scheme, thereby improving the recall rate of the parking space detection results.
  • the parking space information may be parking space frame information.
  • the present application provides a detection device, which can be used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may be a vehicle or a cloud server, or a module used in the cloud server or the vehicle, such as a chip or a chip system or a circuit.
  • the detection device may include: a processor.
  • the processor may be configured to support the detection device to perform the corresponding functions shown in the first aspect above.
  • the detection device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection device.
  • the processor is configured to acquire the image information of the parking space, determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information may be parking space frame information.
  • the processor is specifically configured to: determine n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame according to the first parking space point information and the first parking space frame information, where both n and m are positive integers; and determine the parking space information according to the n pieces of first parking space point information and the m pieces of second parking space point information.
  • the first parking space point information includes the coordinates of the first parking space point; the second parking space point information includes the coordinates of the second parking space point.
  • the processor is specifically configured to: determine that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, determine the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; and when k ≤ 2, determine the parking space information according to the n pieces of first parking space point information.
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the weighted-average parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
  • the processor is specifically configured to: determine the parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
  • the processor is further configured to: determine the third parking space point information according to the first parking space frame information; if it is determined that the first parking space point information exists within the preset range of the third parking space point information, use The first parking space point information replaces the third parking space point information to obtain the second parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame; the processor is specifically configured to: determine the second parking space frame information according to the first parking space point information, the second parking space frame information including the center coordinates of the second parking space frame; and determine, according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
  • the image information of the parking space includes image information of a panoramic image of the parking space.
  • the present application provides a detection device, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may include an acquisition module and a processing module, wherein the acquisition module is used to acquire the image information of the parking space, and the processing module is used to: determine the first parking space point information according to the image information of the parking space; determine the first parking space frame information according to the image information of the parking space; and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information may be parking space frame information.
  • the processing module is specifically configured to: determine n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame according to the first parking space point information and the first parking space frame information, where both n and m are positive integers; and determine the parking space information according to the n pieces of first parking space point information and the m pieces of second parking space point information.
  • the first parking space point information includes the coordinates of the first parking space point; the second parking space point information includes the coordinates of the second parking space point.
  • the processing module is specifically configured to: determine that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, determine the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; and when k ≤ 2, determine the parking space information according to the n pieces of first parking space point information.
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the weighted-average parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
  • the processing module is specifically configured to: determine the parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
  • the processing module is further configured to: determine the third parking space point information according to the first parking space frame information; if it is determined that the first parking space point information exists within the preset range of the third parking space point information, use The first parking space point information replaces the third parking space point information to obtain the second parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame; the processing module is specifically configured to: determine the second parking space frame information according to the first parking space point information, the second parking space frame information including the center coordinates of the second parking space frame; and determine, according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
  • the image information of the parking space may be image information of a panoramic image of the parking space.
  • the present application provides a detection device, which can be used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may be a vehicle or a cloud server, or a module used in the cloud server or the vehicle, such as a chip or a chip system or a circuit.
  • the detection device may include: a processor.
  • the processor may be configured to support the detection device to perform the corresponding functions in the second aspect above.
  • the detection device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection device.
  • the processor is configured to: acquire the image information of the parking space; determine the first parking space point information according to the image information of the parking space; determine the first parking space frame information according to the image information of the parking space and the first parking space point information; and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the present application provides a detection device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may include an acquisition module and a processing module, wherein the acquisition module is used to acquire the image information of the parking space, and the processing module is used to: determine the first parking space point information according to the image information of the parking space; determine the first parking space frame information according to the image information of the parking space and the first parking space point information; and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the present application provides a computer-readable storage medium, in which a computer program or instructions are stored; when the computer program or instructions are executed by the detection device, the detection device is caused to perform the method in the above first aspect or any possible implementation manner of the first aspect, or to perform the method in the above second aspect or any possible implementation manner of the second aspect.
  • the present application provides a computer program product, the computer program product including a computer program or instructions; when the computer program or instructions are executed by the detection device, the detection device is caused to perform the method in the above first aspect or any possible implementation manner of the first aspect.
  • Figure 1a is a schematic diagram of a parking space frame provided by the present application.
  • Figure 1b is a schematic diagram of a parking space inclination angle provided by the present application.
  • Figure 2a is a schematic diagram of a system architecture provided by the present application.
  • Figure 2b is a schematic diagram of another system architecture provided by the present application.
  • Figure 2c is a schematic diagram of the positional relationship between a vehicle and a sensor provided by the present application.
  • Fig. 3 is a schematic flow chart of a parking space detection method provided by the present application.
  • Fig. 4a is a schematic flowchart of a method for obtaining a panoramic image of a parking space provided by the present application
  • Fig. 4b is a schematic diagram of images obtained from four angles provided by the present application.
  • FIG. 5 is a schematic flow chart of a method for obtaining second parking space point information provided by the present application
  • FIG. 6 is a schematic flowchart of a method for determining that a first parking space point and a second parking space point belong to the same parking space frame provided by the present application;
  • FIG. 7 is a schematic flowchart of a method for determining the parking space information of a parking space based on n first parking space point information and m second parking space point information provided by the present application;
  • Fig. 8a is a schematic flowchart of a method for determining the parking space information of a parking space based on n first parking space point information and m second parking space point information provided by the present application;
  • Fig. 8b is a schematic flow chart of another method for determining the parking space information of a parking space based on n first parking space point information and m second parking space point information provided by the present application;
  • FIG. 9 is a schematic flow chart of another parking space detection method provided by the present application.
  • FIG. 10 is a schematic flow chart of another parking space detection method provided by the present application.
  • FIG. 11 is a schematic structural diagram of a parking space detection device provided by the present application.
  • FIG. 12 is a schematic structural diagram of a parking space detection device provided by the present application.
  • FIG. 1a is a schematic diagram of a parking space frame provided by the present application.
  • P1P2 is called the entrance line
  • P1P4 and P2P3 can both be called the dividing line
  • the intersection point of the entrance line and the dividing line can be called the entrance parking space point (or called the parking mark point); that is, P1 and P2 in Figure 1a are called entrance parking space points
  • P3 and P4 can be called non-entrance parking space points.
  • P1, P2, P3 and P4 can be collectively referred to as parking space points
  • parking space points can also be understood as corner points at each corner of the parking space frame, therefore, parking space points can also be called parking space corner points.
  • the dividing line is perpendicular to the entrance line
  • the parking spaces whose dividing line is perpendicular to the entrance line can be called vertical parking spaces or parallel parking spaces.
  • the inclination angle of the parking space (or the inclination angle of the dividing line, or the inclination angle of the parking space frame) refers to the angle between the dividing line and the x-axis of the image.
  • the angle between the dividing line of the parking space frame and the x-axis of the image is the inclination angle, as shown in Figure 1b.
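The inclination angle above can be computed as the angle between the dividing line (for example P1P4) and the image x-axis; using atan2 is one common way to do this, an implementation choice rather than something specified by the patent:

```python
import math

def inclination_deg(p1, p4):
    # Angle of the dividing line P1P4 relative to the image x-axis, in degrees.
    dx, dy = p4[0] - p1[0], p4[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# A dividing line running from (0, 0) to (1, 1) makes a 45-degree angle
# with the x-axis.
print(inclination_deg((0, 0), (1, 1)))  # 45.0
```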
  • This type of algorithm usually uses a template matching method to determine the inclination angle of the parking space.
  • the distance between the parking space and the ego vehicle refers to the distance from the center of the ego vehicle to the entrance of the parking space.
  • the distance S1 and the distance S2 represent the distance between the parking space and the vehicle.
  • the distance S3 and the distance S4 indicate the distance between the parking space and the vehicle.
  • FIG. 2a is a schematic diagram of an applicable system architecture of the present application.
  • the system may include a vehicle 101 and a vehicle management server 102 .
  • the vehicle 101 refers to a vehicle that has the functions of collecting images of the surrounding environment and remote communication.
  • the vehicle 101 is provided with a sensor 1011, which can collect information about the vehicle's surrounding environment.
  • the sensor may be, for example, an image acquisition device, and the image acquisition device may be, for example, at least one of a fisheye camera, a monocular camera, and a depth camera.
  • the sensors are arranged in four directions of the vehicle: front, rear, left and right (see the positional relationship shown in FIG. 2c).
  • the remote communication function of the vehicle 101 can generally be realized by a communication module provided on the vehicle 101; the communication module includes, for example, a telematics box (TBOX) or a wireless communication system (see the description of the wireless communication system 244 in FIG. 2b), etc.
  • the vehicle management server 102 can realize the function of parking space detection.
  • the vehicle management server 102 is a single server, or may be a server cluster composed of a plurality of servers.
  • the vehicle management server 102 may also be, for example, a cloud server (also called the cloud, a cloud server, a cloud controller, or an Internet of Vehicles server, etc.).
  • "cloud server" is a general term for devices with data processing capabilities, such as physical devices (e.g., hosts or processors), virtual devices (e.g., virtual machines or containers), and chips or integrated circuits.
  • the vehicle management server 102 may integrate all functions on an independent physical device, or may deploy different functions on different physical devices, which is not limited in this application.
  • one vehicle management server 102 can communicate with multiple vehicles 101 .
  • the number of vehicles 101 , vehicle management servers 102 , and sensors 1011 in the system architecture shown in FIG. 2 a is just an example, and the present application does not limit it.
  • the name of the vehicle management server 102 in the system is only an example, and other possible names may be used for specific implementation, for example, it may also be called a parking space detection device, which is not limited in this application.
  • the vehicle 101 in Fig. 2a described above may be the vehicle in Fig. 2b described below.
  • FIG. 2 b is a schematic diagram of another applicable system architecture of the present application.
  • the vehicle may be configured in a fully or partially autonomous driving mode.
  • Components coupled to or included in vehicle 200 may include propulsion system 210 , sensor system 220 , control system 230 , peripherals 240 , power supply 250 , computer system 260 , and user interface 270 .
  • Components of vehicle 200 may be configured to operate interconnected with each other and/or with other components coupled to various systems.
  • power supply 250 may provide power to all components of vehicle 200 .
  • Computer system 260 may be configured to receive data from and control propulsion system 210 , sensor system 220 , control system 230 , and peripherals 240 .
  • Computer system 260 may also be configured to generate a display of images on user interface 270 and to receive input from user interface 270 .
  • vehicle 200 may include more, fewer or different systems, and each system may include more, fewer or different components.
  • illustrated systems and components may be combined or divided in any manner, which is not specifically limited in the present application.
  • Propulsion system 210 may provide powered motion for vehicle 200 . As shown in FIG. 2 b , propulsion system 210 may include engine/motor 214 , energy source 213 , transmission 212 and wheels/tyres 211 . Additionally, propulsion system 210 may additionally or alternatively include other components than those shown in Figure 2b. This application does not specifically limit it.
  • the sensor system 220 may include several sensors for sensing information about the environment in which the vehicle 200 is located. As shown in FIG. 2b, the sensors of the sensor system 220 may include a camera sensor 223. The camera sensor 223 may be used to capture multiple images of the surrounding environment of the vehicle 200, and may be a still camera or a video camera. Further, optionally, the sensor system 220 may also include a global positioning system (GPS) 226, an inertial measurement unit (IMU) 225, a lidar sensor, a millimeter wave radar sensor, and actuators 221 for modifying the position and/or orientation of the sensors, etc.
  • the millimeter wave radar sensor may utilize radio signals to sense objects within the surrounding environment of the vehicle 200 .
  • millimeter wave radar 222 may be used to sense the speed and/or heading of a target in addition to sensing the target.
  • Lidar 224 may utilize laser light to sense objects in the environment in which vehicle 200 is located.
  • GPS 226 may be any sensor used to estimate the geographic location of vehicle 200. To this end, the GPS 226 may include a transceiver to estimate the position of the vehicle 200 relative to the earth based on satellite positioning data.
  • computer system 260 may be used to estimate the road traveled by vehicle 200 using GPS 226 in conjunction with map data.
  • the IMU 225 can be used to sense changes in position and orientation of the vehicle 200 based on inertial accelerations and any combination thereof.
  • the combination of sensors in IMU 225 may include, for example, accelerometers and gyroscopes. Additionally, other combinations of sensors in the IMU 225 are possible.
  • the control system 230 controls the operation of the vehicle 200 and its components.
  • Control system 230 may include various elements, including steering unit 236, throttle 235, braking unit 234, sensor fusion algorithm 233, computer vision system 232, route control system 242, and obstacle avoidance system 237.
  • Steering system 236 is operable to adjust the heading of vehicle 200.
  • the throttle 235 is used to control the operating speed of the engine 214 and thus the speed of the vehicle 200 .
  • Control system 230 may additionally or alternatively include other components than those shown in Figure 2b. This application does not specifically limit it.
  • the braking unit 234 is used to control the deceleration of the vehicle 200 .
  • the braking unit 234 may use friction to slow the wheels 211 .
  • the brake unit 234 can convert the kinetic energy of the wheel 211 into electric current.
  • the braking unit 234 may also take other forms to slow down the rotation of the wheels 211 to control the speed of the vehicle 200 .
  • Computer vision system 232 may be operable to process and analyze images captured by camera sensor 223 in order to identify objects and/or features in the environment surrounding vehicle 200 . Objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computer vision system 232 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • computer vision system 232 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 242 is used to determine the travel route of the vehicle 200 .
  • route control system 242 may combine data from sensor system 220, GPS 226, and one or more predetermined maps to determine a travel route (eg, a parking route) for vehicle 200 .
  • Obstacle avoidance system 237 is used to identify, evaluate and avoid or otherwise overcome potential obstacles in the environment of vehicle 200 .
  • control system 230 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 240 may be configured to allow vehicle 200 to interact with external sensors, other vehicles, and/or a user.
  • peripherals 240 may include, for example, a wireless communication system 244 , a touch screen 243 , a microphone 242 and/or a speaker 241 .
  • Peripheral device 240 may additionally or alternatively include other components than those shown in FIG. 2b. This application does not specifically limit it.
  • peripheral device 240 provides a means for a user of vehicle 200 to interact with user interface 270 .
  • touch screen 243 may provide information to a user of vehicle 200 .
  • the user interface 270 can also operate the touch screen 243 to receive user's input.
  • peripheral device 240 may provide a means for vehicle 200 to communicate with other devices located within the vehicle.
  • microphone 242 may receive audio (eg, voice commands or other audio input) from a user of vehicle 200 .
  • speaker 241 may output audio to a user of vehicle 200 .
  • Wireless communication system 244 may communicate wirelessly with one or more devices, either directly or via a communication network.
  • the wireless communication system 244 may use 3G cellular communication, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication.
  • the wireless communication system 244 can use WiFi to communicate with a wireless local area network (wireless local area network, WLAN).
  • the wireless communication system 244 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system 244 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communications.
  • Power supply 250 may be configured to provide power to some or all components of vehicle 200 .
  • power source 250 may comprise, for example, a rechargeable lithium-ion or lead-acid battery.
  • one or more battery packs may be configured to provide electrical power.
  • Other power supply materials and configurations are also possible.
  • power source 250 and energy source 213 may be implemented together, as in some all-electric vehicles.
  • Components of vehicle 200 may be configured to function in an interconnected manner with other components within and/or external to their respective systems. To this end, the components and systems of vehicle 200 may be communicatively linked together via a system bus, network, and/or other connection mechanisms.
  • Computer system 260 may include at least one processor 261 executing instructions 2631 stored in a computer-readable medium such as memory 263 .
  • Computer system 260 may also be a plurality of computing devices that control individual components or subsystems of vehicle 200 in a distributed manner.
  • the processor 261 can be any conventional processor, such as a central processing unit (CPU). Alternatively, it can also be another general-purpose processor, a digital signal processor (DSP), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor. It should be understood that the present application does not limit the number of sensors and processors included in the above vehicle system.
  • although FIG. 2b functionally illustrates the processor, memory, and other elements in the computer system 260, those of ordinary skill in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be within the same physical enclosure.
  • memory may be a hard drive or other storage medium located in a different housing than computer system 260 .
  • references to a processor or computer are to be understood to include references to collections of processors or computers or memories that may or may not operate in parallel.
  • some components, such as the steering and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located remotely from the vehicle and be in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • memory 263 may contain instructions 2631 (eg, program logic) executable by processor 261 to perform various functions of vehicle 200 , including those described above.
  • memory 263 may also contain additional instructions, including instructions for sending data to, receiving data from, interacting with, and/or controlling one or more of the propulsion system 210, sensor system 220, control system 230, and peripherals 240.
  • memory 263 may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by vehicle 200 and computer system 260 during operation of vehicle 200 in autonomous, semi-autonomous, and/or manual modes.
  • a user interface 270 for providing information to or receiving information from a user of the vehicle 200 .
  • user interface 270 may include one or more input/output devices within set of peripheral devices 240 , such as wireless communication system 244 , touch screen 243 , microphone 242 and speaker 241 .
  • Computer system 260 may control functions of vehicle 200 based on input received from various subsystems (eg, propulsion system 210 , sensor system 220 , and control system 230 ), as well as from user interface 270 .
  • computer system 260 may utilize input from control system 230 in order to control steering unit 236 to avoid obstacles detected by sensor system 220 and obstacle avoidance system 237 .
  • computer system 260 is operable to provide control over many aspects of vehicle 200 and its subsystems.
  • one or more of these components described above may be installed separately from or associated with the vehicle 200 .
  • the memory 263 may exist partially or completely separately from the vehicle 200 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 2b should not be construed as a limitation to the embodiment of the present application.
  • the aforementioned vehicles include but are not limited to unmanned vehicles, smart vehicles (such as automated guided vehicles (AGV)), electric vehicles, digital vehicles, and intelligent manufacturing vehicles.
  • the parking space detection method provided in the present application can be applied to the fields of advanced driving assistant system (advanced driving assistant system, ADAS), automatic driving system or intelligent driving system, and is especially suitable for functions related to automatic parking. It can also be applied to the use of more advanced functions using parking space information as constraints, such as 3D modeling based on parking space information.
  • the present application proposes a parking space detection method.
  • the parking space detection method can improve the accuracy of the parking space information.
  • FIG. 3 is a schematic flow chart of a parking space detection method provided in the present application.
  • this method can be applied to the vehicle management server 102 in the above-mentioned FIG. 2a, or to the vehicle in the above-mentioned FIG. 2b.
  • the vehicle management server and the device in the vehicle that executes the method can be collectively referred to as a parking space detection device.
  • that is, the parking space detection device may be the vehicle management server 102 in FIG. 2a above, or may be the vehicle in FIG. 2b above.
  • the method includes the following steps:
  • step 301 the parking space detection device acquires the image information of the parking space.
  • the parking space detection device may acquire a panoramic image of the parking space.
  • a panoramic image of the parking space please refer to the introduction of FIG. 4a below.
  • feature information of the panoramic image of the parking space may be extracted based on the panoramic image of the parking space, that is, image information of the parking space may be obtained.
  • the panoramic image of the parking space may also be referred to as a surround-view image of the parking space or a bird's-eye view image of the parking space.
  • the panoramic image of the parking space can be used as the input of the feature extraction model.
  • the panoramic image of the parking space is input to the feature extraction model, and the feature extraction model can output the feature information of the panoramic image of the parking space (see FIG. 9 below); the feature information refers to points and/or lines and/or circles in the image, etc.
  • the feature information of the panoramic image refers to information that can characterize the panoramic image itself, such as points and/or lines and/or circles.
  • the feature extraction model may be obtained through supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • step 302 the parking space detection device determines first parking space point information according to the image information of the parking space.
  • the first parking space point information includes the coordinates of the first parking space point. Further, optionally, the first parking space point information also includes a first degree of confidence corresponding to the coordinates of the first parking space point.
  • the coordinates of the first parking space point refer to the pixel coordinates of the first parking space point in the image.
  • the pixel coordinates of the first parking space point on the image refer to the horizontal and vertical coordinates of the pixel of the first parking space point on the image.
  • the parking space detection device may determine N pieces of first parking space point information according to the image information of the parking space; each piece of first parking space point information includes the coordinates of a first parking space point, and further includes the first confidence level corresponding to the coordinates of that first parking space point; N is a positive integer.
  • the parking space detection device may determine the first parking space point information of the parking space according to the feature information of the panoramic image and the parking space point recognition algorithm.
  • the parking spot recognition algorithm may be an algorithm in a parking spot analysis model (or called a parking spot recognition model or a parking spot resolver).
  • the label of the parking spot analysis model is the coordinates of the parking spot (that is, the coordinates of the parking spot in the image).
  • the output of the parking spot analysis model is the coordinates of the parking spot.
  • the analysis model of the parking space point may be obtained by supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • the feature information of the panoramic image of the parking space can be input into the parking space point analysis model, and the parking space point analysis model performs a multi-layer convolution operation and can output the coordinates of N first parking space points; further, it can also output the first confidence level corresponding to the coordinates of each of the N first parking space points.
  • the parking spot recognition algorithm may also be a parking spot detection algorithm based on a gray image, a binary image based parking spot detection algorithm, or a contour curve based parking spot detection algorithm.
  • the process of the parking space point detection algorithm based on the contour curve may be, for example, to fit a straight line on the image through image recognition, and then identify the intersection point of the two straight lines as the parking space point.
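As a concrete illustration of the contour-curve approach above, the sketch below (an illustrative assumption of this rewrite, not code from the application) computes the intersection of two fitted boundary lines; the intersection is taken as a candidate parking space point.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4.

    Each point is an (x, y) tuple in pixel coordinates. Returns None if the
    two fitted lines are (nearly) parallel, i.e. no parking space point.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines never intersect
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two fitted parking-line segments crossing at (100, 50):
print(line_intersection((0, 50), (200, 50), (100, 0), (100, 120)))  # (100.0, 50.0)
```

In practice the two lines would come from a line-fitting step (e.g. over detected contour pixels); only the intersection logic is shown here.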
  • Step 303 the parking space detection device determines first parking space frame information according to the image information of the parking space.
  • the first parking space frame information includes but is not limited to the center coordinates of the first parking space frame, the confidence corresponding to the center coordinates of the first parking space frame, the size (such as length and width) of the first parking space frame, the confidence corresponding to the size of the first parking space frame, the inclination angle of the first parking space frame, and the confidence corresponding to the inclination angle of the first parking space frame, etc.
  • the parking space detection device can determine the first parking space frame information according to the feature information of the panoramic image and the parking space frame recognition algorithm.
  • the label of the parking space frame analysis model can be the center coordinates of the parking space frame, the length and width of the parking space frame, and the inclination angle of the parking space frame.
  • the output of the parking space frame analysis model is the center coordinates of the parking space frame, the length and width of the parking space frame, and the inclination angle of the parking space frame.
  • the parsing model of the parking space frame may be obtained by supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • the feature information of the panoramic image of the parking space can be input into the parking space frame analysis model, and the parking space frame analysis model performs multi-layer convolution operation to output the first parking space frame information.
  • the parking space frame analysis model can output the center coordinates of the first parking space frame, the confidence corresponding to the center coordinates, the length and width of the first parking space frame, the confidence corresponding to the length and width, the inclination angle of the first parking space frame, and the confidence corresponding to the inclination angle. It should be noted that usually the confidence levels corresponding to the length and the width of the first parking space frame are the same.
  • alternatively, the parking space detection device determines the first parking space frame information of the parking space according to the feature information of the panoramic image, the parking space frame recognition algorithm, and the first parking space point information.
  • the parking space detection device may determine the transition information of the parking space frame according to the image information of the parking space, and determine the first parking space frame information according to the transition information of the parking space frame and the first parking space point information.
  • the parking space frame recognition algorithm includes algorithms in the parking space frame transition model and the parking space frame analysis model
  • the feature information of the panoramic image of the parking space can be input into the parking space frame transition model, and the parking space frame transition model performs a multi-layer convolution operation to output the transition information of the parking space frame, where the transition information may be, for example, line information other than the parking space points, or information within the parking space frame.
  • the transition information of the parking space frame and the first parking space point information are input into the analysis model of the parking space frame, and the analysis model of the parking space frame performs multi-layer convolution operation to output the information of the first parking space frame.
  • the N pieces of first parking space point information obtained above are also input into the parking space frame analysis model; in this way, more accurate first parking space frame information can be output.
  • the obtained first parking space frame information may be incomplete due to the limitation of the perception range of the device for collecting image information, resulting in low accuracy of the obtained first parking space frame information.
  • the first parking space frame information is determined in combination with the obtained first parking space point information, which can improve the accuracy of the first parking space frame information.
  • abnormal parking spaces may also be excluded according to the length and width of the first parking space frame.
  • the frame information of the first parking space whose length and width do not fall within a reasonable value range can be eliminated. In this way, the accuracy of the first parking space frame information can be further improved.
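A minimal sketch of this plausibility filter follows; the value ranges are illustrative assumptions of this rewrite, not figures from the application.

```python
def filter_parking_frames(frames, length_range=(4.0, 7.0), width_range=(2.0, 3.5)):
    """Keep only frames whose length and width fall in plausible ranges.

    Each frame is a dict with 'length' and 'width' keys (assumed in metres);
    frames outside the ranges are treated as abnormal and eliminated.
    """
    lo_l, hi_l = length_range
    lo_w, hi_w = width_range
    return [f for f in frames
            if lo_l <= f["length"] <= hi_l and lo_w <= f["width"] <= hi_w]

frames = [{"length": 5.2, "width": 2.4},   # plausible parking space
          {"length": 12.0, "width": 2.4},  # too long: rejected
          {"length": 5.0, "width": 0.8}]   # too narrow: rejected
print(filter_parking_frames(frames))  # keeps only the first frame
```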
  • step 302 may be performed first and then step 303 may be performed, or step 303 may be performed first and then step 302 may be performed, or step 303 and step 302 may be performed simultaneously, which is not limited in this application.
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information of the parking space includes but not limited to the coordinates of the parking space point, the center coordinates of the parking space frame, the size (such as length and width) of the parking space frame, the inclination angle of the parking space frame, and the like.
  • the parking space information may be parking space frame information.
  • the parking space detection device may determine the second parking space point information according to the first parking space frame information.
  • the second parking space point information includes coordinates of M second parking space points, and further includes a second degree of confidence corresponding to the coordinates of each second parking space point.
  • the coordinates of the second parking space point refer to the pixel coordinates of the second parking space point in the image.
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the second parking space point information.
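One plausible way to combine the first and second parking space point information (an assumption of this rewrite: confidence-weighted averaging of a matched pair of points; the application does not specify the fusion rule) can be sketched as:

```python
def fuse_points(p1, c1, p2, c2):
    """Confidence-weighted fusion of a matched pair of parking space points.

    p1 and p2 are (x, y) pixel coordinates of the same physical corner from
    the two branches; c1 and c2 are their respective confidence levels.
    """
    total = c1 + c2
    x = (p1[0] * c1 + p2[0] * c2) / total
    y = (p1[1] * c1 + p2[1] * c2) / total
    return x, y

# A high-confidence first point dominates a low-confidence second point:
print(fuse_points((100.0, 50.0), 0.9, (104.0, 54.0), 0.1))  # approximately (100.4, 50.4)
```

Matching which first point corresponds to which second point (e.g. by nearest neighbour within a distance threshold) would precede this step; only the fusion itself is shown.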
  • the parking space information is the final output parking space information, which can be displayed on the touch screen of the vehicle.
  • the coordinates of the parking space points in the detected parking space information are actually the pixel coordinates in the image information, which need to be combined with the internal parameter matrix and external parameter matrix detected during sensor calibration to convert the pixel coordinates into actual coordinates.
  • the vehicle can then calibrate the location of the parking space according to the actual coordinates.
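For a stitched top-view image, the pixel-to-actual conversion reduces to a simple scaled offset once calibration is done. The sketch below is a simplified illustration under assumptions not stated in the application (vehicle at the image centre, image top pointing forward, a known metres-per-pixel scale); the full method would use the intrinsic and extrinsic matrices directly.

```python
def pixel_to_vehicle(u, v, img_w, img_h, m_per_px):
    """Map pixel (u, v) in the stitched top-view image to vehicle-frame
    coordinates (x forward, y left), assuming the vehicle sits at the image
    centre and the top of the image points forward (illustrative assumption)."""
    cx, cy = img_w / 2.0, img_h / 2.0
    x = (cy - v) * m_per_px   # rows above centre are in front of the vehicle
    y = (cx - u) * m_per_px   # columns left of centre are to the vehicle's left
    return x, y

# A 400x400 top view at 2 cm per pixel: pixel (200, 100) lies 2 m ahead.
print(pixel_to_vehicle(200, 100, 400, 400, 0.02))
```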
  • the parking space information is determined from both the first parking space point information and the first parking space frame information, so that two different kinds of feature information complement each other, thereby obtaining more accurate parking space information from the local level (such as the coordinates of the parking space points) to the global level (such as the inclination angle of the parking space dividing lines).
  • the acquired image information of the parking space enters two branches: one branch is used to determine the first parking space point information, and the other is used to determine the first parking space frame information.
  • in this way, a more generalized network structure (or network model) can be used, which reduces the demand for the amount of original data at the same detection accuracy.
  • FIG. 4 a it is a schematic flowchart of a method for acquiring a panoramic image of a parking space provided by the present application.
  • the method includes the following steps:
  • Step 401 M sensors collect original images respectively, where M is a positive integer.
  • M can be equal to 4.
  • the sensors can be arranged in the front, rear, left, and right directions of the vehicle, and the four sensors can respectively collect original images to obtain 4 original images.
  • M may also be greater than 4, or less than 4.
  • the field of view of each sensor should preferably be greater than 90 degrees, so as to achieve all-round coverage around the vehicle.
  • the original image collected by each sensor is a partial-angle image (or called a surround-view image or a partial bird's-eye view image) of the panoramic image, and images from different angles can identify environmental information around the vehicle from different angles.
  • when M is equal to 1, the original image collected by the sensor is itself the panoramic image.
  • step 402 the parking space detection device acquires original images collected by M sensors.
  • the original image collected by the sensor can be sent to the cloud server through the communication module in the vehicle.
  • the original image collected by the sensor can be transmitted to the processor.
  • Step 403 the parking space detection device processes the acquired M original images to obtain M surround-view images.
  • the parking space detection device may firstly perform de-distortion processing on each original image.
  • the original image may be corrected based on built-in correction parameters of the sensor.
  • step 404 the parking space detection device performs homography transformation on the M surround-view images respectively to obtain M top-view images.
  • the homography calibration may be based on a checkerboard pattern.
  • the parking space detection device can obtain the actual pixel position information of the checkerboard points, and obtain the pixel position information of the checkerboard points in the distorted original image through a preset checkerboard point setting method.
  • since there is a preset proportional relationship between the actual pixel position information of the checkerboard points and the pixel position information of the checkerboard points in the surround-view image, the pixel position information of the checkerboard points in the surround-view image can be obtained according to the preset conversion ratio information and the actual pixel position information of the checkerboard points.
  • from these point correspondences a homography matrix (Homography matrix) is obtained, and each de-distorted original image is calculated and processed according to the homography matrix.
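To make the homography step concrete, here is a minimal sketch (illustrative only, not code from the application) of applying a 3x3 homography matrix H to a single pixel coordinate; warping a whole image repeats this per pixel.

```python
def apply_homography(H, pt):
    """Apply a 3x3 homography H (list of 3 rows of 3 floats) to point (x, y).

    Works in homogeneous coordinates: (x, y, 1) is multiplied by H and the
    result is divided by its third component.
    """
    x, y = pt
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xs / w, ys / w)

# A pure scaling homography: every coordinate is doubled.
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, (3.0, 4.0)))  # (6.0, 8.0)
```

In practice H would be estimated from the checkerboard point correspondences (e.g. by direct linear transform); only its application is shown here.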
  • the process of obtaining surround-view images from multiple angles can be understood as performing homography transformation processing.
  • the look-around image at each angle corresponds to the look-around image in a direction captured by the sensor.
  • the parking space detection device can stitch M top-view images to obtain a panoramic image of the parking space.
  • the parking space detection device can extract images of overlapping regions and non-overlapping regions in M top-view images, perform feature matching on images of overlapping regions, and fuse overlapping regions and non-overlapping regions according to the matching results, thereby Get a panoramic image of the parking space.
  • 4 fisheye cameras are set up around the vehicle.
  • 4 top-view images can be obtained.
  • the 4 top-view images have 4 overlapping areas A, B, C and D, as shown in Figure 4b.
  • the overlapping area is formed by matching and fusing two surround-view images according to image features, that is, feature point extraction and matching are performed on the overlapping area of the two images.
  • the image in overlapping area A may be obtained by feature matching and fusion of the front image and the left image; overlapping area B, by feature matching and fusion of the front image and the right image; overlapping area C, by feature matching and fusion of the left image and the rear image; and overlapping area D, by feature matching and fusion of the right image and the rear image.
  • the original images of the corresponding surround-view images are directly retained in the non-overlapping areas, and the results are finally fused into a panoramic image of the parking space.
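After feature matching aligns the two images, the pixel values in an overlapping area must still be merged. A common choice, shown below as an illustrative sketch (the application describes "fusion" without giving a formula), is a linear cross-fade across the overlap:

```python
def blend_overlap(left_strip, right_strip):
    """Linearly cross-fade two equally sized overlapping strips.

    Each strip is a list of rows of grayscale pixel values; the output fades
    from the left image's pixels at the left edge to the right image's pixels
    at the right edge, hiding the seam between the two views.
    """
    h, w = len(left_strip), len(left_strip[0])
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            alpha = c / (w - 1) if w > 1 else 0.5  # 0 at left edge, 1 at right
            row.append((1 - alpha) * left_strip[r][c] + alpha * right_strip[r][c])
        out.append(row)
    return out

# One-row overlap: fades from the left image's value (0) to the right's (100).
print(blend_overlap([[0, 0, 0]], [[100, 100, 100]]))  # [[0.0, 50.0, 100.0]]
```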
  • more accurate parking space information can be obtained by determining the parking space information based on the panoramic image of the parking space.
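The fusion step above can be sketched as follows. This is a minimal illustration, assuming the M top-view images have already been warped into a common coordinate frame, and using plain averaging in the overlap regions as a stand-in for feature-based matching and fusion; the function and parameter names are hypothetical.

```python
import numpy as np

def fuse_top_views(views, masks):
    """Fuse M warped top-view images into one panoramic image.

    views: list of HxW float arrays (warped top-view images).
    masks: list of HxW bool arrays marking where each view has valid pixels.
    Non-overlapping pixels keep the original view; overlapping pixels are
    averaged here as a stand-in for feature-based matching and fusion.
    """
    acc = np.zeros_like(views[0], dtype=float)
    cnt = np.zeros_like(views[0], dtype=float)
    for img, m in zip(views, masks):
        acc[m] += img[m]     # accumulate each view where it is valid
        cnt[m] += 1.0        # count how many views cover each pixel
    out = np.zeros_like(acc)
    valid = cnt > 0
    out[valid] = acc[valid] / cnt[valid]  # average where views overlap
    return out
```

With four views (front, rear, left, right), a pixel covered by only one view keeps that view's value, while a pixel in an overlap area such as A gets the average of the two contributing views.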
  • a possible method for determining the information of the second parking space point is exemplarily shown as follows.
  • FIG. 5 is a schematic flowchart of a method for acquiring second parking space point information provided by the present application. The method includes the following steps:
  • Step 501 the parking space detection device acquires the first parking space frame information of the parking space.
  • for step 501, reference may be made to the introduction of the aforementioned step 303, which will not be repeated here.
  • Step 502 the parking space detection device may convert the first parking space frame information into third parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame, the length and width of the first parking space frame, the inclination angle of the first parking space frame, the confidence corresponding to the center coordinates of the first parking space frame, the confidence corresponding to the length and width of the first parking space frame, and the confidence corresponding to the inclination angle of the first parking space frame.
  • the parking space detection device can determine the third parking space point information based on the center coordinates of the first parking space frame, the inclination angle of the first parking space frame, and the length and width of the first parking space frame.
  • the third parking space point information includes the coordinates of the third parking space point.
  • the parking space detection device can also determine the third confidence corresponding to the third parking space point based on the confidence corresponding to the center coordinates of the first parking space frame, the confidence corresponding to the inclination angle of the first parking space frame, and the confidence corresponding to the length and width of the first parking space frame.
  • the parking space detection device can convert each piece of first parking space frame information into a set of third parking space point information, in which the third confidences corresponding to the coordinates of all third parking space points are the same; the third confidence is the average of the confidence corresponding to the center coordinates of the first parking space frame to which the third parking space point belongs, the confidence corresponding to the length and width of that frame, and the confidence corresponding to the inclination angle of that frame. It can also be understood that the confidences of the coordinates of all third parking space points in a group of third parking space point information converted from one first parking space frame are the same.
  • the first parking space frame information A includes the confidence degree A1 corresponding to the center coordinates of the first parking space frame, the confidence degree A2 corresponding to the length and width of the first parking space frame, and the confidence degree A3 corresponding to the inclination angle of the first parking space frame.
  • the confidence of each third parking space point obtained by converting the first parking space frame information A into third parking space point information is the same, that is, equal to (A1+A2+A3)/3.
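The step 502 conversion can be sketched as follows, assuming the first parking space frame is an oriented rectangle; the corner ordering, the angle convention, and all names are illustrative assumptions, not details taken from the document.

```python
import math

def frame_to_points(cx, cy, length, width, angle, conf_center, conf_size, conf_angle):
    """Convert first parking space frame information into third parking space
    point information: the four corner coordinates plus a shared third
    confidence, taken as the mean of the three frame confidences."""
    c, s = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in ((+0.5, +0.5), (+0.5, -0.5), (-0.5, -0.5), (-0.5, +0.5)):
        lx, ly = dx * length, dy * width        # corner in the frame's local axes
        corners.append((cx + lx * c - ly * s,   # rotate by the inclination angle
                        cy + lx * s + ly * c))  # and translate to the center
    conf = (conf_center + conf_size + conf_angle) / 3  # (A1 + A2 + A3) / 3
    return corners, conf
```

For an axis-aligned frame centered at the origin with length 4 and width 2, the corners come out as (±2, ±1), and every corner shares the same averaged confidence.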
  • Step 503 the parking space detection device traverses each piece of third parking space point information; if it is determined that there is one first parking space point within the preset range, the following step 504 is performed; if it is determined that there are two or more first parking space points within the preset range, the following step 505 is performed.
  • the preset range may be, for example, about 15 pixels (px).
  • assuming 1 px corresponds to about 2 centimeters (cm), 15 px corresponds to about 30 cm of the physical parking space.
  • Step 504 the parking space detection device replaces the third parking space point information with the first parking space point information.
  • replacing the third parking space point information with the first parking space point information can be understood as replacing the coordinates of the third parking space point with the coordinates of the first parking space point, and replacing the third confidence corresponding to the third parking space point with the first confidence corresponding to the coordinates of the first parking space point.
  • Step 505 the parking space detection device replaces the third parking space point information with the information of the first parking space point closest to the third parking space point.
  • the third parking space point information can be replaced by the first parking space point information closest to the third parking space point.
  • the coordinates of the first parking space point closest to the third parking space point are used to replace the coordinates of the third parking space point, and the first confidence corresponding to the coordinates of that closest first parking space point is used to replace the third confidence corresponding to the third parking space point.
  • for example, the third parking space point information includes {coordinates of the third parking space point 31, coordinates of the third parking space point 32, coordinates of the third parking space point 33, coordinates of the third parking space point 34}, where the third parking space point 31 corresponds to the third confidence 31, the third parking space point 32 corresponds to the third confidence 32, the third parking space point 33 corresponds to the third confidence 33, and the third parking space point 34 corresponds to the third confidence 34; if the third parking space point 31 information is replaced by the first parking space point 11 information and the rest are not replaced, the obtained second parking space point information includes {coordinates of the first parking space point 11 and the corresponding first confidence 11, coordinates of the third parking space point 32 and the corresponding third confidence 32, coordinates of the third parking space point 33 and the corresponding third confidence 33, coordinates of the third parking space point 34 and the corresponding third confidence 34}.
  • the parking space detection device can record and store which third parking space point information was replaced by first parking space point information in the above steps 503 to 505, so that this record can be used directly when later determining whether the coordinates of the first parking space points coincide with the coordinates of the second parking space points.
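Steps 503 to 505 can be sketched as follows, assuming each point is a simple (x, y, confidence) tuple; the function name, the point representation, and the returned replacement record are hypothetical.

```python
def replace_points(third_points, first_points, radius=15.0):
    """Steps 503-505 sketch: for each third parking space point, look for
    first parking space points within `radius` (e.g. 15 px); if exactly one
    is found, replace with it; if several are found, replace with the
    nearest one. Points are (x, y, confidence) tuples. Returns the second
    parking space point list plus the indices of the replaced points."""
    result, replaced = [], []
    for i, (tx, ty, tconf) in enumerate(third_points):
        near = [(fx, fy, fconf) for fx, fy, fconf in first_points
                if (fx - tx) ** 2 + (fy - ty) ** 2 <= radius ** 2]
        if near:
            # the nearest first parking space point wins when several are in range
            fx, fy, fconf = min(near, key=lambda p: (p[0] - tx) ** 2 + (p[1] - ty) ** 2)
            result.append((fx, fy, fconf))
            replaced.append(i)
        else:
            result.append((tx, ty, tconf))  # no match: keep the third point
    return result, replaced
```

The returned `replaced` list plays the role of the stored record mentioned above, which later steps can consult to determine how many points coincide.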
  • the method for obtaining the second parking space point information shown in FIG. 5 above is only a possible example, and the first parking space frame information can also be directly converted into second parking space point information. For example, the third parking space point information in the above step 502 can be replaced with the second parking space point information, which is not limited in this application.
  • a possible implementation is to determine the parking space information of the parking space according to the first parking space point information and the second parking space point information. Based on this, it is necessary to first determine which first parking space point information and which second parking space point information belong to the same parking space frame. Or it can also be understood that it is necessary to first determine which first parking space point information and which second parking space point information correspond to the same parking space frame.
  • the following exemplarily shows a flow of a method for determining that the first parking space point and the second parking space point belong to the same parking space frame, see FIG. 6 .
  • the method includes the following steps:
  • the parking space detection device may determine second parking space frame information according to the first parking space point information.
  • the process of obtaining the information of the first parking spot can refer to the introduction of the aforementioned step 302 .
  • the parking space detection device may determine at least one second parking space frame information according to the first parking space point information.
  • distance constraints and parking space corner types (such as L-type, T-type, or I-type) can be used to pair the N first parking space points pairwise.
  • the parking space detection device can calculate the distance between two parking space points in the x and y dimensions; this distance can be considered the length (Length) of a side of the parking space. For example, when 600cm ≤ Length ≤ 800cm, the side formed by these two parking space points can be considered the length of the parking space (that is, the dividing line);
  • when the Length falls within the width range of a parking space, the side formed by these two parking space points is considered the width of the parking space (that is, the entrance line or the non-entrance line). Further, after the length and width of the second parking space frame are determined, the center coordinates of the second parking space frame can also be determined. If the calculated Length does not meet either of the above two geometric features, the pair is considered an unconventional parking space or a wrong detection, and is discarded directly. Since the first confidences of the first parking space points are independent of each other, the accuracy of the second parking space frame information determined based on first parking space point information with two independent confidences is relatively high.
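The pairwise distance constraint can be sketched as follows. The 600-800 cm length range comes from the document; the 200-300 cm width range and the omission of the corner-type (L/T/I) constraint are simplifying assumptions for illustration.

```python
import math

def pair_points(points, len_range=(600, 800), wid_range=(200, 300)):
    """Sketch of building second parking space frame sides by pairwise
    distance constraints. `points` is a list of (x, y) coordinates in cm.
    Returns (i, j, kind) triples for pairs satisfying a geometric
    constraint; other pairs are discarded as unconventional parking
    spaces or wrong detections."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])  # Euclidean distance in cm
            if len_range[0] <= d <= len_range[1]:
                pairs.append((i, j, "length"))   # dividing line
            elif wid_range[0] <= d <= wid_range[1]:
                pairs.append((i, j, "width"))    # entrance / non-entrance line
    return pairs
```

A full implementation would additionally use the corner types to rule out diagonals that happen to fall inside the length range.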
  • the parking space detection device can match the first parking space point information and the second parking space point information belonging to the same parking space frame according to the first parking space frame information and the second parking space frame information.
  • the second parking space frame information includes the center coordinates of the second parking space frame
  • the first parking space frame information includes the center coordinates of the first parking space frame
  • the parking space detection device may perform matching according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame.
  • the center coordinates (x1i, y1i) of the i-th second parking space frame are matched one by one against the center coordinates (x2j, y2j) of the first parking space frames, and the first parking space frame whose center distance from the i-th second parking space frame is smaller than a preset value is determined to belong to the same parking space frame as the i-th second parking space frame.
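The center-distance matching can be sketched as follows; the preset distance value of 50 used here is an assumed illustration, not a figure from the document, and the names are hypothetical.

```python
def match_frames(first_centers, second_centers, max_dist=50.0):
    """Sketch of matching first and second parking space frames by center
    distance: the first frame whose center lies within `max_dist` (an
    assumed preset value) of the i-th second frame's center, and is the
    nearest such frame, is treated as the same parking space frame.
    Returns a mapping {second_frame_index: first_frame_index}."""
    matches = {}
    for i, (x1, y1) in enumerate(second_centers):
        best, best_d2 = None, max_dist ** 2
        for j, (x2, y2) in enumerate(first_centers):
            d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
            if d2 <= best_d2:          # keep the nearest candidate so far
                best, best_d2 = j, d2
        if best is not None:
            matches[i] = best
    return matches
```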
  • the method for determining the first parking space point information and the second parking space point information belonging to the same parking space frame shown in FIG. 6 is only a possible example.
  • for example, the first parking space frame information can be converted into second parking space point information, and the second parking space point information can be matched with the first parking space point information; for example, after the coordinates of the first parking space points are matched with the coordinates of the second parking space points, it can be determined which second parking space point information and which first parking space point information belong to the same parking space frame, which is not limited in this application.
  • the parking space information of a single parking space can be obtained from the first parking space point information and the second parking space point information corresponding to each parking space frame.
  • the parking space detection method provided in the present application may further obtain parking space information corresponding to multiple parking spaces, for example, parking space information of a parking lot.
  • the following exemplarily shows three possible methods for determining parking space information based on n pieces of first parking space point information and m pieces of second parking space point information.
  • the following uses n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame as an example.
  • FIG. 7 is a schematic flowchart of a method for determining parking space information of a parking space based on n first parking space point information and m second parking space point information provided by the present application. The method includes the following steps:
  • Step 701 the parking space detection device acquires n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame.
  • the first parking space point information includes the coordinates of the first parking space point
  • the second parking space point information includes the coordinates of the second parking space point
  • Step 702 the parking space detection device determines that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; if k ≥ 3, the following step 703 is executed; if 0 ≤ k ≤ 2, the following step 704 is executed.
  • the parking space detection device can also match the coordinates of the first parking space points and the coordinates of the second parking space points one by one, so as to determine the number k of coinciding parking space points between the coordinates of the n first parking space points and the coordinates of the m second parking space points.
  • coincidence does not mean absolute complete coincidence, and a certain error range may be allowed.
  • the error can be 20px.
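Step 702's coincidence count k can be sketched as follows, using the 20 px error tolerance mentioned above; the greedy one-to-one matching strategy is an assumption for illustration.

```python
def count_coincident(first_pts, second_pts, tol=20.0):
    """Step 702 sketch: count how many of the n first parking space point
    coordinates coincide with the m second parking space point coordinates,
    where 'coincide' allows an error of up to `tol` pixels (20 px in the
    document's example). Each second point is matched at most once."""
    k = 0
    used = set()
    for fx, fy in first_pts:
        for j, (sx, sy) in enumerate(second_pts):
            if j not in used and (fx - sx) ** 2 + (fy - sy) ** 2 <= tol ** 2:
                used.add(j)  # consume this second point
                k += 1
                break
    return k
```

The resulting k then drives the branch: k ≥ 3 goes to step 703, otherwise step 704.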
  • Step 703 the parking space detection device determines the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information.
  • the parking space detection device can determine the parking space information according to the m second parking space point information; or, can determine the parking space information according to the n first parking space point information; or can determine the parking space information according to the m second parking space point information and the n first The parking space point information determines the parking space information.
  • the parking space detection device may determine the parking space information according to the n first parking space point information.
  • the parking space detection device can eliminate the m pieces of second parking space point information, and determine the parking space information according to the n pieces of first parking space point information.
  • the two coinciding parking space points refer to the two parking space points forming the entrance line.
  • a new parking space may be reselected.
  • when the second parking space point information in FIG. 7 is obtained based on the method shown in FIG. 5, the stored third parking space point information replaced by the first parking space point information can be used to determine k.
  • the coordinates of the second parking space point can be used as the coordinates of the parking space point of the parking space; further, the parking space frame, center coordinates, inclination angle, etc. of the parking space can be determined based on the second parking space point information.
  • if the parking space detection device finds, from the stored third parking space point information replaced by first parking space point information, that no more than two (that is, k ≤ 2) pieces of third parking space point information were replaced by first parking space point information, it can determine the parking space information based on the above step 704.
  • FIG. 8 a is a schematic flowchart of a method for determining parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by the present application.
  • the method includes the following steps:
  • step 801 the parking space detection device acquires n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame.
  • the first parking space point information includes the coordinates of the first parking space point and the first confidence degree corresponding to the coordinates of the first parking space point;
  • the second parking space point information includes the coordinates of the second parking space point and the corresponding second degree of confidence.
  • Step 802 the parking space detection device determines whether the coordinates of the n first parking space points and the coordinates of the m second parking space points coincide with the coordinates of at least three parking space points; if yes, perform step 803; if not, perform step 804 .
  • the parking space detection device determines that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; if k ≥ 3, the following step 803 is performed; if k < 3, the following step 804 is performed.
  • the coordinates of each second parking space point among the coordinates of the m second parking space points are matched one by one with the coordinates of the n first parking space points; or the coordinates of each first parking space point among the coordinates of the n first parking space points are matched one by one with the coordinates of the m second parking space points.
  • coincidence does not mean absolute complete coincidence, and a certain error range may be allowed. For example, the error can be 20px.
  • Step 803 for the coordinates of each first parking space point among the coordinates of the n first parking space points and the corresponding coordinates of the second parking space point, the parking space detection device performs a weighted average of the coordinates of the first parking space point and the corresponding coordinates of the second parking space point to obtain weighted-average parking space point coordinates, and determines the parking space information according to the weighted-average parking space point coordinates.
  • the weighted-average coordinates of a parking space point = (coordinates of the first parking space point × the first confidence corresponding to the coordinates of the first parking space point + coordinates of the corresponding second parking space point × the second confidence corresponding to the coordinates of the second parking space point)/2.
  • equivalently, the weighted-average coordinates of a parking space point = (coordinates of the second parking space point × the second confidence corresponding to the coordinates of the second parking space point + coordinates of the corresponding first parking space point × the first confidence corresponding to the coordinates of the first parking space point)/2.
  • the first parking space point information includes the coordinates (x11, y11) of the first parking space point and the first confidence level corresponding to the coordinates (x11, y11) of the first parking space point is A11
  • the first confidence degree corresponding to the coordinates (x12, y12) of the first parking space point and the coordinates (x12, y12) of the first parking space point is A12
  • the first confidence corresponding to the coordinates (x13, y13) of the first parking space point is A13
  • the first confidence level corresponding to the coordinates (x14, y14) of the first parking space point and the coordinates (x14, y14) of the first parking space point is A14.
  • the second parking space point information includes the coordinates (x21, y21) of the second parking space point with corresponding second confidence A21, the coordinates (x22, y22) with corresponding second confidence A22, the coordinates (x23, y23) with corresponding second confidence A23, and the coordinates (x24, y24) with corresponding second confidence A24; according to the first confidences and the second confidences, performing a weighted average of the coordinates of the 4 first parking space points and the coordinates of the 4 second parking space points yields the weighted-average parking space point coordinates: ((x1i×A1i + x2i×A2i)/2, (y1i×A1i + y2i×A2i)/2) for i = 1, 2, 3, 4.
  • the parking space information can be determined according to the weighted averaged coordinates of the parking space points.
  • the parking space information includes the coordinates of the parking space points, and the coordinates of the parking space points after the weighted average can be the coordinates of the parking space points in the determined parking space information.
  • the parking space information also includes the parking space frame information, that is, the length and width of the parking space frame, the central coordinates of the parking space frame, and the inclination angle of the parking space frame can be determined according to the coordinates of the parking space points after weighted average.
  • first parking space point information and the second parking space point information used to represent the same physical parking space point of the parking space correspond to each other.
  • for example, the first parking space point information and the second parking space point information used to represent the physical parking space point in the upper left corner correspond to each other.
  • for example, the coordinates (x11, y11) of the first parking space point and the coordinates (x21, y21) of the second parking space point correspond to the same physical parking space point, the coordinates (x12, y12) of the first parking space point and the coordinates (x22, y22) of the second parking space point correspond to the same physical parking space point, the coordinates (x13, y13) of the first parking space point and the coordinates (x23, y23) of the second parking space point correspond to the same physical parking space point, and the coordinates (x14, y14) of the first parking space point and the coordinates (x24, y24) of the second parking space point correspond to the same physical parking space point.
  • the first parking space point information includes the coordinates (x11, y11) of the first parking space point and the first confidence level corresponding to the coordinates (x11, y11) of the first parking space point is A11
  • the coordinates (x12, y12) of the first parking space point and the first confidence degree corresponding to the coordinates (x12, y12) of the first parking space point are A12
  • the first confidence corresponding to the coordinates (x13, y13) of the first parking space point is A13.
  • the second parking space point information includes the coordinates (x21, y21) of the second parking space point with corresponding second confidence A21, the coordinates (x22, y22) with corresponding second confidence A22, the coordinates (x23, y23) with corresponding second confidence A23, and the coordinates (x24, y24) with corresponding second confidence A24; according to the first confidences and the second confidences, performing a weighted average of the coordinates of the 3 first parking space points and the corresponding coordinates of the 3 second parking space points yields the weighted-average parking space point coordinates: ((x1i×A1i + x2i×A2i)/2, (y1i×A1i + y2i×A2i)/2) for i = 1, 2, 3.
  • the parking space information can be determined according to the weighted averaged coordinates of the parking space points.
  • the parking space information includes the coordinates of the parking space points, and the weighted average coordinates of the three parking space points can be the coordinates of the three parking space points in the determined parking space information.
  • the coordinates of the remaining one parking space point can be calculated based on the weighted average coordinates of the three parking space points; or, the coordinates (x24, y24) of the second parking space point can be directly used as the coordinates of the fourth parking space point in the parking space information .
  • the parking space information also includes the parking space frame information, that is, the length and width of the parking space frame, the central coordinates of the parking space frame, and the inclination angle of the parking space frame can be determined according to the coordinates of the parking space points after weighted average.
  • the coordinates of the weighted averaged parking space point can be further weighted and averaged.
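The step 803 weighted average can be sketched as follows, implementing the document's formula (coordinates × confidence summed over the pair, divided by 2); the pairing of corresponding points is assumed to be given, and the point representation is illustrative.

```python
def weighted_merge(first_pts, second_pts):
    """Step 803 sketch: for each pair of coinciding points, combine the
    first and second parking space point coordinates using the document's
    formula (coord1 x conf1 + coord2 x conf2) / 2.
    Each point is an (x, y, confidence) tuple; the i-th entries of the two
    lists are assumed to represent the same physical parking space point."""
    merged = []
    for (x1, y1, c1), (x2, y2, c2) in zip(first_pts, second_pts):
        merged.append(((x1 * c1 + x2 * c2) / 2,
                       (y1 * c1 + y2 * c2) / 2))
    return merged
```

With both confidences equal to 1, the formula reduces to the plain midpoint of the two coordinates.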
  • Step 804 the parking space detection device determines whether two parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points; if yes, the following step 805 is performed; if not, it indicates that the accuracy of at least one of the first parking space point information or the first parking space frame information is poor, and a new parking space can be searched for.
  • in other words, the parking space detection device determines whether k is equal to 2. If yes, the following step 805 is performed; if not, it means that the accuracy of at least one of the first parking space point information or the first parking space frame information is poor; here, a new parking space can be searched for, or the method in FIG. 8b below can be executed.
  • the two overlapping parking space points refer to the two parking space points forming the entrance line.
  • Step 805 the parking space detection device may determine the parking space information of the parking space according to the n pieces of first parking space point information.
  • the parking space detection device may eliminate the m pieces of second parking space point information, and determine the first parking space point information as the parking space information of the parking space.
  • determining the parking space information by combining the first parking space point information and the second parking space point information can improve the accuracy of the parking space information.
  • the parking space information can be determined based on the first parking space point information with higher accuracy, thereby also improving the accuracy of the parking space information.
  • when the second parking space point information in FIG. 8a is obtained based on the method shown in FIG. 5, the stored third parking space point information replaced by the first parking space point information can be used to determine k.
  • if the parking space detection device finds, from the stored third parking space point information replaced by first parking space point information, that four pieces of third parking space point information were replaced by first parking space point information, it can determine the parking space information directly after step 505 in FIG. 5 according to the second parking space point information obtained in FIG. 5; step 803 and step 705 do not need to be executed.
  • the coordinates of the second parking space points can be used as the coordinates of the parking space points of the parking space; further, the parking space frame, center coordinates, inclination angle, etc. of the parking space can be determined based on the second parking space point information.
  • when the parking space detection device determines that fewer than two parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points (that is, only one parking space point coincides or no parking space points coincide), the parking space information of the parking space can be determined with reference to the method shown in FIG. 8b.
  • FIG. 8 b is a schematic flowchart of another method for determining parking space information of a parking space based on the first parking space point information and the first parking space frame information provided by the present application.
  • the method includes the following steps:
  • Step 811 if only one parking space point coincides between the coordinates of the n first parking space points and the coordinates of the m second parking space points, or no parking space points coincide, the parking space detection device determines the first average of the n first confidences and the second average of the m second confidences.
  • the parking space detection device determines the first average value of n first confidence levels and the second average value of m second confidence levels.
  • for example, the n first confidences are A11, A12, A13 and A14, and the m second confidences are A21, A22, A23 and A24; then the first average is (A11+A12+A13+A14)/4, and the second average is (A21+A22+A23+A24)/4.
  • Step 812 the parking space detection device determines whether the first average value is greater than or equal to the threshold; if greater than or equal to, perform step 813 ; if less, perform step 814 .
  • the threshold may be 0.7, for example.
  • Step 813 the parking space detection device may determine the parking space information according to the n pieces of first parking space point information.
  • for example, the parking space detection device can eliminate the m pieces of second parking space point information and determine the first parking space point information as the parking space information of the parking space.
  • Step 814 the parking space detection device determines whether the second average is greater than or equal to the threshold; if so, step 815 is executed; if not, it indicates that both the first parking space point information and the second parking space point information are inaccurate, and a new parking space can be searched for.
  • the threshold may be 0.7, for example.
  • Step 815 the parking space detection device may determine the parking space information of the parking space according to the m pieces of second parking space point information.
  • the parking space detection device may eliminate n pieces of first parking space point information, and determine the second parking space point information as the parking space information of the parking space.
  • in this way, the parking space information can be determined based on the parking space points with a higher confidence, and prediction results with a lower confidence can be filtered out, thereby improving the accuracy of the parking space information.
  • when the confidence of the first parking space points is low, the first parking space point information can be filtered out and the first parking space frame information used to determine the parking space information, so that the recall rate of parking space detection can be improved in scenarios where parking space point detection fails; when the confidence of the first parking space frame is low, the first parking space frame information can be filtered out and the first parking space point information used to determine the parking space information, so that the recall rate of parking space detection can also be improved in scenarios where parking space frame detection fails.
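The decision flow of steps 811 to 815 can be sketched as follows, with the 0.7 threshold from the example above; the function name and return values are illustrative.

```python
def choose_source(first_confs, second_confs, threshold=0.7):
    """Steps 811-815 sketch: when fewer than two parking space points
    coincide, compare the averages of the first and second confidences
    against a threshold (0.7 in the document's example) to pick which
    information determines the parking space. None means neither average
    reaches the threshold and a new parking space should be searched for."""
    first_avg = sum(first_confs) / len(first_confs)    # step 811
    second_avg = sum(second_confs) / len(second_confs)
    if first_avg >= threshold:
        return "first"    # step 813: use the n first parking space points
    if second_avg >= threshold:
        return "second"   # step 815: use the m second parking space points
    return None           # both averages below threshold
```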
  • taking the network structure as an example: the network structure includes a feature extraction model, a parking space point analysis model, a parking space frame transition model, and a parking space frame analysis model. These models may be integrated in one functional module or in different functional modules, which is not limited in this application.
  • the total loss function of the network structure is the linear superposition of the loss functions of the parking space point analysis model and the parking space frame analysis model.
  • the network structure may be, for example, ResNet-50 or VGG-16 (Visual Geometry Group Network 16), which is not limited in this application.
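The total-loss composition stated above can be sketched as follows. The text only says the total loss is a linear superposition of the two branch losses; the weight parameters are an assumption added for illustration:

```python
def total_loss(point_loss, frame_loss, w_point=1.0, w_frame=1.0):
    """Linear superposition of the parking-point branch loss and the
    parking-frame branch loss, used to train the shared network jointly.
    The weights are illustrative; the source only states 'linear superposition'."""
    return w_point * point_loss + w_frame * frame_loss
```
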
  • the method includes the following steps:
  • Step 901 the parking space detection device acquires a panoramic image of the parking space.
  • step 902 the parking space detection device extracts feature information of the panoramic image of the parking space.
  • the feature information may represent the panoramic image itself, such as the points, lines, and/or circles in the image.
  • Step 903: the parking space detection device inputs the feature information of the panoramic image of the parking space into the parking space point analysis model to identify parking space point information, obtaining the first parking space point information; and inputs the feature information of the panoramic image into the parking space frame transition model to identify parking space frame transition information, obtaining the parking space frame transition information.
  • the parking space point analysis model and the parking space frame transition model are parallel models at the same level.
  • the input to the parking space point analysis model and to the parking space frame transition model is the same feature information of the panoramic image; that is, the acquired feature information is fed to both models. Therefore, the model that extracts the feature information from the panoramic image in step 902 can be relatively generalized, and at the same detection accuracy a relatively generalized model reduces the demand for the amount of raw data.
  • the first parking space point information includes but is not limited to the coordinates of the first parking space point and the first confidence corresponding to the coordinates of the first parking space point.
  • in step 903, the feature information of the panoramic image of the parking space is input into the parking space frame transition model, which can output the parking space frame transition information. The transition information may be, for example, line information or other information within the parking space frame.
  • Step 904: the parking space detection device inputs the parking space frame transition information and the first parking space point information into the parking space frame analysis model to identify the parking space frame information, obtaining the first parking space frame information.
  • the first parking space frame information includes but is not limited to the center coordinates of the first parking space frame and their corresponding confidence, the size (e.g., length and width) of the first parking space frame and its corresponding confidence, and the tilt angle of the first parking space frame and its corresponding confidence.
  • the first parking space point information is part of the information for determining the first parking space frame information, which can reduce the training difficulty of the parking space frame analysis model and help to improve the accuracy of the first parking space frame information.
  • the first parking space point information input into the parking space frame analysis model in step 904 may be intermediate features used to characterize the first parking space point information.
  • Step 905 the parking space detection device determines the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space detection device can obtain the second parking space point information according to the first parking space frame information, for example in the manner described above in FIG. 5. Alternatively, the first parking space frame information may also be directly converted into the second parking space point information.
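One way to convert first parking space frame information directly into second parking space point coordinates is to derive the four corners of the rotated frame from its center, size, and tilt angle. The function below is a geometric sketch; the corner ordering and parameter names are assumptions, not from the source:

```python
import math

def frame_to_corners(cx, cy, length, width, theta):
    """Derive the four corner points of a parking frame from its center
    (cx, cy), size (length, width), and tilt angle theta (radians).
    This is one illustrative way to obtain 'second parking space point'
    coordinates from first parking space frame information."""
    dx, dy = length / 2.0, width / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = []
    for ox, oy in ((dx, dy), (-dx, dy), (-dx, -dy), (dx, -dy)):
        # rotate the axis-aligned offset by theta, then translate to the center
        corners.append((cx + ox * cos_t - oy * sin_t,
                        cy + ox * sin_t + oy * cos_t))
    return corners
```
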
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the second parking space point information.
  • the acquisition of the first parking space point information and of the first parking space frame information in the above steps can be regarded as two parallel tasks based on feature information extracted from the same parking space image; in this way, the feature information of the panoramic image can be extracted by a more generalized model. Moreover, the intermediate result of the first task, i.e., the first parking space point information, is used as an input to the task of obtaining the first parking space frame information, which improves the accuracy of the obtained first parking space frame information.
  • more accurate local information of the parking space can be obtained from the first parking space point information, and more accurate global information from the first parking space frame information; by fusing the first parking space point information with the second parking space point information derived from the first parking space frame information, parking space information that is accurate both locally and globally can be obtained.
  • the parking space information can also be sent to the planning control device (such as the control system 230 in FIG. 2b ), so that the planning control device can control the movement of the vehicle.
  • the planning control device may be integrated into the vehicle, or may also be integrated into the cloud server shown above, or may also be an independent device, etc., which is not limited in this application.
  • the planning control device may plan the parking route according to the parking space information of the parking space, obtain the parking route, and control the vehicle to park according to the parking route.
  • FIG. 10 is a schematic flowchart of another parking space detection method provided by this application.
  • This method can be applied to the parking space management server 102 in the above-mentioned FIG. 2a, or to the vehicle in the above-mentioned FIG. 2b.
  • the parking space management server and the apparatus in the vehicle that executes the method can be collectively referred to as a parking space detection device.
  • the parking space detection device may be the parking space management server 102 shown in Figure 2a above, or may be the vehicle shown in Figure 2b above.
  • the method includes the following steps:
  • Step 1001 the parking space detection device acquires the image information of the parking space.
  • for step 1001, reference may be made to the introduction of step 301 above, which is not repeated here.
  • Step 1002 the parking space detection device acquires first parking space point information according to the image information of the parking space.
  • for step 1002, reference may be made to the introduction of step 302 above, which is not repeated here.
  • Step 1003 the parking space detection device determines the first parking space frame information according to the image information of the parking space and the first parking space point information.
  • Step 1004: the parking space detection device determines the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • this step 1004 can be understood as follows: the parking space detection device can determine the parking space information of the parking space according to the first parking space point information; or according to the first parking space frame information; or according to both the first parking space point information and the first parking space frame information.
  • the accuracy of the first parking space frame information can be improved, thereby improving the accuracy of the parking space information.
  • when the parking space information of the parking space is determined from both the first parking space point information and the first parking space frame information, the parking space information can still be determined even if the first parking space point information fails (e.g., the parking space points are occluded or painted over) or the first parking space frame information fails (e.g., the parking space frame was not photographed completely).
  • the detecting device includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the modules and method steps described in the embodiments disclosed herein. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 11 and FIG. 12 are schematic structural diagrams of a possible detection device provided in the present application. These detection devices can be used to realize the functions in the above method embodiments, and thus can also realize the beneficial effects possessed by the above method embodiments.
  • the detection device can be the parking space management server 102 in FIG. 2a, the vehicle in FIG. 2b above, a module (such as a chip) applied to a cloud server, or a module (such as a chip) applied to a vehicle.
  • the detection device 1100 includes an acquisition module 1101 and a processing module 1102 .
  • the detection device 1100 is used to implement the functions in any one of the method embodiments shown in FIG. 3 to FIG. 10 above.
  • the acquisition module 1101 is used to acquire the image information of the parking space; the processing module 1102 is used to determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • obtaining module 1101 and the processing module 1102 in the embodiment of the present application may be implemented by a processor or processor-related circuit components.
  • the present application further provides a detection device 1200 .
  • the detection device 1200 may include a processor 1201 .
  • the detection device 1200 may further include a memory 1202 for storing instructions executed by the processor 1201 or storing input data required by the processor 1201 to execute the instructions or storing data generated by the processor 1201 after executing the instructions.
  • the processor 1201 is used to execute the functions of the acquisition module 1101 and the processing module 1102 described above.
  • processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), and may also be other general processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
  • software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • an exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC can be located in the parking space detection device.
  • the processor and the storage medium may also exist in the parking space detection device as discrete components.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, the method may be implemented in whole or in part in the form of a computer program product.
  • a computer program product consists of one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on a computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a parking space detection device, user equipment or other programmable devices.
  • computer programs or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer programs or instructions may be transmitted from one website, computer, server, or data center to another by wired or wireless means.
  • a computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • available media can be magnetic media, such as floppy disks, hard disks, and magnetic tape; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).
  • “plurality” means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the associated objects are in an “or” relationship; in the formulas of this application, the character “/” indicates a “division” relationship between the associated objects.
  • the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not to be construed as preferred or advantageous over other embodiments or designs; rather, the use of the word is intended to present a concept in a concrete manner, and does not limit this application.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application discloses a parking space detection method and apparatus, applicable to fields such as autonomous driving, assisted driving, and three-dimensional modeling, and intended to solve the problem of poor parking space information accuracy in the prior art. The parking space detection method may include: acquiring image information of a parking space; determining first parking space point information according to the image information of the parking space; determining first parking space frame information according to the image information of the parking space; and determining the parking space information of the parking space according to the first parking space point information and the first parking space frame information. By determining the parking space information from both the first parking space point information and the first parking space frame information, two different kinds of feature information complement each other, so that parking space information that is relatively accurate both locally and globally can be obtained.

Description

Parking space detection method and apparatus
Technical field
This application relates to the technical field of intelligent driving, and in particular to a parking space detection method and apparatus.
Background
With the development of autonomous driving technology, automatic parking has risen rapidly. Automatic parking means automatically controlling a vehicle to park into a parking space without manual control, thereby reducing the user's parking operations. During automatic parking, parking space detection is required. Parking space detection includes finding parking spaces within the perception range (i.e., the range the sensors can cover) and continuously providing distance information for a specific parking space during the parking process.
Current parking space detection schemes mainly determine the four parking space points (also called corner points) of a parking space. When a parking space point is occluded, painted over, worn, or reflective, the accuracy of the determined parking space is poor, or the parking space cannot be determined at all.
In summary, how to improve the accuracy of parking space detection is a technical problem that urgently needs to be solved.
Summary of the invention
This application provides a parking space detection method and apparatus for improving, as far as possible, the accuracy of the detected parking space information.
In a first aspect, this application provides a parking space detection method. The method may include: acquiring image information of a parking space; determining first parking space point information according to the image information of the parking space; determining first parking space frame information according to the image information of the parking space; and determining the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
The method may be executed by a processor on the ego vehicle, by a cloud server, or the like.
Based on this solution, determining the parking space information from both the first parking space point information and the first parking space frame information allows two different kinds of feature information to complement each other, so that parking space information that is relatively accurate from local to global can be obtained. Further, the acquired image information of the parking space enters two branches, one used to determine the first parking space point information and the other to determine the first parking space frame information. In this way, when the image information is obtained with a network structure (also called a network model), a relatively generalized network structure can be used, which, at the same detection accuracy, reduces the demand for the amount of raw data.
In a possible implementation, the parking space information may be parking space frame information. Exemplarily, the parking space information of a parking space includes but is not limited to the coordinates of the parking space points, the center coordinates of the parking space frame, the size (e.g., length and width) of the parking space frame, the tilt angle of the parking space frame, and so on.
In a possible implementation, n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame may be determined according to the first parking space point information and the first parking space frame information, where n and m are both positive integers; the parking space information may then be determined according to the n pieces of first parking space point information and the m pieces of second parking space point information. It should be understood that the parking space information of a single parking space can be obtained based on one parking space frame; the parking space detection method provided in this application can further obtain the parking space information corresponding to multiple parking spaces, for example, the parking space information of a parking lot.
Further, optionally, the first parking space point information includes the coordinates of the first parking space points, and the second parking space point information includes the coordinates of the second parking space points.
In a possible implementation, it is determined that k parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points; when k≥3, the parking space information is determined according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k≤2, the parking space information is determined according to the n pieces of first parking space point information.
In a possible implementation, the first parking space point information further includes a first confidence corresponding to the coordinates of the first parking space point, and the second parking space point information further includes a second confidence corresponding to the coordinates of the second parking space point.
Further, optionally, it is determined that k parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points. On this basis, four cases of determining the parking space information are shown below by way of example.
Case one: if k≥3, i.e., at least three parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points, then for each first parking space point coordinate among the n and its corresponding second parking space point coordinate, the coordinates of the first parking space point and of the corresponding second parking space point may be weight-averaged according to the first confidence and the second confidence to obtain weighted-average parking space point coordinates, and the parking space information is determined according to the weighted-average coordinates.
Based on case one, obtaining the parking space information by weight-averaging the coordinates of the first parking space point and of the corresponding second parking space point helps improve the accuracy of the parking space information.
In a possible implementation, the weighted-average parking space point coordinates = (coordinates of the first parking space point × the first confidence corresponding to those coordinates + coordinates of the corresponding second parking space point × the second confidence corresponding to those coordinates) / 2.
Case two: if k=2, i.e., exactly two parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points, the parking space information is determined according to the n pieces of first parking space point information.
Based on case two, when two parking space points coincide, using the more accurate first parking space point information to determine the parking space information helps improve its accuracy.
Case three: if k<2, i.e., one or no parking space point coincides between the coordinates of the n first parking space points and the coordinates of the m second parking space points, and the mean of the n first confidences is greater than a threshold, the parking space information is determined according to the n pieces of first parking space point information.
Case four: if k<2, i.e., one or no parking space point coincides between the coordinates of the n first parking space points and the coordinates of the m second parking space points, and the mean of the m second confidences is greater than the threshold, the parking space information is determined according to the m pieces of second parking space point information.
Based on case three or case four, when few of the n first parking space points and the m second parking space points coincide, the parking space information can be determined based on the parking space point information with the higher confidence, thereby improving the accuracy of the parking space information.
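The weighted-average rule in case one can be sketched as follows. Note that, exactly as written in the formula above, the sum is divided by 2 rather than normalized by the sum of the confidences; the function name is illustrative:

```python
def fuse_point(p1, c1, p2, c2):
    """Weighted average of two coincident parking space points, following
    the stated formula: (coord1 * conf1 + coord2 * conf2) / 2.
    p1, p2 are (x, y) pixel coordinates; c1, c2 are their confidences."""
    return ((p1[0] * c1 + p2[0] * c2) / 2.0,
            (p1[1] * c1 + p2[1] * c2) / 2.0)
```

With both confidences equal to 1.0 this reduces to the ordinary midpoint of the two detections.
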
In a possible implementation, parking space frame transition information may be determined according to the image information of the parking space, and the first parking space frame information is determined according to the parking space frame transition information and the first parking space point information.
Determining the first parking space frame information in combination with the obtained first parking space point information, i.e., using the first parking space point information as part of the information for determining the first parking space frame information, can improve the accuracy of the first parking space frame information.
In a possible implementation, third parking space point information may be determined according to the first parking space frame information; if it is determined that first parking space point information exists within a preset range of the third parking space point information, the third parking space point information is replaced with the first parking space point information to obtain the second parking space point information.
Replacing the third parking space point information, which was converted from the first parking space frame information, with the first parking space point information helps improve the accuracy of the second parking space point information. It should be understood that relatively accurate parking space point information can usually be obtained through the parking space point task.
In a possible implementation, the first parking space frame information includes the center coordinates of the first parking space frame; second parking space frame information, including the center coordinates of a second parking space frame, is determined according to the first parking space point information; and the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame are determined according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame.
Determining which first and second parking space point information belongs to the same parking space via the distance between the center of the second parking space frame and the center of the first parking space frame makes the determination process simple and accurate.
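The center-distance matching step above can be sketched as follows. The centroid-of-points construction of the second frame center and the pixel distance threshold are illustrative assumptions; the source does not give a concrete threshold:

```python
def same_frame(first_points, frame_center, max_dist=50.0):
    """Decide whether a set of detected parking space points and a detected
    parking frame belong to the same parking space by comparing the centroid
    of the points (serving as the second frame center) with the first frame
    center. max_dist is an assumed pixel threshold."""
    cx = sum(x for x, _ in first_points) / len(first_points)
    cy = sum(y for _, y in first_points) / len(first_points)
    dx, dy = cx - frame_center[0], cy - frame_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= max_dist
```
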
In a possible implementation, the image information of the parking space may be the image information of a panoramic image of the parking space. The image information described in this application may be the information contained in an image or a video.
The image information of the panoramic image can be extracted by a generalized model, so that at the same detection accuracy the demand for the amount of raw data can be reduced.
In a second aspect, this application provides a parking space detection method. The method includes: acquiring image information of a parking space; determining first parking space point information according to the image information of the parking space; determining first parking space frame information according to the image information of the parking space and the first parking space point information; and determining the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
Based on this solution, determining the first parking space frame information in combination with the obtained first parking space point information can improve the accuracy of the first parking space frame information and, in turn, of the parking space information. Moreover, by determining the parking space information from the first parking space point information and the first parking space frame information, the parking space information can still be determined when the first parking space point information fails (e.g., the parking space points are occluded or painted over) or the first parking space frame information fails (e.g., the parking space frame was not photographed completely). In other words, even when one kind of feature information fails, this solution can still identify the parking space information, thereby improving the recall rate of parking space detection.
In a possible implementation, the parking space information may be parking space frame information.
For possible implementations of the second aspect, reference may be made to the introduction of the first aspect above, which is not repeated here.
In a third aspect, this application provides a detection apparatus, which may be used to implement the first aspect or any method of the first aspect, and includes corresponding functional modules respectively used to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the detection apparatus may be a vehicle or a cloud server, or a module used in a cloud server or a vehicle, such as a chip, a chip system, or a circuit. For beneficial effects, see the description of the first aspect above, which is not repeated here. The detection apparatus may include a processor, which may be configured to support the detection apparatus in executing the corresponding functions shown in the first aspect. Optionally, the detection apparatus may further include a memory, which may be coupled to the processor and stores the program instructions and data necessary for the detection apparatus.
The processor is configured to acquire image information of a parking space, determine first parking space point information according to the image information of the parking space, determine first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
In a possible implementation, the parking space information may be parking space frame information.
In a possible implementation, the processor is specifically configured to: determine, according to the first parking space point information and the first parking space frame information, n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame, n and m being positive integers; and determine the parking space information according to the n pieces of first parking space point information and the m pieces of second parking space point information.
In a possible implementation, the first parking space point information includes the coordinates of the first parking space points, and the second parking space point information includes the coordinates of the second parking space points.
In a possible implementation, the processor is specifically configured to: determine that k parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points; when k≥3, determine the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k≤2, determine the parking space information according to the n pieces of first parking space point information.
In a possible implementation, the first parking space point information further includes a first confidence corresponding to the coordinates of the first parking space point, and the second parking space point information further includes a second confidence corresponding to the coordinates of the second parking space point.
In a possible implementation, the processor is specifically configured to: if k≥3, i.e., at least three parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points, then for each first parking space point coordinate and its corresponding second parking space point coordinate, weight-average the two coordinates according to the first confidence and the second confidence to obtain weighted-average parking space point coordinates, and determine the parking space information according to the weighted-average coordinates; or, if k=2, i.e., exactly two parking space points coincide, determine the parking space information according to the n pieces of first parking space point information; or, if k<2, i.e., one or no parking space point coincides, and the mean of the n first confidences is greater than a threshold, determine the parking space information according to the n pieces of first parking space point information; or, if k<2 and the mean of the m second confidences is greater than the threshold, determine the parking space information according to the m pieces of second parking space point information.
In a possible implementation, the weighted-average parking space point coordinates = (coordinates of the first parking space point × the corresponding first confidence + coordinates of the corresponding second parking space point × the corresponding second confidence) / 2.
In a possible implementation, the processor is specifically configured to: determine parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
In a possible implementation, the processor is further configured to: determine third parking space point information according to the first parking space frame information; and, if first parking space point information exists within a preset range of the third parking space point information, replace the third parking space point information with the first parking space point information to obtain the second parking space point information.
In a possible implementation, the first parking space frame information includes the center coordinates of the first parking space frame; the processor is specifically configured to: determine second parking space frame information, including the center coordinates of a second parking space frame, according to the first parking space point information; and determine, according to the center coordinates of the second parking space frame and of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
In a possible implementation, the image information of the parking space includes the image information of a panoramic image of the parking space.
In a fourth aspect, this application provides a detection apparatus used to implement the first aspect or any method of the first aspect, including corresponding functional modules respectively used to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
The detection apparatus may include an acquisition module and a processing module. The acquisition module is used to acquire image information of a parking space; the processing module is used to determine first parking space point information according to the image information of the parking space, determine first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
In a possible implementation, the parking space information may be parking space frame information.
In a possible implementation, the processing module is specifically configured to: determine, according to the first parking space point information and the first parking space frame information, n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame, n and m being positive integers; and determine the parking space information according to the n pieces of first parking space point information and the m pieces of second parking space point information.
In a possible implementation, the first parking space point information includes the coordinates of the first parking space points, and the second parking space point information includes the coordinates of the second parking space points.
In a possible implementation, the processing module is specifically configured to: determine that k parking space points coincide between the coordinates of the n first parking space points and the coordinates of the m second parking space points; when k≥3, determine the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k≤2, determine the parking space information according to the n pieces of first parking space point information.
In a possible implementation, the first parking space point information further includes a first confidence corresponding to the coordinates of the first parking space point, and the second parking space point information further includes a second confidence corresponding to the coordinates of the second parking space point.
In a possible implementation, the processing module is specifically configured to: if k≥3, i.e., at least three parking space points coincide, weight-average each pair of coinciding first and second parking space point coordinates according to the first confidence and the second confidence, and determine the parking space information according to the weighted-average coordinates; or, if k=2, i.e., exactly two parking space points coincide, determine the parking space information according to the n pieces of first parking space point information; or, if k<2, i.e., one or no parking space point coincides, and the mean of the n first confidences is greater than a threshold, determine the parking space information according to the n pieces of first parking space point information; or, if k<2 and the mean of the m second confidences is greater than the threshold, determine the parking space information according to the m pieces of second parking space point information.
In a possible implementation, the weighted-average parking space point coordinates = (coordinates of the first parking space point × the corresponding first confidence + coordinates of the corresponding second parking space point × the corresponding second confidence) / 2.
In a possible implementation, the processing module is specifically configured to: determine parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
In a possible implementation, the processing module is further configured to: determine third parking space point information according to the first parking space frame information; and, if first parking space point information exists within a preset range of the third parking space point information, replace the third parking space point information with the first parking space point information to obtain the second parking space point information.
In a possible implementation, the first parking space frame information includes the center coordinates of the first parking space frame; the processing module is specifically configured to: determine second parking space frame information, including the center coordinates of a second parking space frame, according to the first parking space point information; and determine, according to the center coordinates of the second parking space frame and of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
In a possible implementation, the image information of the parking space may be the image information of a panoramic image of the parking space.
In a fifth aspect, this application provides a detection apparatus, which may be used to implement the second aspect or any method of the second aspect, including corresponding functional modules respectively used to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the detection apparatus may be a vehicle or a cloud server, or a module used in a cloud server or a vehicle, such as a chip, a chip system, or a circuit. For beneficial effects, see the description of the first aspect above, which is not repeated here. The detection apparatus may include a processor, which may be configured to support the detection apparatus in executing the corresponding functions shown above. Optionally, the detection apparatus may further include a memory, which may be coupled to the processor and stores the program instructions and data necessary for the detection apparatus.
The processor is configured to acquire image information of a parking space; determine first parking space point information according to the image information of the parking space; determine first parking space frame information according to the image information of the parking space and the first parking space point information; and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
For possible implementations of the fifth aspect, reference may be made to the introduction of the second aspect above, which is not repeated here.
In a sixth aspect, this application provides a detection apparatus used to implement the second aspect or any method of the second aspect, including corresponding functional modules respectively used to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
The detection apparatus may include an acquisition module and a processing module. The acquisition module is used to acquire image information of a parking space; the processing module is used to determine first parking space point information according to the image information of the parking space, determine first parking space frame information according to the image information of the parking space and the first parking space point information, and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
For the technical effects achievable by either of the fifth and sixth aspects, reference may be made to the description of the beneficial effects in the second aspect above, which is not repeated here.
In a seventh aspect, this application provides a computer-readable storage medium storing a computer program or instructions which, when executed by a detection apparatus, cause the detection apparatus to execute the method of the first aspect or any possible implementation of the first aspect, or the method of the second aspect or any possible implementation of the second aspect.
In an eighth aspect, this application provides a computer program product comprising a computer program or instructions which, when executed by a detection apparatus, cause the detection apparatus to execute the method of the first aspect or any possible implementation of the first aspect, or the method of the second aspect or any possible implementation of the second aspect.
Brief description of the drawings
FIG. 1a is a schematic diagram of a parking space frame provided by this application;
FIG. 1b is a schematic diagram of a parking space tilt angle provided by this application;
FIG. 2a is a schematic diagram of a system architecture provided by this application;
FIG. 2b is a schematic diagram of another system architecture provided by this application;
FIG. 2c is a schematic diagram of the positional relationship between a vehicle and its sensors provided by this application;
FIG. 3 is a schematic flowchart of a parking space detection method provided by this application;
FIG. 4a is a schematic flowchart of a method for acquiring a panoramic image of a parking space provided by this application;
FIG. 4b is a schematic diagram of images acquired from four angles provided by this application;
FIG. 5 is a schematic flowchart of a method for acquiring second parking space point information provided by this application;
FIG. 6 is a schematic flowchart of a method for determining that a first parking space point and a second parking space point belong to the same parking space frame provided by this application;
FIG. 7 is a schematic flowchart of a method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by this application;
FIG. 8a is a schematic flowchart of a method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by this application;
FIG. 8b is a schematic flowchart of another method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by this application;
FIG. 9 is a schematic flowchart of another parking space detection method provided by this application;
FIG. 10 is a schematic flowchart of another parking space detection method provided by this application;
FIG. 11 is a schematic structural diagram of a parking space detection apparatus provided by this application;
FIG. 12 is a schematic structural diagram of a parking space detection apparatus provided by this application.
Detailed description of embodiments
The embodiments of this application are described in detail below with reference to the accompanying drawings.
First, some terms used in this application are explained. It should be noted that these explanations are provided to help those skilled in the art understand, and do not limit the scope of protection claimed by this application.
1) Entrance line, separation lines, parking marker points
As shown in FIG. 1a, which is a schematic diagram of a parking space frame provided by this application, P1P2 is called the entrance line; P1P4 and P2P3 may both be called separation lines. The intersections of the entrance line and the separation lines may be called entrance parking space points (or parking marker points), i.e., P1 and P2 in FIG. 1a are entrance parking space points, while P3 and P4 may be called non-entrance parking space points. P1, P2, P3, and P4 may collectively be called parking space points; a parking space point can also be understood as a corner of the parking space frame, and is therefore also called a parking space corner point. Usually the separation lines are perpendicular to the entrance line; a parking space whose separation lines are perpendicular to the entrance line may be called a vertical or parallel parking space.
2) Parking space tilt angle
The parking space tilt angle, also called the tilt angle of the separation line or of the parking space frame, is the angle between a separation line and the x-axis of the image. Usually, the angle between a separation line of the parking space frame and the x-axis of the image is θ; see FIG. 1b.
It should be noted that for slanted parking spaces the tilt angle needs to be determined. Algorithms of this type usually use template matching to determine the parking space tilt angle.
3) Distance between the parking space and the ego vehicle
The distance between the parking space and the ego vehicle is the distance from the center of the ego vehicle to an entrance parking space point. In FIG. 1a, distances S1 and S2 represent the distance between the parking space and the ego vehicle; in FIG. 1b, distances S3 and S4 do likewise.
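The two quantities defined above can be computed directly from point coordinates. The helper functions below are a sketch; the choice of P1 and P4 as the endpoints of the separation line is only an example:

```python
import math

def tilt_angle(p_entry, p_back):
    """Tilt angle of a separation line: the angle between the line from an
    entrance point (e.g., P1) to the corresponding non-entrance point
    (e.g., P4) and the image x-axis, in radians."""
    return math.atan2(p_back[1] - p_entry[1], p_back[0] - p_entry[0])

def distance_to_entry(ego_center, p_entry):
    """Distance from the ego-vehicle center to an entrance parking point
    (e.g., S1 or S2 in FIG. 1a)."""
    return math.hypot(p_entry[0] - ego_center[0], p_entry[1] - ego_center[1])
```
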
Two possible system architectures to which this application is applicable are shown below by way of example.
Based on the above, FIG. 2a is a schematic diagram of a system architecture to which this application is applicable. The system may include a vehicle 101 and a vehicle management server 102. The vehicle 101 is a vehicle capable of capturing images of its surroundings and of remote communication. Exemplarily, the vehicle 101 is provided with sensors 1011 that can capture information about the vehicle's surroundings. A sensor may be, for example, an image capture device, such as at least one of a fisheye camera, a monocular camera, or a depth camera. FIG. 2a takes as an example sensors arranged in the four directions of the vehicle (front, rear, left, and right; see FIG. 2c), so as to capture environmental information in those four directions. The field of view of each of the four fisheye cameras may exceed 180 degrees, enabling omnidirectional capture of the vehicle's surroundings. It should be understood that the larger a sensor's field of view, the larger the range it can perceive. The remote communication function of the vehicle 101 can usually be implemented by a communication module arranged on the vehicle 101, such as a telematics box (TBOX) or a wireless communication system (see the introduction of the wireless communication system 244 in FIG. 2b below).
The vehicle management server 102 can implement the parking space detection function. The vehicle management server 102 may be a single server or a server cluster composed of multiple servers. It may also be, for example, a cloud server (also called the cloud, a cloud-side server, a cloud controller, or an Internet-of-Vehicles server). A cloud server is a general term for devices with data processing capability, which may include physical devices such as hosts or processors, virtual devices such as virtual machines or containers, and chips or integrated circuits. In addition, the vehicle management server 102 may integrate all functions on one independent physical device, or deploy different functions on different physical devices, which is not limited in this application. Usually, one vehicle management server 102 can communicate with multiple vehicles 101.
It should be noted that the numbers of vehicles 101, vehicle management servers 102, and sensors 1011 in the system architecture shown in FIG. 2a are merely examples, and this application does not limit them. In addition, the name of the vehicle management server 102 in this system is merely an example; in a specific implementation it may have other possible names, for example, parking space detection apparatus, which is not limited in this application. It should be understood that the vehicle 101 in FIG. 2a may be the vehicle of FIG. 2b below.
Referring to FIG. 2b, a schematic diagram of another system architecture to which this application is applicable: in one embodiment, the vehicle may be configured in a fully or partially autonomous driving mode. Components coupled to or included in the vehicle 200 may include a propulsion system 210, a sensor system 220, a control system 230, peripheral devices 240, a power supply 250, a computer system 260, and a user interface 270. The components of the vehicle 200 may be configured to work in a manner interconnected with each other and/or with other components coupled to the respective systems. For example, the power supply 250 may provide power to all components of the vehicle 200. The computer system 260 may be configured to receive data from, and control, the propulsion system 210, the sensor system 220, the control system 230, and the peripheral devices 240. The computer system 260 may also be configured to generate a display of images on the user interface 270 and receive input from the user interface 270.
It should be noted that in other examples the vehicle 200 may include more, fewer, or different systems, and each system may include more, fewer, or different components. Furthermore, the illustrated systems and components may be combined or divided in any manner, which is not specifically limited in this application.
The propulsion system 210 may provide powered motion for the vehicle 200. As shown in FIG. 2b, the propulsion system 210 may include an engine/motor 214, an energy source 213, a transmission 212, and wheels/tires 211. The propulsion system 210 may additionally or alternatively include components other than those shown in FIG. 2b, which is not specifically limited in this application.
The sensor system 220 may include several sensors for sensing information about the environment in which the vehicle 200 is located. As shown in FIG. 2b, the sensors of the sensor system 220 may include a camera sensor 223, which may be used to capture multiple images of the surroundings of the vehicle 200 and may be a still camera or a video camera. Further, optionally, the sensor system 220 may also include a Global Positioning System (GPS) 226, an inertial measurement unit (IMU) 225, a lidar sensor, a millimeter-wave radar sensor, and an actuator 221 for modifying the position and/or orientation of the sensors. The millimeter-wave radar sensor may use radio signals to sense targets in the surroundings of the vehicle 200; in some embodiments, besides sensing targets, the millimeter-wave radar 222 may also be used to sense the speed and/or heading of a target. The lidar 224 may use laser light to sense targets in the environment in which the vehicle 200 is located. The GPS 226 may be any sensor used to estimate the geographic location of the vehicle 200; to this end, the GPS 226 may include a transceiver that estimates the position of the vehicle 200 relative to the Earth based on satellite positioning data. In an example, the computer system 260 may use the GPS 226 in combination with map data to estimate the road on which the vehicle 200 is traveling. The IMU 225 may be used to sense position and orientation changes of the vehicle 200 based on inertial acceleration and any combination thereof. In some examples, the combination of sensors in the IMU 225 may include, for example, an accelerometer and a gyroscope; other combinations of sensors in the IMU 225 are also possible.
The control system 230 controls the operation of the vehicle 200 and its components. The control system 230 may include various elements, including a steering unit 236, a throttle 235, a braking unit 234, a sensor fusion algorithm 233, a computer vision system 232, a route control system 234, and an obstacle avoidance system 237. The steering system 236 is operable to adjust the heading of the vehicle 200; for example, in one embodiment it may be a steering wheel system. The throttle 235 is used to control the operating speed of the engine 214 and thereby the speed of the vehicle 200. The control system 230 may additionally or alternatively include components other than those shown in FIG. 2b, which is not specifically limited in this application.
The braking unit 234 is used to control the vehicle 200 to decelerate; it may use friction to slow the wheels 211. In other embodiments, the braking unit 234 may convert the kinetic energy of the wheels 211 into electric current. The braking unit 234 may also take other forms to slow the rotation of the wheels 211 and thereby control the speed of the vehicle 200. The computer vision system 232 may be operable to process and analyze the images captured by the camera sensor 223 in order to identify targets and/or features in the surroundings of the vehicle 200. The targets and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 232 may use target recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 232 may be used to map the environment, track targets, estimate the speed of targets, and so on. The route control system 234 is used to determine the driving route of the vehicle 200. In some embodiments, the route control system 242 may combine data from the sensor system 220, the GPS 226, and one or more predetermined maps to determine a driving route (such as a parking route) for the vehicle 200. The obstacle avoidance system 237 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 200. In another example, the control system 230 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be removed.
The peripheral devices 240 may be configured to allow the vehicle 200 to interact with external sensors, other vehicles, and/or a user. To this end, the peripheral devices 240 may include, for example, a wireless communication system 244, a touchscreen 243, a microphone 242, and/or a speaker 241. The peripheral devices 240 may additionally or alternatively include components other than those shown in FIG. 2b, which is not specifically limited in this application.
In some embodiments, the peripheral devices 240 provide a means for a user of the vehicle 200 to interact with the user interface 270. For example, the touchscreen 243 may provide information to a user of the vehicle 200, and the user interface 270 may also operate the touchscreen 243 to receive user input. In other cases, the peripheral devices 240 may provide a means for the vehicle 200 to communicate with other devices located in the vehicle. For example, the microphone 242 may receive audio (e.g., voice commands or other audio input) from a user of the vehicle 200; similarly, the speaker 241 may output audio to a user of the vehicle 200.
The wireless communication system 244 may communicate wirelessly with one or more devices directly or via a communication network. For example, the wireless communication system 244 may use 3G cellular communication such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS), or 4G cellular communication such as long term evolution (LTE), or 5G cellular communication. The wireless communication system 244 may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system 244 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, are also possible; for example, the wireless communication system 244 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The power supply 250 may be configured to provide power to some or all components of the vehicle 200. To this end, the power supply 250 may include, for example, a rechargeable lithium-ion or lead-acid battery. In some examples, one or more battery packs may be configured to provide power; other power supply materials and configurations are also possible. In some examples, the power supply 250 and the energy source 213 may be implemented together, as in some all-electric vehicles. The components of the vehicle 200 may be configured to work in a manner interconnected with other components inside and/or outside their respective systems; to this end, the components and systems of the vehicle 200 may be communicatively linked together via a system bus, a network, and/or other connection mechanisms.
Some or all of the functions of the vehicle 200 are controlled by the computer system 260. The computer system 260 may include at least one processor 261, which executes instructions 2631 stored in a computer-readable medium such as the memory 263. The computer system 260 may also be multiple computing devices that control individual components or subsystems of the vehicle 200 in a distributed manner.
The processor 261 may be any conventional processor, such as a central processing unit (CPU). Alternatively, it may be another general-purpose processor, a digital signal processor (DSP), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. A general-purpose processor may be a microprocessor or any conventional processor. It should be understood that this application does not limit the number of sensors or processors included in the above vehicle system. Although FIG. 2b functionally illustrates the processor, memory, and other elements of the computer system 260, those of ordinary skill in the art should understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a housing different from that of the computer system 260. Therefore, references to a processor or computer are to be understood as including references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering and deceleration components, may each have their own processor that performs only the computations related to that component's specific function.
In the various aspects described here, the processor may be located remotely from the vehicle and communicate with it wirelessly. In other aspects, some of the processes described here are executed on a processor arranged within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 263 may contain instructions 2631 (e.g., program logic) that can be executed by the processor 261 to perform various functions of the vehicle 200, including those described above. The memory 263 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 210, the sensor system 220, the control system 230, and the peripheral devices 240.
Besides the instructions 2631, the memory 263 may also store data, such as road maps, route information, the vehicle's position, direction, and speed, and other such vehicle data, as well as other information. Such information may be used by the vehicle 200 and the computer system 260 during operation of the vehicle 200 in autonomous, semi-autonomous, and/or manual modes.
The user interface 270 is used to provide information to or receive information from a user of the vehicle 200. Optionally, the user interface 270 may include one or more input/output devices within the set of peripheral devices 240, such as the wireless communication system 244, the touchscreen 243, the microphone 242, and the speaker 241.
The computer system 260 may control the functions of the vehicle 200 based on inputs received from the various subsystems (e.g., the propulsion system 210, the sensor system 220, and the control system 230) and from the user interface 270. For example, the computer system 260 may use input from the control system 230 to control the steering unit 236 to avoid obstacles detected by the sensor system 220 and the obstacle avoidance system 237. In some embodiments, the computer system 260 is operable to provide control over many aspects of the vehicle 200 and its subsystems.
Optionally, one or more of these components may be installed or associated separately from the vehicle 200. For example, the memory 263 may exist partially or completely separate from the vehicle 200. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are just an example; in practical applications, components in the above modules may be added or deleted according to actual needs, and FIG. 2b should not be understood as limiting the embodiments of this application.
It should be noted that the above vehicle includes but is not limited to an unmanned vehicle, a smart vehicle (such as an automated guided vehicle (AGV)), an electric vehicle, a digital car, or a smart manufacturing vehicle.
The parking space detection method provided by this application can be applied to fields such as advanced driving assistant systems (ADAS), autonomous driving systems, and intelligent driving systems, and is particularly suitable for functions related to automatic parking. It can also be applied to more advanced functions that use parking space information as a constraint, such as three-dimensional modeling based on parking space information.
As described in the background, the accuracy of parking space information determined by existing parking space detection methods is low. In view of this, this application proposes a parking space detection method that can improve the accuracy of parking space information.
Based on the above, the parking space detection method proposed by this application is described in detail below with reference to FIG. 3 to FIG. 10.
Referring to FIG. 3, a schematic flowchart of a parking space detection method provided by this application: the method is applicable to the parking space management server 102 in FIG. 2a above, or to the vehicle in FIG. 2b above. The apparatus executing the method, whether in the parking space management server or in the vehicle, may collectively be referred to as a parking space detection device. In other words, the parking space detection device may be the parking space management server 102 of FIG. 2a, or the vehicle of FIG. 2b. As shown in FIG. 3, the method includes the following steps.
Step 301: the parking space detection device acquires image information of the parking space.
In a possible implementation, the parking space detection device may acquire a panoramic image of the parking space; a possible implementation of acquiring the panoramic image is introduced in FIG. 4a below. Further, feature information of the panoramic image may be extracted based on the panoramic image of the parking space, i.e., the image information of the parking space is acquired. Here, the panoramic image of the parking space may also be called a surround-view overhead image or a bird's-eye view image of the parking space.
Exemplarily, the panoramic image of the parking space may serve as the input of a feature extraction model. In other words, the panoramic image is input into the feature extraction model, and the feature extraction model may output the feature information of the panoramic image (see FIG. 9 below). This feature information refers to points and/or lines and/or circles in the image; the feature information of the panoramic image is information that can represent the panoramic image itself, for example points and/or lines and/or circles. It should be understood that the feature extraction model may be obtained by supervised learning on samples, and stored in the parking space detection device or in a vehicle communicating with the parking space detection device.
步骤302,车位检测装置根据车位的图像信息确定第一车位点信息。
此处,第一车位点信息包括第一车位点的坐标。进一步,可选地,第一车位点信息还包括第一车位点的坐标对应的第一置信度。其中,第一车位点的坐标指第一车位点在图像中的像素坐标。示例性地,第一车位点在图像上的像素坐标是指第一车位点在图像上的像素的横纵坐标。应理解,车位检测装置根据车位的图像信息可能确定出N个第一车位点信息,每个第一车位点信息包括第一车位点的坐标,进一步,还包括第一车位点的坐标对应的第一置信度,N为正整数。
在一种可能的实现方式中,车位检测装置可根据全景图像的特征信息及车位点识别算法,确定车位的第一车位点信息。其中,车位点识别算法可以是车位点解析模型(或称为车位点识别模型或车位点解析器)中的算法。此处,该车位点解析模型的标签为车位点的坐标(即车位点在图像中的坐标)。也可以理解为,车位点解析模型的输出为车位点的坐标。应理解,该车位点解析模型可以是基于样本进行监督学习得到的,并存储于车位检测装置或与车位检测装置通信的车辆中。
基于该车位点解析模型,可将车位的全景图像的特征信息输入车位点解析模型,车位点解析模型进行多层卷积操作,可输出N个第一车位点的坐标;进一步,还可输出N个第一车位点的坐标中每个第一车位点的坐标对应的第一置信度。
需要说明的是,车位点识别算法也可以是基于灰度图像的车位点检测算法、基于二值图像的车位点检测算法、或基于轮廓曲线的车位点检测算法。基于轮廓曲线的车位点检测算法的过程例如可以是通过图像识别拟合出图像上的直线,再将两条直线的交点识别为车位点。再例如,以当前点位为圆心建立一个圆形区域,通过与当前点位明暗程度相同的区域所占整个圆形区域的比例确定当前点位是否为车位点:当该比例超过阈值时,判定当前点位为车位点。
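上述"圆形邻域明暗占比"的判定思路可用如下Python示意代码表达(邻域半径、占比阈值与明暗容差均为假设值,仅用于说明,并非本申请限定的实现):

```python
import numpy as np

def is_corner_point(gray, cx, cy, radius=7, ratio_thresh=0.6, tol=20):
    """以(cx, cy)为圆心取圆形邻域,统计与中心点明暗程度相近的像素占比,
    占比超过阈值时判定当前点位为车位点(示意实现,参数均为假设值)。"""
    h, w = gray.shape
    center_val = int(gray[cy, cx])
    same, total = 0, 0
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                total += 1
                if abs(int(gray[y, x]) - center_val) <= tol:
                    same += 1
    return same / total >= ratio_thresh
```

实际工程中通常改用向量化实现或现成的角点检测器,此处仅为展示文中描述的比例判定逻辑。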
步骤303,车位检测装置根据车位的图像信息确定第一车位框信息。
其中,第一车位框信息包括但不限于第一车位框的中心坐标、第一车位框的中心坐标对应的置信度、第一车位框的尺寸(例如长宽)、第一车位框的尺寸对应的置信度、第一车位框的倾斜角及第一车位框的倾斜角对应的置信度等。
如下,示例性地示出了两种可能的确定第一车位框信息的实现方式。
实现方式一,车位检测装置可根据全景图像的特征信息及车位框识别算法,确定第一车位框信息。
示例性地,当车位框识别算法为车位框解析模型(或称为车位框识别模型或车位框解析器)中的算法时,该车位框解析模型的标签可为车位框的中心坐标、车位框的长宽及车位框的倾斜角。也可以理解为,车位框解析模型的输出为车位框的中心坐标、车位框的长宽及车位框的倾斜角。应理解,该车位框解析模型可以是基于样本进行监督学习得到的,并存储于车位检测装置或与车位检测装置通信的车辆中。
基于该车位框解析模型,可将车位的全景图像的特征信息输入车位框解析模型,车位框解析模型进行多层卷积操作,可输出第一车位框信息。例如,车位框解析模型可输出第一车位框的中心坐标、第一车位框的中心坐标对应的置信度、第一车位框的长宽、第一车位框的长宽对应的置信度、第一车位框的倾斜角及第一车位框的倾斜角对应的置信度。需要说明的是,通常第一车位框的长和宽对应的置信度是相同的。
实现方式二,车位检测装置根据全景图像的特征信息、车位框识别算法及第一车位点信息,确定车位的第一车位框信息。
在一种可能的实现方式中,车位检测装置可根据车位的图像信息确定车位框过渡信息,并根据车位框过渡信息及第一车位点信息,确定所述第一车位框信息。
示例性地,当车位框识别算法包括车位框过渡模型和车位框解析模型中的算法时,可将车位的全景图像的特征信息输入车位框过渡模型,车位框过渡模型进行多层卷积操作,可输出车位框过渡信息,其中,车位框过渡信息例如可以为除车位点之外的线条信息、或车位框内的信息等。将车位框过渡信息和第一车位点信息输入车位框解析模型,车位框解析模型进行多层卷积操作,可输出第一车位框信息。关于第一车位框信息可参见上述实现方式一的介绍,此处不再赘述。换言之,上述得到的N个第一车位点信息也作为车位框解析模型的一部分输入。如此,可输出较为精确的第一车位框信息。
应理解,在确定第一车位框信息时,可能会受采集图像信息的设备的感知范围的限制导致获得的第一车位框不完整,从而导致获得的第一车位框信息精度较低。通过上述实现方式二,结合获得的第一车位点信息来确定第一车位框信息,可提高第一车位框信息的准确性。
进一步,可选地,车位检测装置确定出第一车位框信息后,还可根据第一车位框的长宽进行异常车位排除。例如,可将长宽不在合理值范围的第一车位框信息剔除。如此,可进一步提高第一车位框信息的准确性。
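上述按长宽合理值范围剔除异常车位框的做法可示意如下(长宽的合理范围为假设值,单位假定为厘米,实际取值应由标定与业务规则确定;数据结构亦为假设):

```python
def filter_slots(slots, length_range=(400, 800), width_range=(200, 400)):
    """剔除长或宽不在合理值范围内的第一车位框信息(示意实现)。
    slots中每个元素假设含有"size"字段,即(长, 宽)。"""
    kept = []
    for s in slots:
        length, width = s["size"]
        if (length_range[0] <= length <= length_range[1]
                and width_range[0] <= width <= width_range[1]):
            kept.append(s)
    return kept
```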
需要说明的是,上述步骤302和步骤303之间没有先后顺序。例如,可以先执行步骤302后执行步骤303,或者也可以先执行步骤303后执行步骤302,或者也可以步骤303和步骤302同时执行,本申请对此不做限定。
步骤304,车位检测装置可根据第一车位点信息和第一车位框信息,确定车位的车位信息。
此处,车位的车位信息包括但不限于车位点的坐标、车位框的中心坐标、车位框的尺寸(如长和宽)、车位框的倾斜角等。示例性地,车位信息可为车位框信息。
在一种可能的实现方式中,车位检测装置可根据第一车位框信息,确定第二车位点信息。此处,第二车位点信息包括M个第二车位点的坐标,进一步,还包括每个第二车位点的坐标对应的第二置信度。其中,第二车位点的坐标是指第二车位点在图像中的像素坐标。关于确定第二车位点信息的可能的方法可参见下述图5的介绍,此处不再赘述。
进一步,可选地,车位检测装置可根据第一车位点信息和第二车位点信息,确定车位的车位信息,可能的方法可参见下述图7、图8a和图8b的介绍。
需要说明的是,该车位信息即为最终输出的车位信息,可以显示在车辆的触摸屏上。另外,检测到的车位信息中的车位点的坐标实际上为图像信息中的像素坐标,需要结合传感器标定时得到的内参矩阵和外参矩阵将像素坐标转化为实际坐标。进一步,车辆可根据实际坐标对车位进行位置标定。
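像素坐标到实际坐标的转换可示意如下:假设已由标定得到的内参矩阵和外参矩阵推导出像平面到地面的单应性矩阵H(此处H为假设的已知量),则转换只是一次齐次坐标变换:

```python
import numpy as np

def pixel_to_ground(uv, H):
    """将像素坐标uv经单应性矩阵H变换为地面实际坐标(齐次坐标归一化)。
    H假设已由传感器标定得到的内参、外参推导而来。"""
    u, v = uv
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```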
通过上述步骤301至步骤304可以看出,通过第一车位点信息和第一车位框信息确定车位信息,可通过两种不同的特征信息进行互补,从而可获得从局部(如车位点的坐标)到全局(如车位分割线的倾斜角等)较为精确的车位信息。进一步,基于该方案,获取的车位的图像信息进入两个分支,其中一个用于确定第一车位点信息,另一个用于确定第一车位框信息。如此,当图像信息是采用网络结构(或称为网络模型)等获取时,可以采用较为泛化的网络结构,可以实现在相同检测准确性下,减少对原始数据量的需求。
如图4a所示,为本申请提供的一种获取车位的全景图像的方法流程示意图。该方法包括以下步骤:
步骤401,M个传感器分别采集原始图像,其中,M为正整数。
示例性地,M可等于4,结合上述图2a或图2b,传感器可设置于车辆的前、后、左、右四个方向,四个传感器可分别采集原始图像,得到4张原始图像。应理解,M也可以大于4,或者小于4。当M小于4时,传感器的视场角要尽可能大于90度,以实现对车辆周围的全方位覆盖。
在一种可能的实现方式中,M大于1时,每个传感器采集到的原始图像为全景图像中的部分角度图像(或称为环视图像或部分俯瞰图像),不同的角度图像可以标识车辆周围不同角度的环境信息。当M等于1时,传感器采集到的原始图像即为全景图像。
步骤402,车位检测装置获取M个传感器采集的原始图像。
结合上述图2a,车位检测装置为云服务器时,传感器采集到的原始图像可通过车辆中的通信模块发送至云服务器。结合上述图2b,车位检测装置为车辆时,传感器采集到的原始图像可传输至处理器。
步骤403,车位检测装置对获取的M个原始图像进行处理,获得M个环视图像。
在一种可能的实现方式中,车位检测装置可先对每个原始图像进行去畸变处理。示例性地,可基于传感器的内设修正参数对原始图像进行矫正。
步骤404,车位检测装置将M个环视图像分别进行单应变换,得到M个俯视图像。
在一种可能的实现方式中,环视图像可以为棋盘格,车位检测装置可获取棋盘格点的实际像素位置信息,并根据预设图像处理方法从去畸变后的原始图像中得到环视图像中棋盘格点的像素位置信息。其中,棋盘格点的实际像素位置信息可以通过预设的棋盘格点设置方法在原始图像中得到。由于棋盘格点的实际像素位置信息与环视图像中棋盘格点的像素位置信息之间存在预设比例关系,在获取预设转换比例信息后,根据预设转换比例信息和棋盘格点的实际像素位置信息即可得到环视图像中棋盘格点的像素位置信息。根据棋盘格点的实际像素位置信息和环视图像中棋盘格点的像素位置信息的对应关系得到单应性矩阵(Homography matrix),根据单应性矩阵对每一个去畸变后的原始图像进行计算处理,得到多个角度的环视图像,这一过程可以理解为进行单应性变换处理。每个角度的环视图像都对应了一个传感器所拍摄方向的环视图像。
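由棋盘格点对应关系求单应性矩阵的一种常见做法是直接线性变换(DLT)。下面给出一个基于NumPy的示意实现(至少需要4组非退化的对应点;实际工程中也常直接使用OpenCV的findHomography,此处仅为说明原理):

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """由对应点(源图像坐标 -> 目标俯视坐标)用DLT估计单应性矩阵。"""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # 取A的最小奇异值对应的右奇异向量作为H的解,并归一化到H[2,2]=1
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```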
步骤405,车位检测装置可拼接M个俯视图像,获得车位的全景图像。
在一种可能的实现方式中,车位检测装置可提取M个俯视图像中的重叠区域和非重叠区域的图像,对重叠区域的图像进行特征匹配,根据匹配结果融合重叠区域和非重叠区域,从而得到车位的全景图像。
示例性地,车辆周围设置4个鱼眼摄像头,通过上述步骤401至步骤404,可得到4张俯视图像,4张俯视图像共有4个重叠区域A、B、C和D,可参见图4b,重叠区域由两张环视图像根据图像特征匹配融合而成,即对两幅图的重叠区域进行特征点提取并匹配,重叠区域A的图像可以是对前方的图像和左方的图像进行特征匹配融合,重叠区域B可以是对前方的图像和右方的图像进行特征匹配融合,重叠区域C可以是对左方的图像和后方的图像进行特征匹配融合,重叠区域D可以是对右方的图像和后方的图像进行特征匹配融合,非重叠区域直接保留对应环视图像原始图像,最终融合成一张车位的全景图像。
通过上述步骤401至步骤405,可以获得车位的全景图像。基于该全景图像可以确定较精确的车位信息。
如下,示例性地示出了一种可能的确定第二车位点信息的方法。
如图5所示,为本申请提供的一种获取第二车位点信息的方法流程示意图。该方法包括以下步骤:
步骤501,车位检测装置获取车位的第一车位框信息。
该步骤501可参见前述步骤303的介绍,此处不再重复赘述。
步骤502,车位检测装置可将第一车位框信息转换为第三车位点信息。
在一种可能的实现方式中,第一车位框信息包括第一车位框的中心坐标、第一车位框的长宽、第一车位框的倾斜角度、第一车位框的中心坐标对应的置信度、第一车位框的长宽对应的置信度及第一车位框的倾斜角对应的置信度。
在一种可能的实现方式中,车位检测装置可基于第一车位框的中心坐标、第一车位框的倾斜角度及第一车位框的长宽,确定出第三车位点信息,此处,第三车位点信息包括第三车位点的坐标。进一步,车位检测装置还可基于第一车位框的中心坐标对应的置信度、第一车位框的倾斜角对应的置信度及第一车位框的长宽对应的置信度确定第三车位点对应的第三置信度。应理解,车位检测装置可将每个第一车位框信息转换为一组第三车位点信息,一组第三车位点信息中的各个第三车位点的坐标对应的第三置信度是相同的,第三置信度为第三车位点所属的第一车位框的中心坐标对应的置信度、第一车位框的长宽对应的置信度、第一车位框的倾斜角对应的置信度的平均值。也可以理解为,由一个第一车位框转换得到的一组第三车位点信息中每个第三车位点的坐标的置信度是相同的。例如,第一车位框信息A包括第一车位框的中心坐标对应的置信度A1,第一车位框的长宽对应的置信度A2,第一车位框的倾斜角对应的置信度A3,将该第一车位框信息A转换为第三车位点信息后,各个第三车位点的置信度是相同的,即等于(A1+A2+A3)/3。
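把(中心坐标、长宽、倾斜角)形式的车位框转换为一组车位点坐标,并按文中方式取三个置信度的平均值作为各点共享的第三置信度,可示意如下(接口与字段组织方式为假设):

```python
import math

def box_to_points(cx, cy, length, width, angle_deg, confs):
    """将车位框(中心坐标、长宽、倾斜角)转换为4个角点坐标,
    各角点共享同一置信度,即confs(中心/长宽/倾斜角的置信度)的平均值。"""
    a = math.radians(angle_deg)
    dx, dy = math.cos(a), math.sin(a)   # 长边方向单位向量
    px, py = -dy, dx                    # 宽边方向单位向量
    hl, hw = length / 2.0, width / 2.0
    corners = [(cx + sx * hl * dx + sy * hw * px,
                cy + sx * hl * dy + sy * hw * py)
               for sx, sy in ((1, 1), (1, -1), (-1, -1), (-1, 1))]
    conf = sum(confs) / len(confs)
    return corners, conf
```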
步骤503,车位检测装置遍历每个第三车位点信息,若确定在预设范围内存在一个第一车位点,则执行下述步骤504;若确定在预设范围内存在两个或两个以上的第一车位点,则执行下述步骤505。
此处,预设范围例如可以是15像素(px)左右。例如,1px=2厘米(cm),15px对应物理车位的30厘米(cm)左右。
步骤504,车位检测装置用第一车位点信息替换第三车位点信息。
此处,用第一车位点信息替换第三车位点信息可以理解为用第一车位点的坐标替换第三车位点的坐标,用第一车位点的坐标对应的第一置信度替换第三车位点对应的第三置信度。
步骤505,车位检测装置用与第三车位点最近的第一车位点信息替换第三车位点信息。
此处,对于预设范围内存在两个或两个以上的第一车位点,可用与该第三车位点最近的第一车位点信息替换该第三车位点信息。换言之,可用与该第三车位点最近的第一车位点的坐标替换该第三车位点的坐标,用与该第三车位点最近的第一车位点的坐标对应的第一置信度替换该第三车位点对应的第三置信度。
示例性地,第三车位点信息包括{第三车位点31的坐标,第三车位点32的坐标、第三车位点33的坐标和第三车位点34的坐标},第三车位点31对应第三置信度31,第三车位点32对应第三置信度32,第三车位点33对应第三置信度33,第三车位点34对应第三置信度34;若第三车位点31信息被第一车位点11信息替换,其余未被替换,则得到的第二车位点信息包括{第一车位点11的坐标及对应的第一置信度11,第三车位点32的坐标及对应的第三置信度32、第三车位点33的坐标及对应的第三置信度33和第三车位点34的坐标及对应的第三置信度34}。
基于上述步骤503至步骤505,通过用第一车位点信息校正基于第一车位框信息转换得到的第三车位点信息,有助于提高第二车位点信息的准确性。
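步骤503至步骤505的"就近替换"逻辑可示意如下(预设范围取文中示例的15px;数据结构为假设):

```python
import math

def refine_points(third_pts, first_pts, radius=15.0):
    """遍历每个第三车位点:若预设范围内存在第一车位点,则用最近的第一车位点的
    坐标与置信度替换之;否则保留原第三车位点,得到第二车位点信息(示意实现)。"""
    second = []
    for tp in third_pts:
        cands = [fp for fp in first_pts
                 if math.dist(tp["xy"], fp["xy"]) <= radius]
        if cands:
            nearest = min(cands, key=lambda fp: math.dist(tp["xy"], fp["xy"]))
            second.append({"xy": nearest["xy"], "conf": nearest["conf"]})
        else:
            second.append(dict(tp))
    return second
```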
在一种可能的实现方式中,车位检测装置可记录并存储上述步骤503至步骤505中被第一车位点信息替换的第三车位点信息,在后续确定第一车位点的坐标与第二车位点的坐标重合时,可以直接使用。
需要说明的是,上述图5所示的获取第二车位点信息的方法仅是一种可能的示例,也可以直接将第一车位框信息转换为第二车位点信息,例如,可将上述步骤502中的第三车位点信息用第二车位点信息替换,本申请对此不做限定。
在上述步骤304中,一种可能的实现方式是根据第一车位点信息和第二车位点信息确定车位的车位信息。基于此,需要先确定哪些第一车位点信息和哪些第二车位点信息属于同一个车位框。或者也可以理解为,需要先确定同一个车位框对应哪些第一车位点信息和哪些第二车位点信息。
下面示例性地示出了一种确定第一车位点与第二车位点属于同一车位框的方法流程,可参见图6。该方法包括以下步骤:
步骤601,车位检测装置可根据第一车位点信息确定第二车位框信息。
此处,获得第一车位点信息的过程可参见前述步骤302的介绍。
在一种可能的实现方式中,车位检测装置可根据第一车位点信息确定出至少一个第二车位框信息。具体地,可使用距离约束及车位角点类型(如L型、T型或I型等)对N个第一车位点进行两两配对。例如,车位检测装置可计算两个车位点在x维度和y维度的距离,将两个车位点之间的距离作为车位的长度Length。例如:当600cm<Length<800cm时,可以认为这两个车位点组成的是车位的长(即分割线);当250cm<Length<350cm时,可以认为这两个车位点组成的是车位的宽(即入口线或非入口线)。进一步,当确定出第二车位框的长宽后,还可确定出第二车位框的中心坐标。如果计算得到的车位的长度Length不符合上述两种几何特征,则认为是非常规车位,或者车位检测有误,直接丢弃。由于各个第一车位点的第一置信度是独立的,基于两个独立置信度的第一车位点信息确定出的第二车位框信息的准确性较高。再比如,在得到车位单个边的长度后,可结合车位类型,通过常规车位长宽对应表,由宽推测出车位的长,或者由长推断出车位的宽,再根据车位长宽和车位类型拟合出一个完整的第二车位框。
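上述按两点间距对车位点配对分类的规则可示意如下(距离区间取自文中示例,单位为厘米):

```python
import math

def classify_pair(p1, p2):
    """按两车位点间距分类:600~800cm视为车位的长(分割线),
    250~350cm视为车位的宽(入口线或非入口线),否则视为非常规车位并丢弃。"""
    d = math.dist(p1, p2)
    if 600 < d < 800:
        return "length"
    if 250 < d < 350:
        return "width"
    return None  # 非常规车位或检测有误
```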
步骤602,车位检测装置可根据第一车位框信息和第二车位框信息,匹配属于同一车位框的第一车位点信息和第二车位点信息。
其中,第二车位框信息包括第二车位框的中心坐标,第一车位框信息包括第一车位框的中心坐标。
在一种可能的实现方式中,车位检测装置可根据第二车位框的中心坐标及第一车位框的中心坐标进行匹配。示例性地,针对第i个第二车位框的中心坐标(x1i,y1i),逐一与第一车位框的中心坐标(x2j,y2j)匹配,将与第i个第二车位框的中心之间的距离(如√((x1i-x2j)²+(y1i-y2j)²))小于预设值的第一车位框确定为与第i个第二车位框属于同一个车位框。
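按中心坐标间距匹配第一车位框与第二车位框的过程可示意如下(距离阈值为假设的像素值):

```python
import math

def match_boxes(second_centers, first_centers, max_dist=50.0):
    """将中心距离小于预设值的第二车位框与第一车位框配对,
    认为二者属于同一个车位框,返回(第二框索引, 第一框索引)列表。"""
    matches = []
    for i, c2 in enumerate(second_centers):
        for j, c1 in enumerate(first_centers):
            if math.dist(c2, c1) < max_dist:
                matches.append((i, j))
    return matches
```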
通过上述步骤601和步骤602,可以确定出哪些第一车位点信息与哪些第二车位点信息属于同一车位框。
需要说明的是,上述图6所示的确定属于同一车位框的第一车位点信息和第二车位点信息的方法仅是一种可能的示例。例如,也可以将第一车位框信息转换为第二车位点信息,通过匹配第二车位点信息与第一车位点信息,例如对第一车位点的坐标和第二车位点的坐标逐一匹配,从而确定第一车位点信息与哪些第二车位点信息属于同一车位框,本申请对此不做限定。
当确定出哪些第一车位点信息和哪些第二车位点信息属于同一个车位框,针对每个车位框对应的第一车位点信息和第二车位点信息,可获得单个车位的车位信息,根据本申请提供的车位检测方法可以进一步获取多个车位对应的车位信息,例如,一个停车场的车位信息。
下面示例性地示出三种可能的基于n个第一车位点信息及m个第二车位点信息确定车位信息的方法。
在下文的介绍中,为了便于方案的说明,以属于同一车位框的n个第一车位点信息及m个第二车位点信息为例介绍。
方法1
请参阅图7,为本申请提供的一种基于n个第一车位点信息及m个第二车位点信息确定车位的车位信息的方法流程示意图。该方法包括以下步骤:
步骤701,车位检测装置获取属于同一车位框的n个第一车位点信息及m个第二车位点信息。
此处,第一车位点信息包括第一车位点的坐标,第二车位点信息包括第二车位点的坐标。
步骤702,车位检测装置确定n个第一车位点的坐标与m个第二车位点的坐标中存在k个车位点的坐标重合;若k≥3时,执行下述步骤703;若0<k≤2时,执行下述步骤704。
在一种可能的实现方式中,车位检测装置也可逐一对第一车位点的坐标和第二车位点的坐标进行匹配,以确定n个第一车位点的坐标与m个第二车位点的坐标中存在重合的车位点的坐标数量k。
需要说明的是,重合不是指绝对的完全重合,可以允许有一定的误差范围。例如,误差可以为20px。
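统计第一车位点与第二车位点中"重合"点数k(允许一定误差,误差取文中示例的20px)可示意如下:

```python
import math

def count_coincident(first_pts, second_pts, tol=20.0):
    """统计两组车位点坐标中在容差tol内重合的点数k,
    每个第二车位点至多被匹配一次(示意实现)。"""
    k, used = 0, set()
    for fp in first_pts:
        for idx, sp in enumerate(second_pts):
            if idx not in used and math.dist(fp, sp) <= tol:
                k += 1
                used.add(idx)
                break
    return k
```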
步骤703,车位检测装置根据m个第二车位点信息和/或n个第一车位点信息确定车位信息。
此处,车位检测装置可根据m个第二车位点信息确定车位信息;或者,可根据n个第一车位点信息确定车位信息;或者,可根据m个第二车位点信息和n个第一车位点信息确定车位信息。
步骤704,车位检测装置可根据n个第一车位点信息确定车位信息。
也可以理解为,车位检测装置可剔除m个第二车位点信息,根据n个第一车位点信息确定车位信息。此处,当k=2时,重合的两个车位点指组成入口线的两个车位点。
在一种可能的实现方式中,若k=0,可重新选择新的车位。
需要说明的是,若上述图7中的第二车位点信息是基于上述图5所示的方法得到的,在上述步骤702中,车位检测装置可通过获取存储的被第一车位点信息替换的第三车位点信息来确定k。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到四个(即k=4)第三车位点信息被第一车位点信息替换,则可以在上述图5的步骤505之后,直接根据图5所得到的第二车位点信息,确定车位信息,不需要执行步骤703和步骤704。例如,可将第二车位点的坐标作为车位的车位点的坐标;进一步,可基于该第二车位点信息确定车位的车位框、中心坐标、倾斜角等。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到三个(即k=3)第三车位点信息被第一车位点信息替换,则可基于上述步骤703确定车位信息。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到不超过两个(即k≤2)第三车位点信息被第一车位点信息替换,则可基于上述步骤704确定车位信息。
方法2
请参阅图8a,为本申请提供的一种基于n个第一车位点信息及m个第二车位点信息确定车位的车位信息的方法流程示意图。该方法包括以下步骤:
步骤801,车位检测装置获取属于同一车位框的n个第一车位点信息及m个第二车位点信息。
此处,第一车位点信息包括第一车位点的坐标及第一车位点的坐标对应的第一置信度;第二车位点信息包括第二车位点的坐标及第二车位点的坐标对应的第二置信度。
步骤802,车位检测装置确定n个第一车位点的坐标与m个第二车位点的坐标中是否存在至少三个车位点的坐标重合;若存在,执行步骤803;若不存在,执行步骤804。
也可以理解为,车位检测装置确定n个第一车位点的坐标与m个第二车位点的坐标中存在k个车位点的坐标重合;若k≥3时,执行下述步骤803;若k<3时,执行下述步骤804。
在一种可能的实现方式中,针对m个第二车位点的坐标中每个第二车位点的坐标,分别与n个第一车位点的坐标逐一匹配;或者针对n个第一车位点的坐标中的每个第一车位点,分别与m个第二车位点的坐标逐一匹配。需要说明的是,重合不是指绝对的完全重合,可以允许有一定的误差范围。例如,误差可以为20px。
步骤803,针对n个第一车位点的坐标中的每个第一车位点的坐标及对应的第二车位点的坐标,车位检测装置根据第一置信度和第二置信度,加权平均第一车位点的坐标和对应的第二车位点的坐标,得到加权平均后的车位点的坐标,根据加权平均后的车位点的坐标,确定所述车位信息。
在一种可能的实现方式中,针对n个第一车位点的坐标中的一个第一车位点的坐标,加权平均后的车位点的坐标=(第一车位点的坐标×第一车位点的坐标对应的第一置信度+第二车位点的坐标×对应的第二车位点的坐标对应的第二置信度)/2。
或者,针对m个第二车位点的坐标中每个第二车位点的坐标,加权平均后的车位点的坐标=(第二车位点的坐标×第二车位点的坐标对应的第二置信度+对应的第一车位点的坐标×第一车位点的坐标对应的第一置信度)/2。
示例性地,当n=m=4时,第一车位点信息包括第一车位点的坐标(x11,y11)及第一车位点的坐标(x11,y11)对应的第一置信度为A11,第一车位点的坐标(x12,y12)及第一车位点的坐标(x12,y12)对应的第一置信度为A12,第一车位点的坐标(x13,y13)及第一车位点的坐标(x13,y13)对应的第一置信度为A13,第一车位点的坐标(x14,y14)及第一车位点的坐标(x14,y14)对应的第一置信度为A14。第二车位点信息包括第二车位点的坐标(x21,y21)及第二车位点的坐标(x21,y21)对应的第二置信度为A21,第二车位点的坐标(x22,y22)及第二车位点的坐标(x22,y22)对应的第二置信度为A22,第二车位点的坐标(x23,y23)及第二车位点的坐标(x23,y23)对应的第二置信度为A23,第二车位点的坐标(x24,y24)及第二车位点的坐标(x24,y24)对应的第二置信度为A24;根据第一置信度 和第二置信度,加权平均4个第一车位点的坐标和4个第二车位点的坐标可得到加权平均后的车位点的坐标为:
加权平均后的第i个车位点的坐标为:(xi',yi')=((x1i×A1i+x2i×A2i)/2,(y1i×A1i+y2i×A2i)/2),其中i∈{1,2,3,4}。
进一步,可根据加权平均后的车位点的坐标,确定车位的信息。例如,车位信息包括车位点的坐标,加权平均后的车位点的坐标即可为确定出的车位信息中车位点的坐标。进一步,车位信息还包括车位框信息,即可根据加权平均后的车位点的坐标确定车位框的长宽、车位框的中心坐标及车位框的倾斜角等。
需要说明的是,用于表示车位的同一个物理车位点的第一车位点信息和第二车位点信息对应。例如用于表示左上角的物理车位点的第一车位点信息与第二车位点信息对应。也可以理解为,第一车位点的坐标(x11,y11)与第二车位点的坐标(x21,y21)对应同一个物理车位点,第一车位点的坐标(x12,y12)与第二车位点的坐标(x22,y22)对应同一个物理车位点,第一车位点的坐标(x13,y13)与第二车位点的坐标(x23,y23)对应同一个物理车位点,第一车位点的坐标(x14,y14)与第二车位点的坐标(x24,y24)对应同一个物理车位点。
示例性地,当m=4且n=3时,第一车位点信息包括第一车位点的坐标(x11,y11)及第一车位点的坐标(x11,y11)对应的第一置信度为A11,第一车位点的坐标(x12,y12)及第一车位点的坐标(x12,y12)对应的第一置信度为A12,第一车位点的坐标(x13,y13)及第一车位点的坐标(x13,y13)对应的第一置信度为A13。第二车位点信息包括第二车位点的坐标(x21,y21)及第二车位点的坐标(x21,y21)对应的第二置信度为A21,第二车位点的坐标(x22,y22)及第二车位点的坐标(x22,y22)对应的第二置信度为A22,第二车位点的坐标(x23,y23)及第二车位点的坐标(x23,y23)对应的第二置信度为A23,第二车位点的坐标(x24,y24)及第二车位点的坐标(x24,y24)对应的第二置信度为A24;根据第一置信度和第二置信度,加权平均3个第一车位点的坐标和对应的3个第二车位点的坐标可得到加权平均后的车位点的坐标为:
加权平均后的第i个车位点的坐标为:(xi',yi')=((x1i×A1i+x2i×A2i)/2,(y1i×A1i+y2i×A2i)/2),其中i∈{1,2,3}。
进一步,可根据加权平均后的车位点的坐标,确定车位的信息。例如,车位信息包括车位点的坐标,加权平均后的3个车位点的坐标即可为确定出的车位信息中的3个车位点的坐标。剩余一个车位点的坐标可以根据加权平均后的3个车位点的坐标推算出来;或者,也可以直接将第二车位点的坐标(x24,y24)作为车位信息中的第4个车位点的坐标。进一步,车位信息还包括车位框信息,即可根据加权平均后的车位点的坐标确定车位框的长宽、车位框的中心坐标及车位框的倾斜角等。
需要说明的是,若同一物理车位点有两个加权平均后的车位点的坐标,则可进一步对加权平均后的车位点的坐标进一步进行加权平均。
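按文中给出的公式对对应车位点坐标做加权平均可示意如下。需要注意,该公式固定除以2而非除以两置信度之和,在两置信度之和不为2时并非归一化的加权平均,此处严格按原文实现:

```python
def fuse_points(first_pts, second_pts):
    """按文中公式融合对应车位点:坐标 = (第一车位点坐标×第一置信度 +
    第二车位点坐标×第二置信度) / 2。每个点表示为(x, y, 置信度)。"""
    fused = []
    for (x1, y1, a1), (x2, y2, a2) in zip(first_pts, second_pts):
        fused.append(((x1 * a1 + x2 * a2) / 2, (y1 * a1 + y2 * a2) / 2))
    return fused
```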
步骤804,车位检测装置确定n个第一车位点的坐标与m个第二车位点的坐标中是否存在两个车位点的坐标重合;若存在,执行下述步骤805;若不存在,说明第一车位点信息或第一车位框信息中存在至少一个的准确性较差,可重新寻找新的车位。
也可以理解为,车位检测装置确定k是否等于2。若是,执行下述步骤805;若否,说明第一车位点信息或第一车位框信息中存在至少一个的准确性较差,此处,可以重新寻找新的车位或者也可以执行下述图8b的方法。
在一种可能的实现方式中,重合的两个车位点指组成入口线的两个车位点。
步骤805,车位检测装置可根据n个第一车位点信息确定车位的车位信息。
在一种可能的实现方式中,车位检测装置可剔除m个第二车位点信息,将第一车位点信息确定为车位的车位信息。
通过上述步骤801至步骤805可以看出,当第一车位点和第二车位点重合的车位点较多(如三个或四个)时,则可基于第一置信度和第二置信度融合第一车位点信息和第二车位点信息,从而可提高车位信息的准确性。当第一车位点和第二车位点的两个车位点重合时,可基于精度较高的第一车位点信息确定车位信息,从而也可提高车位信息的准确性。
需要说明的是,若上述图8a中的第二车位点信息是基于上述图5所示的方法得到的,在上述步骤802中,车位检测装置可通过获取存储的被第一车位点信息替换的第三车位点信息来确定k。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到四个(即k=4)第三车位点信息被第一车位点信息替换,则可以在上述图5的步骤505之后,直接根据图5所得到的第二车位点信息,确定车位信息,不需要执行步骤803至步骤805。例如,可将第二车位点的坐标作为车位的车位点的坐标;进一步,可基于该第二车位点信息确定车位的车位框、中心坐标、倾斜角等。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到三个(即k=3)第三车位点信息被第一车位点信息替换,则可基于上述步骤803确定车位信息。当车位检测装置从存储的被第一车位点信息替换的第三车位点信息中查询到两个(即k=2)第三车位点信息被第一车位点信息替换,则可基于上述步骤805确定车位信息。
方法3
若车位检测装置确定n个第一车位点的坐标与m个第二车位点的坐标中重合的车位点少于两个(即只有一个车位点的坐标重合或者没有车位点的坐标重合),可参见图8b所示的方法确定该车位的车位信息。
请参阅图8b,为本申请提供的另一种基于第一车位点信息与第一车位框信息确定车位的车位信息的方法流程示意图。该方法包括以下步骤:
步骤811,若n个第一车位点的坐标与m个第二车位点的坐标存在一个重合或没有车位点重合,车位检测装置确定n个第一置信度的第一平均值、及m个第二置信度的第二平均值。
也可以理解为,若k<2,车位检测装置确定n个第一置信度的第一平均值、及m个第二置信度的第二平均值。
示例性地,n个第一置信度分别为第一置信度为A11、第一置信度为A12、第一置信度为A13和第一置信度为A14,m个第二置信度分别为第二置信度为A21、第二置信度为A22、第二置信度为A23和第二置信度为A24,
第一平均值=(A11+A12+A13+A14)/4,第二平均值=(A21+A22+A23+A24)/4。
步骤812,车位检测装置确定第一平均值是否大于或等于阈值;若大于或等于,执行步骤813,若小于,执行步骤814。
此处,阈值例如可以是0.7。
步骤813,车位检测装置可根据n个第一车位点信息确定车位信息。
此处,车位检测装置剔除m个第二车位信息,可将第一车位点信息确定为车位的车位信息。
步骤814,车位检测装置确定第二平均值是否大于或等于阈值;若大于或等于,执行步骤815;若小于,说明第一车位点信息与第二车位点信息均不精确,可重新寻找新的车位。
此处,阈值例如可以是0.7。
步骤815,车位检测装置可根据m个第二车位点信息确定车位的车位信息。
此处,车位检测装置可剔除n个第一车位点信息,将第二车位点信息确定为车位的车位信息。
通过上述步骤811至步骤815可以看出,当第一车位点和第二车位点重合的车位点较少时,可基于置信度较高的车位点确定车位信息,将置信度低的预测结果过滤掉,从而可提高车位信息的准确性。当第一车位点置信度较低时,过滤掉第一车位点信息,可使用第一车位框信息确定车位信息,从而可在车位点失效场景下,提高车位检测的召回率;当第一车位框的置信度较低时,过滤掉第一车位框信息,可使用第一车位点信息确定车位信息,从而可在车位框失效场景下,也提高车位检测的召回率。
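步骤811至步骤815的置信度均值筛选逻辑可示意如下(阈值0.7取自文中示例;每个车位点表示为(x, y, 置信度)为假设的数据组织方式):

```python
def select_by_confidence(first_pts, second_pts, thresh=0.7):
    """当重合车位点少于两个时:第一置信度均值达到阈值则采用第一车位点信息;
    否则若第二置信度均值达到阈值则采用第二车位点信息;都不满足则放弃该车位。"""
    avg1 = sum(p[2] for p in first_pts) / len(first_pts)
    if avg1 >= thresh:
        return "first", first_pts
    avg2 = sum(p[2] for p in second_pts) / len(second_pts)
    if avg2 >= thresh:
        return "second", second_pts
    return "none", None
```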
基于上述内容,如图9所示,为本申请提供的另一种车位检测的方法流程示意图。该方法中以网络结构为例,该网络结构包括特征提取模型、车位点解析模型、车位框过渡模型及车位框解析模型,其中,特征提取模型、车位点解析模型、车位框解析模型及车位框过渡模型可以集成在一个功能模块,或者也可以集成于不同的功能模块,本申请对此不做限定。该网络结构的总损失函数为车位点解析模型及车位框解析模型的损失函数的线性叠加。其中,网络结构例如可以是Resnet-50、或者视觉几何组网络(Visual Geometry Group Network 16,VGG-16),本申请对此不做限定。
该方法包括以下步骤:
步骤901,车位检测装置获取车位的全景图像。
该步骤可参见上述图4a的相关介绍,此处不再赘述。
步骤902,车位检测装置提取车位的全景图像的特征信息。
其中,特征信息可以表征图像的信息,例如是点、线和/或圈等。全景图像的特征信息可以表征该全景图像的信息。关于步骤902的可能的实现方式可参见上述步骤301的介绍,此处不再赘述。
步骤903,车位检测装置将车位的全景图像的特征信息输入车位点解析模型,识别车位点信息,得到第一车位点信息;并将车位的全景图像的特征信息输入车位框过渡模型,识别车位框过渡信息,得到车位框过渡信息。
此处,车位点解析模型和车位框过渡模型是平级模型。而且,车位点解析模型和车位框过渡模型的输入是相同的全景图像的特征信息,也即获取到的全景图像的特征信息既输入给车位点解析模型,又输入给车位框过渡模型。因此,上述步骤902中提取车位的全景图像的特征信息可以采用较为泛化的模型,在相同检测准确性下,较为泛化的模型可减少对原始数据量的需求。
该步骤903中获取第一车位点信息的可能方式可参见前述步骤302中的相关介绍,此处不再赘述。其中,第一车位点信息包括但不限于第一车位点的坐标、第一车位点的坐标对应的第一置信度。
该步骤903中将车位的全景图像的特征信息输入车位框过渡模型,车位框过渡模型可输出车位框过渡信息,车位框过渡信息例如可以为线条信息、或车位框内其它信息等。具体可参见前述相关描述,此处不再赘述。
步骤904,车位检测装置将车位框过渡信息和第一车位点信息输入车位框解析模型,识别车位框信息,得到第一车位框信息。
其中,第一车位框信息包括但不限于第一车位框的中心坐标、第一车位框的中心坐标对应的置信度、第一车位框的尺寸(例如长宽)、第一车位框的尺寸对应的置信度、第一车位框的倾斜角及第一车位框的倾斜角对应的置信度等。
该步骤904的过程可参见上述步骤303的介绍,此处不再赘述。第一车位点信息作为确定第一车位框信息的部分信息,可降低车位框解析模型的训练难度,且有助于提高第一车位框信息的准确性。
需要说明的是,该步骤904中输入车位框解析模型的第一车位点信息可以是用于表征第一车位点信息的一些中间特征。
步骤905,车位检测装置根据第一车位点信息和第一车位框信息,确定车位的车位信息。
此处,车位检测装置可根据第一车位框信息,获取第二车位点信息。在一种可能的实现方式中,可以是基于上述图5的方式。在另一种可能的实现方式中,也可以是直接将第一车位框信息转换为第二车位点信息。
进一步,可选地,车位检测装置可根据第一车位点信息和第二车位点信息,确定车位的车位信息。该过程可参见前述图6、图7、图8a和图8b的介绍,此处不再赘述。
需要说明的是,上述步骤中获取第一车位点信息和获取第一车位框信息可以认为是两个并行的任务,而且,这两个并行任务是基于提取的相同的车位图像的特征信息进行的,如此可通过较为泛化的模型来提取车位全景图像的特征信息。而且,将获取第一车位点信息这一中间任务的结果作为获取第一车位框信息任务的输入,从而可提高获取的第一车位框信息的准确性。进一步,通过第一车位点信息可以获得车位的较精确的局部信息,通过第一车位框信息可以获得车位的较精确的全局信息,融合第一车位点信息和基于第一车位框信息得到的第二车位点信息,可以得到局部和全局均较精确的车位信息。
基于上述内容,获得车位的车位信息后,还可向规划控制装置(如图2b中的控制系统230)发送该车位信息,以便于规划控制装置控制车辆的移动。规划控制装置可集成于车辆中,或者也可以集成于上述所示的云服务器中,或者也可以是独立的设备等,本申请对此不做限定。
在一种可能的实现方式中,规划控制装置可根据车位的车位信息进行泊车路径规划,得到泊车路径,并控制车辆按照泊车路径进行泊车。
如图10所示,为本申请提供的另一种车位检测方法的流程示意图。该方法可适用于上述图2a中的车位管理服务器102,或者适用于上述图2b中的车辆。车位管理服务器与车辆中执行该方法的装置可以统称为车位检测装置。换言之,车位检测装置可以是上述图2a的车位管理服务器102,或者也可以是上述图2b的车辆。该方法包括以下步骤:
步骤1001,车位检测装置获取车位的图像信息。
该步骤1001可参见前述步骤301的介绍,此处不再赘述。
步骤1002,车位检测装置根据车位的图像信息获取第一车位点信息。
该步骤1002可参见前述步骤302的介绍,此处不再赘述。
步骤1003,车位检测装置根据车位的图像信息和第一车位点信息确定第一车位框信息。
在一种可能的实现方式中,该步骤可参见前述步骤303中实现方式二的介绍,此处不再赘述。
步骤1004,车位检测装置可根据第一车位点信息和/或第一车位框信息,确定车位的车位信息。
该步骤1004可以理解为,车位检测装置可根据第一车位点信息确定车位的车位信息;或,根据第一车位框信息确定车位信息;或根据第一车位点信息和第一车位框信息确定车位信息。具体可能的实现方式可分别参见前述相关描述,此处不再赘述。
通过上述步骤1001至步骤1004可以看出,通过结合第一车位点信息来确定第一车位框信息,可提高第一车位框信息的准确性,从而可提高车位信息的准确性。而且,通过第一车位点信息和第一车位框信息确定车位的车位信息时,当第一车位点信息失效(如车位点被遮挡或涂抹)或者第一车位框信息失效(如车位框未被拍摄完整等)时,仍可确定出车位信息。换言之,当某一特征信息失效时,基于该方案,仍可识别出车位的车位信息,从而可提高车位检测结果的召回率。基于第一车位点信息和第一车位框信息确定车位信息,可理解为是基于两种不同的特征信息确定车位信息,从而有助于提高车位信息的准确性。
可以理解的是,为了实现上述实施例中功能,检测装置包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的模块及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用场景和设计约束条件。
基于上述内容和相同构思,图11和图12为本申请提供的可能的检测装置的结构示意图。这些检测装置可以用于实现上述方法实施例中的功能,因此也能实现上述方法实施例所具备的有益效果。在本申请中,该检测装置可以是图2a中的车位管理服务器102或上述图2b中的车辆,还可以是应用于云服务器的模块(如芯片)或应用于车辆中的模块(如芯片)。
如图11所示,该检测装置1100包括获取模块1101和处理模块1102。检测装置1100用于实现上述图3至图10任一所示的方法实施例中的功能。
当检测装置1100用于实现图3所示的方法实施例的功能时:获取模块1101用于获取车位的图像信息;处理模块1102用于根据车位的图像信息确定第一车位点信息;根据车位的图像信息确定第一车位框信息;根据第一车位点信息和第一车位框信息,确定车位的车位信息。
有关上述获取模块1101和处理模块1102更详细的描述可以参考图3所示的方法实施例中相关描述直接得到,此处不再一一赘述。
应理解,本申请实施例中的获取模块1101和处理模块1102可以由处理器或处理器相关电路组件实现。
基于上述内容和相同构思,如图12所示,本申请还提供一种检测装置1200。该检测装置1200可包括处理器1201。可选地,检测装置1200还可包括存储器1202,用于存储处理器1201执行的指令或存储处理器1201运行指令所需要的输入数据或存储处理器1201运行指令后产生的数据。
当检测装置1200用于实现图3所示的方法时,处理器1201用于执行上述获取模块1101及处理模块1102的功能。
可以理解的是,本申请的实施例中的处理器可以是中央处理单元(central processing unit,CPU),还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件、晶体管逻辑器件,硬件部件或者其任意组合。通用处理器可以是微处理器,也可以是任何常规的处理器。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于车位检测装置中。当然,处理器和存储介质也可以作为分立组件存在于车位检测装置。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行计算机程序或指令时,全部或部分地执行本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、车位检测装置、用户设备或者其它可编程装置。计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
本申请中,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。在本申请的文字描述中,字符“/”,一般表示前后关联对象是一种“或”的关系;在本申请的公式中,字符“/”,表示前后关联对象是一种“相除”的关系。另外,在本申请中,“示例的”一词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。或者可理解为,使用示例的一词旨在以具体方式呈现概念,并不对本申请构成限定。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定。术语“第一”、“第二”等类似表述,是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或模块。方法、***、产品或设备不必限于清楚地列出的那些步骤或模块,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或模块。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的保护范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (27)

  1. 一种车位检测方法,其特征在于,包括:
    获取车位的图像信息;
    根据所述车位的图像信息确定第一车位点信息;
    根据所述车位的图像信息确定第一车位框信息;
    根据所述第一车位点信息和所述第一车位框信息,确定所述车位的车位信息。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述第一车位点信息和所述第一车位框信息,确定所述车位的车位信息,包括:
    根据所述第一车位点信息和所述第一车位框信息,确定属于同一车位框的n个第一车位点信息及m个第二车位点信息,所述n和m均为正整数;
    根据所述n个第一车位点信息及所述m个第二车位点信息,确定所述车位信息。
  3. 如权利要求2所述的方法,其特征在于,所述第一车位点信息包括第一车位点坐标;和/或,
    所述第二车位点信息包括第二车位点坐标。
  4. 如权利要求3所述的方法,其特征在于,所述根据所述n个第一车位点信息及所述m个第二车位点信息,确定所述车位信息,包括:
    确定所述n个第一车位点的坐标与所述m个第二车位点的坐标中存在k个车位点的坐标重合;
    当k≥3时,根据所述m个所述第二车位点信息和/或所述n个第一车位点信息确定所述车位信息;
    当k≤2时,根据所述n个第一车位点信息确定所述车位信息。
  5. 如权利要求3所述的方法,其特征在于,所述第一车位点信息还包括所述第一车位点的坐标对应的第一置信度;和/或,
    所述第二车位点信息还包括所述第二车位点的坐标对应的第二置信度。
  6. 如权利要求5所述的方法,其特征在于,所述根据所述n个第一车位点信息及所述m个第二车位点信息,确定所述车位信息,包括:
    确定所述n个第一车位点的坐标与所述m个所述第二车位点的坐标中存在k个车位点的坐标重合;
    当k≥3时,针对所述n个第一车位点的坐标中的每个第一车位点的坐标及对应的第二车位点的坐标,根据所述第一置信度和所述第二置信度,加权平均所述第一车位点的坐标和所述对应的第二车位点的坐标,得到加权平均后的车位点的坐标,根据加权平均后的车位点的坐标,确定所述车位信息;或者,
    当k≤2时,根据所述n个第一车位点信息确定所述车位信息;或者,
    当k<2且所述n个第一置信度的第一平均值大于阈值,根据所述n个第一车位点信息确定所述车位信息;或者,
    当k<2且所述m个第二置信度的第二平均值大于阈值,根据所述m个第二车位点信息确定所述车位信息。
  7. 如权利要求6所述的方法,其特征在于,所述根据所述第一置信度和所述第二置信度,加权平均所述第一车位点的坐标和所述对应的第二车位点的坐标,得到加权平均后的车位点的坐标,包括:
    确定所述加权平均后的车位点的坐标=(所述第一车位点的坐标×所述第一车位点的坐标对应的第一置信度+所述对应的第二车位点的坐标×所述对应的第二车位点的坐标对应的第二置信度)/2。
  8. 如权利要求1至7任一项所述的方法,其特征在于,所述根据所述车位的图像信息确定第一车位框信息,包括:
    根据所述车位的图像信息确定所述第一车位框的过渡信息;
    根据所述过渡信息及所述第一车位点信息,确定所述第一车位框信息。
  9. 如权利要求2至8任一项所述的方法,其特征在于,所述方法还包括:
    根据所述第一车位框信息,确定第三车位点信息;
    若确定在所述第三车位点信息的预设范围内存在所述第一车位点信息,用所述第一车位点信息替换所述第三车位点信息,得到所述第二车位点信息。
  10. 如权利要求1至9任一项所述的方法,其特征在于,所述车位信息为车位框信息。
  11. 如权利要求2至10任一项所述的方法,其特征在于,所述第一车位框信息包括第一车位框的中心坐标;
    所述根据所述第一车位点信息和所述第一车位框信息,确定属于同一车位框的n个第一车位点信息及m个第二车位点信息,包括:
    根据所述第一车位点信息,确定第二车位框信息,所述第二车位框信息包括第二车位框的中心坐标;
    根据所述第一车位框的中心坐标及所述第二车位框的中心坐标,确定属于同一车位框的所述n个第一车位点信息及所述m个第二车位点信息。
  12. 如权利要求1至11任一项所述的方法,其特征在于,所述车位的图像信息包括所述车位的全景图像的图像信息。
  13. 一种检测装置,其特征在于,包括获取模块和处理模块:
    所述获取模块,用于获取车位的图像信息;
    所述处理模块,用于根据所述车位的图像信息确定第一车位点信息;根据所述车位的图像信息确定第一车位框信息;根据所述第一车位点信息和所述第一车位框信息,确定所述车位的车位信息。
  14. 如权利要求13所述的装置,其特征在于,所述处理模块,具体用于:
    根据所述第一车位点信息和所述第一车位框信息,确定属于同一车位框的n个第一车位点信息及m个第二车位点信息,所述n和m均为正整数;
    根据所述n个第一车位点信息及所述m个第二车位点信息,确定车位信息。
  15. 如权利要求14所述的装置,其特征在于,所述第一车位点信息包括第一车位点的坐标;
    所述第二车位点信息包括第二车位点的坐标。
  16. 如权利要求15所述的装置,其特征在于,所述处理模块,具体用于:
    确定所述n个第一车位点的坐标与所述m个第二车位点的坐标中存在k个车位点的坐标重合;
    当k≥3时,根据所述m个所述第二车位点信息和/或所述n个第一车位点信息确定所述车位信息;
    当k≤2时,根据所述n个第一车位点信息确定所述车位信息。
  17. 如权利要求15所述的装置,其特征在于,所述第一车位点信息还包括所述第一车位点的坐标对应的第一置信度;和/或,
    所述第二车位点信息还包括所述第二车位点的坐标对应的第二置信度。
  18. 如权利要求17所述的装置,其特征在于,所述处理模块,具体用于:
    确定所述n个第一车位点的坐标与所述m个所述第二车位点的坐标中存在k个车位点的坐标重合;
    当k≥3时,针对所述n个第一车位点的坐标中的每个第一车位点的坐标及对应的第二车位点的坐标,根据所述第一置信度和所述第二置信度,加权平均所述第一车位点的坐标和所述对应的第二车位点的坐标,得到加权平均后的车位点的坐标,根据加权平均后的车位点的坐标,确定所述车位信息;或者,
    当k=2时,根据所述n个第一车位点信息确定所述车位信息;或者,
    当k<2且所述n个第一置信度的第一平均值大于阈值,根据所述n个第一车位点信息确定所述车位信息;或者,
    当k<2且所述m个第二置信度的第二平均值大于阈值,根据所述m个第二车位点信息确定所述车位信息。
  19. 如权利要求18所述的装置,其特征在于,所述处理模块,具体用于:
    确定所述加权平均后的车位点的坐标=(所述第一车位点的坐标×所述第一车位点的坐标对应的第一置信度+所述对应的第二车位点的坐标×所述对应的第二车位点的坐标对应的第二置信度)/2。
  20. 如权利要求13至19任一项所述的装置,其特征在于,所述处理模块,具体用于:
    根据所述车位的图像信息确定所述第一车位框的过渡信息;
    根据所述过渡信息,确定所述第一车位框信息。
  21. 如权利要求14至20任一项所述的装置,其特征在于,所述处理模块,还用于:
    根据所述第一车位框信息,确定第三车位点信息;
    若确定在所述第三车位点信息的预设范围内存在所述第一车位点信息,用所述第一车位点信息替换所述第三车位点信息,得到所述第二车位点信息。
  22. 如权利要求14至21任一项所述的装置,其特征在于,所述第一车位框信息包括第一车位框的中心坐标;
    所述处理模块,具体用于:
    根据所述第一车位点信息,确定第二车位框信息,所述第二车位框信息包括第二车位框的中心坐标;
    根据所述第二车位框的中心坐标及所述第一车位框的中心坐标,确定属于同一车位框的所述n个第一车位点信息及所述m个第二车位点信息。
  23. 如权利要求13至22任一项所述的装置,其特征在于,所述车位信息为车位框信息。
  24. 如权利要求13至23任一项所述的装置,其特征在于,所述车位的图像信息包括所述车位的全景图像的图像信息。
  25. 一种检测装置,其特征在于,包括处理器,所述处理器与存储器相连,所述存储器用于存储计算机程序,所述处理器用于执行所述存储器中存储的计算机程序,使得所述装置执行如权利要求1至12中任一项所述的方法。
  26. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序或指令,当所述计算机程序或指令被检测装置执行时,使得所述检测装置执行如权利要求1至12中任一项所述的方法。
  27. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序或指令,当所述计算机程序或指令被检测装置执行时,使得所述检测装置执行如权利要求1至12中任一项所述的方法。
PCT/CN2021/101613 2021-06-22 2021-06-22 一种车位检测方法及装置 WO2022266854A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101613 WO2022266854A1 (zh) 2021-06-22 2021-06-22 一种车位检测方法及装置


Publications (1)

Publication Number Publication Date
WO2022266854A1 true WO2022266854A1 (zh) 2022-12-29

Family

ID=84543858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101613 WO2022266854A1 (zh) 2021-06-22 2021-06-22 一种车位检测方法及装置

Country Status (1)

Country Link
WO (1) WO2022266854A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117012053A (zh) * 2023-09-28 2023-11-07 东风悦享科技有限公司 一种车位检测点的后优化方法、***及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685000A (zh) * 2018-12-21 2019-04-26 广州小鹏汽车科技有限公司 一种基于视觉的车位检测方法及装置
CN110969655A (zh) * 2019-10-24 2020-04-07 百度在线网络技术(北京)有限公司 用于检测车位的方法、装置、设备、存储介质以及车辆
CN111191485A (zh) * 2018-11-14 2020-05-22 广州汽车集团股份有限公司 一种车位检测方法及其***、汽车
CN111881874A (zh) * 2020-08-05 2020-11-03 北京四维智联科技有限公司 车位识别方法、设备及***
CN112509354A (zh) * 2020-12-08 2021-03-16 广州小鹏自动驾驶科技有限公司 一种车位检测方法、装置、车辆、可读介质




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946356

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21946356

Country of ref document: EP

Kind code of ref document: A1