WO2022266854A1 - Parking space detection method and device - Google Patents

Parking space detection method and device

Info

Publication number
WO2022266854A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
information
coordinates
space point
parking
Prior art date
Application number
PCT/CN2021/101613
Other languages
English (en)
Chinese (zh)
Inventor
白宇材
任建乐
张强
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to PCT/CN2021/101613
Publication of WO2022266854A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/09 Arrangements for giving variable traffic instructions
    • G08G1/0962 Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 Systems involving transmission of highway information, e.g. weather, speed limits
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas

Definitions

  • the present application relates to the technical field of intelligent driving, in particular to a parking space detection method and device.
  • Automatic parking refers to automatically controlling the vehicle to park in a parking space without manual control, thereby reducing the user's parking operations.
  • parking space detection is required.
  • the parking space detection includes searching for a parking space within the sensing range (that is, the coverage area of the sensor), and continuously providing the distance information of a specific parking space during the parking process.
  • the parking space detection scheme mainly determines the four parking space points (also called corner points) of the parking space.
  • the present application provides a parking space detection method and device, which are used to improve the accuracy of the detected parking space information as much as possible.
  • the present application provides a parking space detection method, which may include acquiring image information of the parking space, determining first parking space point information according to the image information of the parking space, determining first parking space frame information according to the image information of the parking space, and then determining the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the method can be executed by a processor on the ego vehicle, or by a cloud server or the like.
  • the parking space information is determined by the first parking space point information and the first parking space frame information, which allows the two different types of feature information to complement each other, so that more accurate parking space information can be obtained from local to global.
  • the image information of the acquired parking space enters two branches, one of which is used to determine the first parking space point information, and the other is used to determine the first parking space frame information.
  • a more generalized network structure (or network model) can be adopted, which can reduce the amount of raw data required for the same detection accuracy.
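The two-branch structure described above can be sketched as follows. This is an illustrative sketch only: the function names, branch callables, and fusion step are assumptions for illustration, not the patented network.

```python
# Illustrative sketch (assumed names, not the patented implementation): the
# parking space image information feeds two parallel branches, one predicting
# parking space points and one predicting parking space frames, and the two
# outputs are then fused into the final parking space information.

def detect_parking_space(image_info, point_branch, frame_branch, fuse):
    """Run both branches on the shared image information and fuse the results."""
    first_point_info = point_branch(image_info)   # local features: corner points
    first_frame_info = frame_branch(image_info)   # global features: space frames
    return fuse(first_point_info, first_frame_info)
```

A caller would supply the two branch networks and a fusion rule; the sketch only fixes the data flow, not the models themselves.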
  • the parking space information may be parking space frame information.
  • the parking space information of the parking space includes, but is not limited to, the coordinates of the parking space points, the center coordinates of the parking space frame, the size (such as length and width) of the parking space frame, the inclination angle of the parking space frame, and the like.
  • n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame can be determined according to the first parking space point information and the first parking space frame information, where both n and m are positive integers; the parking space information can then be determined according to the n pieces of first parking space point information and the m pieces of second parking space point information.
  • the parking space information of a single parking space can be obtained based on one parking space frame, and the parking space information corresponding to multiple parking spaces can be further obtained according to the parking space detection method provided in the present application, for example, the parking space information of a parking lot.
  • the first parking space point information includes the coordinates of the first parking space point
  • the second parking space point information includes the coordinates of the second parking space point
  • the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, the parking space information is determined according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k ≤ 2, the parking space information is determined according to the n pieces of first parking space point information.
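The coincidence rule above can be sketched in code. The coordinate tolerance and the list-of-tuples layout are assumptions for illustration, not part of the disclosed method.

```python
# Sketch of the coincidence rule: count how many of the n first parking space
# point coordinates coincide (within an assumed tolerance) with the m second
# parking space point coordinates, then pick the information source.

def count_coincident(first_points, second_points, tol=1e-6):
    """Number of first points coinciding with some second point."""
    return sum(
        1 for (x1, y1) in first_points
        if any(abs(x1 - x2) <= tol and abs(y1 - y2) <= tol
               for (x2, y2) in second_points)
    )

def choose_source(first_points, second_points):
    k = count_coincident(first_points, second_points)
    if k >= 3:
        return "second and/or first point info"  # k >= 3: frames agree with points
    return "first point info only"               # k <= 2: fall back to point branch
```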
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points. Based on this, four situations of determining the parking space information are shown below by way of example.
  • the parking space information is obtained by weighting the coordinates of the first parking space point and the corresponding second parking space point coordinates, which helps to improve the accuracy of the parking space information.
  • the coordinates of the weighted parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
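The weighted average above can be written as a small helper. It reproduces the stated (coordinate × confidence + coordinate × confidence) / 2 rule; representing points as tuples is an illustrative assumption.

```python
# Sketch of the confidence-weighted averaging of two coinciding parking space
# point estimates, per coordinate, following the formula stated above.

def weighted_point(p1, conf1, p2, conf2):
    """(first coord x first confidence + second coord x second confidence) / 2."""
    return tuple((a * conf1 + b * conf2) / 2 for a, b in zip(p1, p2))
```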
  • determining the parking space information by using the first parking space point information with higher accuracy helps to improve the accuracy of the parking space information.
  • Case 3: if k ≤ 2, it is determined that at most two of the coordinates of the n first parking space points coincide with the coordinates of the m second parking space points, or that no parking space point coordinates coincide.
  • if the mean value of the first confidence levels is greater than the threshold, the parking space information is determined according to the n pieces of first parking space point information.
  • the parking space information of the parking space can be determined based on the parking space point information with the higher confidence level, thereby improving the accuracy of the parking space information.
  • the parking space frame transition information may be determined according to the image information of the parking space
  • the first parking space frame information may be determined according to the parking space frame transition information and the first parking space point information.
  • the first parking space frame information is determined by combining the obtained first parking space point information, that is, the first parking space point information is used as part of the information for determining the first parking space frame information, thereby improving the accuracy of the first parking space frame information.
  • the third parking space point information can be determined according to the first parking space frame information; if it is determined that first parking space point information exists within the preset range of the third parking space point information, the third parking space point information is replaced with that first parking space point information to obtain the second parking space point information.
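This replacement step can be sketched as follows. The value of the preset range and the tuple representation of points are illustrative assumptions.

```python
# Sketch: for each third parking space point derived from the frame, replace
# it with a detected first parking space point that falls within the preset
# range; otherwise keep it. The result is the second parking space point set.

def refine_points(third_points, first_points, preset_range=0.5):
    second_points = []
    for tx, ty in third_points:
        match = next(
            ((fx, fy) for fx, fy in first_points
             if abs(fx - tx) <= preset_range and abs(fy - ty) <= preset_range),
            None,
        )
        second_points.append(match if match is not None else (tx, ty))
    return second_points
```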
  • the first parking space frame information includes the center coordinates of the first parking space frame; the second parking space frame information is determined according to the first parking space point information, where the second parking space frame information includes the center coordinates of the second parking space frame; and the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame are determined according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame.
  • the first parking space point information and the second parking space point information belonging to the same parking space are determined according to the distance between the center of the second parking space frame and the center of the first parking space frame.
  • the determination process is relatively simple and the accuracy is high.
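The center-distance grouping described above can be sketched as a simple predicate. The distance threshold is an illustrative assumption; the disclosure only specifies that the frame centers are compared.

```python
import math

# Sketch of the grouping step: a second parking space frame (built from the
# first parking space points) and a first parking space frame are treated as
# the same parking space when their centers are close enough.

def same_frame(center_a, center_b, threshold=1.0):
    """True if the two frame centers likely describe the same parking space."""
    return math.dist(center_a, center_b) <= threshold
```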
  • the image information of the parking space may be image information of a panoramic image of the parking space.
  • the image information described in this application may be information contained in images or videos.
  • the generalized model can be used to extract the image information of the panoramic image, so that the demand for the amount of original data can be reduced under the same detection accuracy.
  • the present application provides a parking space detection method, the method comprising: acquiring image information of the parking space; determining first parking space point information according to the image information of the parking space; determining first parking space frame information according to the image information of the parking space and the first parking space point information; and determining the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the accuracy of the first parking space frame information can be improved, and thus the accuracy of the parking space information can be improved.
  • the parking space information of the parking space is determined by the first parking space point information and the first parking space frame information, so that even if one type of feature information is missing or unreliable (for example, a parking space point is occluded or the parking space frame is not completely captured), the parking space information can still be determined. In other words, when one type of characteristic information fails, the parking space information of the parking space can still be identified based on this scheme, thereby improving the recall rate of the parking space detection results.
  • the parking space information may be parking space frame information.
  • the present application provides a detection device, which can be used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may be a vehicle or a cloud server, or a module used in the cloud server or the vehicle, such as a chip or a chip system or a circuit.
  • the detection device may include: a processor.
  • the processor may be configured to support the detection device to perform the corresponding functions shown in the first aspect above.
  • the detection device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection device.
  • the processor is configured to acquire the image information of the parking space, determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information may be parking space frame information.
  • the processor is specifically configured to: determine n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame according to the first parking space point information and the first parking space frame information, Both n and m are positive integers; the parking space information is determined according to n first parking space point information and m second parking space point information.
  • the first parking space point information includes the coordinates of the first parking space point; the second parking space point information includes the coordinates of the second parking space point.
  • the processor is specifically configured to: determine that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, the parking space information can be determined according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k ≤ 2, the parking space information is determined according to the n pieces of first parking space point information.
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the weighted parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
  • the processor is specifically configured to: determine the parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
  • the processor is further configured to: determine the third parking space point information according to the first parking space frame information; and if it is determined that first parking space point information exists within the preset range of the third parking space point information, replace the third parking space point information with the first parking space point information to obtain the second parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame; the processor is specifically configured to: determine the second parking space frame information according to the first parking space point information, where the second parking space frame information includes the center coordinates of the second parking space frame; and determine, according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
  • the image information of the parking space includes image information of a panoramic image of the parking space.
  • the present application provides a detection device, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may include an acquisition module and a processing module, where the acquisition module is configured to acquire the image information of the parking space, and the processing module is configured to determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information may be parking space frame information.
  • the processing module is specifically configured to: determine n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame according to the first parking space point information and the first parking space frame information, n and m are positive integers; the parking space information is determined according to n first parking space point information and m second parking space point information.
  • the first parking space point information includes the coordinates of the first parking space point; the second parking space point information includes the coordinates of the second parking space point.
  • the processing module is specifically configured to: determine that the coordinates of the n first parking space points coincide with the coordinates of k parking space points among the coordinates of the m second parking space points; when k ≥ 3, the parking space information can be determined according to the m pieces of second parking space point information and/or the n pieces of first parking space point information; when k ≤ 2, the parking space information is determined according to the n pieces of first parking space point information.
  • the first parking space point information further includes a first confidence level corresponding to the coordinates of the first parking space point
  • the second parking space point information further includes a second confidence level corresponding to the coordinates of the second parking space point
  • the coordinates of the weighted parking space point = (the coordinates of the first parking space point × the first confidence level corresponding to the coordinates of the first parking space point + the corresponding coordinates of the second parking space point × the second confidence level corresponding to the coordinates of the second parking space point) / 2.
  • the processing module is specifically configured to: determine the parking space frame transition information according to the image information of the parking space, and determine the first parking space frame information according to the parking space frame transition information and the first parking space point information.
  • the processing module is further configured to: determine the third parking space point information according to the first parking space frame information; and if it is determined that first parking space point information exists within the preset range of the third parking space point information, replace the third parking space point information with the first parking space point information to obtain the second parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame; the processing module is specifically configured to: determine the second parking space frame information according to the first parking space point information, where the second parking space frame information includes the center coordinates of the second parking space frame; and determine, according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame, the n pieces of first parking space point information and the m pieces of second parking space point information belonging to the same parking space frame.
  • the image information of the parking space may be image information of a panoramic image of the parking space.
  • the present application provides a detection device, which can be used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may be a vehicle or a cloud server, or a module used in the cloud server or the vehicle, such as a chip or a chip system or a circuit.
  • the detection device may include: a processor.
  • the processor may be configured to support the detection device to perform the corresponding functions shown in the second aspect above.
  • the detection device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the detection device.
  • the processor is configured to acquire the image information of the parking space, determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space and the first parking space point information, and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the present application provides a detection device, which is used to implement the second aspect or any one of the methods in the second aspect, and includes corresponding functional modules, respectively used to implement the steps in the above methods.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • the detection device may include an acquisition module and a processing module, where the acquisition module is configured to acquire the image information of the parking space, and the processing module is configured to determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space and the first parking space point information, and determine the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
  • the present application provides a computer-readable storage medium in which a computer program or instructions are stored; when the computer program or instructions are executed by the detection device, the detection device is caused to perform the method in the above first aspect or any possible implementation manner of the first aspect, or to perform the method in the above second aspect or any possible implementation manner of the second aspect.
  • the present application provides a computer program product, the computer program product including a computer program or instructions; when the computer program or instructions are executed by the detection device, the detection device is caused to perform the method in the above first aspect or any possible implementation manner of the first aspect.
  • Figure 1a is a schematic diagram of a parking space frame provided by the present application.
  • Figure 1b is a schematic diagram of a parking space inclination angle provided by the present application.
  • Figure 2a is a schematic diagram of a system architecture provided by the present application.
  • Figure 2b is a schematic diagram of another system architecture provided by the present application.
  • Figure 2c is a schematic diagram of the positional relationship between a vehicle and a sensor provided by the present application.
  • Fig. 3 is a schematic flow chart of a parking space detection method provided by the present application.
  • Fig. 4a is a schematic flowchart of a method for obtaining a panoramic image of a parking space provided by the present application.
  • Fig. 4b is a schematic diagram of images obtained from four angles provided by the present application.
  • FIG. 5 is a schematic flow chart of a method for obtaining second parking space point information provided by the present application.
  • FIG. 6 is a schematic flowchart of a method for determining that a first parking space point and a second parking space point belong to the same parking space frame provided by the present application.
  • FIG. 7 is a schematic flowchart of a method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by the present application.
  • Fig. 8a is a schematic flowchart of a method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by the present application.
  • Fig. 8b is a schematic flow chart of another method for determining the parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by the present application.
  • FIG. 9 is a schematic flow chart of another parking space detection method provided by the present application.
  • FIG. 10 is a schematic flow chart of another parking space detection method provided by the present application.
  • FIG. 11 is a schematic structural diagram of a parking space detection device provided by the present application.
  • FIG. 12 is a schematic structural diagram of a parking space detection device provided by the present application.
  • FIG. 1a is a schematic diagram of a parking space frame provided by the present application.
  • P1P2 is called the entrance line
  • P1P4 and P2P3 can both be called the dividing line
  • the intersection point of the entrance line and the dividing line can be called an entrance parking space point (or a parking mark point); that is, P1 and P2 in Figure 1a are called entrance parking space points
  • P3 and P4 can be called non-entrance parking space points.
  • P1, P2, P3 and P4 can be collectively referred to as parking space points
  • parking space points can also be understood as corner points at each corner of the parking space frame, therefore, parking space points can also be called parking space corner points.
  • the dividing line is perpendicular to the entrance line
  • the parking spaces whose dividing line is perpendicular to the entrance line can be called vertical parking spaces or parallel parking spaces.
  • the inclination angle of the parking space or the inclination angle of the dividing line or the inclination angle of the parking space frame refers to the angle between the dividing line and the x-axis of the image.
  • the included angle between the dividing line of the parking space frame and the x-axis of the image is θ, as shown in Figure 1b.
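As a sketch, the inclination angle can be computed from two points on a dividing line. The point representation and function name are illustrative assumptions; the disclosure only defines the angle geometrically.

```python
import math

# Sketch: inclination angle of the parking space frame, i.e. the angle
# between a dividing line (e.g. from P1 to P4) and the x-axis of the image.

def inclination_angle(p_entry, p_non_entry):
    """Angle in radians between the dividing line and the image x-axis."""
    dx = p_non_entry[0] - p_entry[0]
    dy = p_non_entry[1] - p_entry[1]
    return math.atan2(dy, dx)
```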
  • This type of algorithm usually uses a template matching method to determine the inclination angle of the parking space.
  • the distance between the parking space and the own car refers to the distance from the center of the own car to the entrance of the parking space.
  • the distance S1 and the distance S2 represent the distance between the parking space and the vehicle.
  • the distance S3 and the distance S4 indicate the distance between the parking space and the vehicle.
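The distance definition above (ego-vehicle center to the entrance line of the parking space) can be sketched as a point-to-line distance; the coordinate representation is an assumption for illustration.

```python
import math

# Sketch: distance between the parking space and the ego vehicle, taken as
# the perpendicular distance from the ego vehicle's center to the entrance
# line through P1 and P2.

def distance_to_entrance(ego_center, p1, p2):
    """Perpendicular distance from ego_center to the line through p1 and p2."""
    (x0, y0), (x1, y1), (x2, y2) = ego_center, p1, p2
    numerator = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return numerator / math.hypot(x2 - x1, y2 - y1)
```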
  • FIG. 2a is a schematic diagram of an applicable system architecture of the present application.
  • the system may include a vehicle 101 and a vehicle management server 102 .
  • the vehicle 101 refers to a vehicle that has the functions of collecting images of the surrounding environment and remote communication.
  • the vehicle 101 is provided with a sensor 1011, which can realize the collection of the surrounding environment information of the vehicle.
  • the sensor may be, for example, an image acquisition device, and the image acquisition device may be, for example, at least one of a fisheye camera, a monocular camera, and a depth camera.
  • the sensors are set in the four directions of front, rear, left, and right of the vehicle (see Fig. 2c).
  • the remote communication function of the vehicle 101 can generally be realized by a communication module provided on the vehicle 101; the communication module includes, for example, a telematics box (TBOX) or a wireless communication system (see the introduction of the wireless communication system 244 in FIG. 2b), etc.
  • the vehicle management server 102 can realize the function of parking space detection.
  • the vehicle management server 102 is a single server, or may be a server cluster composed of a plurality of servers.
  • the vehicle management server 102 may also be, for example, a cloud server (also called the cloud, a cloud controller, or an Internet of Vehicles server, etc.).
  • A cloud server is a general term for devices with data processing capabilities, such as physical devices (hosts or processors), virtual devices (virtual machines or containers), and chips or integrated circuits.
  • the vehicle management server 102 may integrate all functions on an independent physical device, or may deploy different functions on different physical devices, which is not limited in this application.
  • one vehicle management server 102 can communicate with multiple vehicles 101 .
  • the number of vehicles 101 , vehicle management servers 102 , and sensors 1011 in the system architecture shown in FIG. 2 a is just an example, and the present application does not limit it.
  • the name of the vehicle management server 102 in the system is only an example, and other possible names may be used for specific implementation, for example, it may also be called a parking space detection device, which is not limited in this application.
  • the vehicle 101 in Fig. 2a described above may be the vehicle in Fig. 2b described below.
  • FIG. 2 b is a schematic diagram of another applicable system architecture of the present application.
  • the vehicle may be configured in a fully or partially autonomous driving mode.
  • Components coupled to or included in vehicle 200 may include propulsion system 210 , sensor system 220 , control system 230 , peripherals 240 , power supply 250 , computer system 260 , and user interface 270 .
  • Components of vehicle 200 may be configured to operate interconnected with each other and/or with other components coupled to various systems.
  • power supply 250 may provide power to all components of vehicle 200 .
  • Computer system 260 may be configured to receive data from and control propulsion system 210 , sensor system 220 , control system 230 , and peripherals 240 .
  • Computer system 260 may also be configured to generate a display of images on user interface 270 and to receive input from user interface 270 .
  • vehicle 200 may include more, fewer or different systems, and each system may include more, fewer or different components.
  • illustrated systems and components may be combined or divided in any manner, which is not specifically limited in the present application.
  • Propulsion system 210 may provide powered motion for vehicle 200 . As shown in FIG. 2 b , propulsion system 210 may include engine/motor 214 , energy source 213 , transmission 212 and wheels/tyres 211 . Additionally, propulsion system 210 may additionally or alternatively include other components than those shown in Figure 2b. This application does not specifically limit it.
  • the sensor system 220 may include several sensors for sensing information about the environment in which the vehicle 200 is located. As shown in FIG. 2b, the sensors of the sensor system 220 may include a camera sensor 223. The camera sensor 223 may be used to capture multiple images of the surrounding environment of the vehicle 200. The camera sensor 223 may be a still camera or a video camera. Further, optionally, the sensor system 220 may also include a global positioning system (Global Positioning System, GPS) 226, an inertial measurement unit (Inertial Measurement Unit, IMU) 225, a laser radar sensor, a millimeter wave radar sensor, and actuators 221 for modifying the position and/or orientation of the sensors, etc.
  • the millimeter wave radar sensor may utilize radio signals to sense objects within the surrounding environment of the vehicle 200 .
  • millimeter wave radar 222 may be used to sense the speed and/or heading of a target in addition to sensing the target.
  • Lidar 224 may utilize laser light to sense objects in the environment in which vehicle 200 is located.
  • GPS 226 may be any sensor used to estimate the geographic location of vehicle 200. To this end, the GPS 226 may include a transceiver to estimate the position of the vehicle 200 relative to the earth based on satellite positioning data.
  • computer system 260 may be used to estimate the road traveled by vehicle 200 using GPS 226 in conjunction with map data.
  • the IMU 225 can be used to sense changes in position and orientation of the vehicle 200 based on inertial accelerations and any combination thereof.
  • the combination of sensors in IMU 225 may include, for example, accelerometers and gyroscopes. Additionally, other combinations of sensors in the IMU 225 are possible.
  • the control system 230 controls the operation of the vehicle 200 and its components.
  • Control system 230 may include various elements including steering unit 236 , accelerator 235 , braking unit 234 , sensor fusion algorithm 233 , computer vision system 232 , route control system 234 , and obstacle avoidance system 237 .
  • Steering system 236 is operable to adjust the heading of vehicle 200.
  • the throttle 235 is used to control the operating speed of the engine 214 and thus the speed of the vehicle 200 .
  • Control system 230 may additionally or alternatively include other components than those shown in Figure 2b. This application does not specifically limit it.
  • the braking unit 234 is used to control the deceleration of the vehicle 200 .
  • the braking unit 234 may use friction to slow the wheels 211 .
  • the brake unit 234 can convert the kinetic energy of the wheel 211 into electric current.
  • the braking unit 234 may also take other forms to slow down the rotation of the wheels 211 to control the speed of the vehicle 200 .
  • Computer vision system 232 may be operable to process and analyze images captured by camera sensor 223 in order to identify objects and/or features in the environment surrounding vehicle 200 . Objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computer vision system 232 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • computer vision system 232 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 234 is used to determine the travel route of the vehicle 200 .
  • route control system 242 may combine data from sensor system 220, GPS 226, and one or more predetermined maps to determine a travel route (eg, a parking route) for vehicle 200 .
  • Obstacle avoidance system 237 is used to identify, evaluate and avoid or otherwise overcome potential obstacles in the environment of vehicle 200 .
  • control system 230 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 240 may be configured to allow vehicle 200 to interact with external sensors, other vehicles, and/or a user.
  • peripherals 240 may include, for example, a wireless communication system 244 , a touch screen 243 , a microphone 242 and/or a speaker 241 .
  • Peripheral device 240 may additionally or alternatively include other components than those shown in FIG. 2b. This application does not specifically limit it.
  • peripheral device 240 provides a means for a user of vehicle 200 to interact with user interface 270 .
  • touch screen 243 may provide information to a user of vehicle 200 .
  • the user interface 270 may also receive the user's input through the touch screen 243.
  • peripheral device 240 may provide a means for vehicle 200 to communicate with other devices located within the vehicle.
  • microphone 242 may receive audio (eg, voice commands or other audio input) from a user of vehicle 200 .
  • speaker 241 may output audio to a user of vehicle 200 .
  • Wireless communication system 244 may communicate wirelessly with one or more devices, either directly or via a communication network.
  • the wireless communication system 244 may use 3G cellular communication, such as code division multiple access (CDMA), EVDO, global system for mobile communications (GSM)/general packet radio service (GPRS), or 4G cellular communication, such as long term evolution (LTE), or 5G cellular communication.
  • the wireless communication system 244 can use WiFi to communicate with a wireless local area network (wireless local area network, WLAN).
  • the wireless communication system 244 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system 244 may include one or more dedicated short range communications (DSRC) devices, which may involve public and/or private data communications.
  • Power supply 250 may be configured to provide power to some or all components of vehicle 200 .
  • power source 250 may comprise, for example, a rechargeable lithium-ion or lead-acid battery.
  • one or more battery packs may be configured to provide electrical power.
  • Other power supply materials and configurations are also possible.
  • power source 250 and energy source 213 may be implemented together, as in some all-electric vehicles.
  • Components of vehicle 200 may be configured to function in an interconnected manner with other components within and/or external to their respective systems. To this end, the components and systems of vehicle 200 may be communicatively linked together via a system bus, network, and/or other connection mechanisms.
  • Computer system 260 may include at least one processor 261 executing instructions 2631 stored in a computer-readable medium such as memory 263 .
  • Computer system 260 may also be a plurality of computing devices that control individual components or subsystems of vehicle 200 in a distributed manner.
  • the processor 261 can be any conventional processor, such as a central processing unit (central processing unit, CPU). Alternatively, it can also be other general-purpose processors, digital signal processors (digital signal processors, DSPs), graphics processing units (graphic processing units, GPUs), application specific integrated circuits (application specific integrated circuits, ASICs), field programmable Gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor. It should be understood that the present application does not limit the number of sensors and processors included in the above vehicle system.
  • Although FIG. 2b functionally illustrates the processor, memory, and other elements of the computer system 260, those of ordinary skill in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • memory may be a hard drive or other storage medium located in a different housing than computer system 260 .
  • references to a processor or computer are to be understood to include references to collections of processors or computers or memories that may or may not operate in parallel.
  • some components, such as the steering and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located remotely from the vehicle and be in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • memory 263 may contain instructions 2631 (eg, program logic) executable by processor 261 to perform various functions of vehicle 200 , including those described above.
  • Memory 263 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 210, sensor system 220, control system 230, and peripherals 240.
  • memory 263 may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by vehicle 200 and computer system 260 during operation of vehicle 200 in autonomous, semi-autonomous, and/or manual modes.
  • a user interface 270 is used to provide information to or receive information from a user of the vehicle 200.
  • user interface 270 may include one or more input/output devices within set of peripheral devices 240 , such as wireless communication system 244 , touch screen 243 , microphone 242 and speaker 241 .
  • Computer system 260 may control functions of vehicle 200 based on input received from various subsystems (eg, propulsion system 210 , sensor system 220 , and control system 230 ), as well as from user interface 270 .
  • computer system 260 may utilize input from control system 230 in order to control steering unit 236 to avoid obstacles detected by sensor system 220 and obstacle avoidance system 237 .
  • computer system 260 is operable to provide control over many aspects of vehicle 200 and its subsystems.
  • one or more of these components described above may be installed separately from or associated with the vehicle 200 .
  • the memory 263 may exist partially or completely separately from the vehicle 200 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 2b should not be construed as a limitation to the embodiment of the present application.
  • the aforementioned vehicles include but are not limited to unmanned vehicles, smart vehicles (such as automated guided vehicles (AGV)), electric vehicles, digital vehicles, and intelligent manufacturing vehicles.
  • the parking space detection method provided in the present application can be applied to the fields of advanced driving assistant system (ADAS), automatic driving system or intelligent driving system, and is especially suitable for functions related to automatic parking. It can also be applied to more advanced functions that use parking space information as constraints, such as 3D modeling based on parking space information.
  • the present application proposes a parking space detection method.
  • the parking space detection method can improve the accuracy of the parking space information.
  • FIG. 3 is a schematic flow chart of a parking space detection method provided in the present application.
  • This method can be applied to the vehicle management server 102 in the above-mentioned FIG. 2a, or to the vehicle in the above-mentioned FIG. 2b.
  • the vehicle management server and the device in the vehicle executing the method can be collectively referred to as a parking space detection device.
  • the parking space detection device may be the vehicle management server 102 in FIG. 2a above, or may be the vehicle in FIG. 2b above.
  • the method includes the following steps:
  • step 301 the parking space detection device acquires the image information of the parking space.
  • the parking space detection device may acquire a panoramic image of the parking space.
  • a panoramic image of the parking space please refer to the introduction of FIG. 4a below.
  • feature information of the panoramic image of the parking space may be extracted based on the panoramic image of the parking space, that is, image information of the parking space may be obtained.
  • the panoramic image of the parking space may also be referred to as a surround-view image of the parking space or a bird's-eye view image of the parking space.
  • the panoramic image of the parking space can be used as the input of the feature extraction model.
  • the panoramic image of the parking space is input to the feature extraction model, and the feature extraction model can output the feature information of the panoramic image of the parking space (see FIG. 9 below); the feature information refers to points and/or lines and/or circles in the image, etc.
  • the feature information of the panoramic image refers to information that can characterize the panoramic image itself, such as points and/or lines and/or circles.
  • the feature extraction model may be obtained through supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • step 302 the parking space detection device determines first parking space point information according to the image information of the parking space.
  • the first parking space point information includes the coordinates of the first parking space point. Further, optionally, the first parking space point information also includes a first degree of confidence corresponding to the coordinates of the first parking space point.
  • the coordinates of the first parking space point refer to the pixel coordinates of the first parking space point in the image.
  • the pixel coordinates of the first parking space point on the image refer to the horizontal and vertical coordinates of the pixel of the first parking space point on the image.
  • the parking space detection device may determine N first parking space point information according to the image information of the parking space; each first parking space point information includes the coordinates of a first parking space point and, further, the first confidence corresponding to the coordinates of that first parking space point, where N is a positive integer.
  • the parking space detection device may determine the first parking space point information of the parking space according to the feature information of the panoramic image and the parking space point recognition algorithm.
  • the parking spot recognition algorithm may be an algorithm in a parking spot analysis model (or called a parking spot recognition model or a parking spot resolver).
  • the label of the parking spot analysis model is the coordinates of the parking spot (that is, the coordinates of the parking spot in the image).
  • the output of the parking spot analysis model is the coordinates of the parking spot.
  • the analysis model of the parking space point may be obtained by supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • the feature information of the panoramic image of the parking space can be input into the parking space point analysis model, and the parking space point analysis model performs multi-layer convolution operations and can output the coordinates of N first parking space points; further, it can also output the first confidence corresponding to the coordinates of each of the N first parking space points.
  • the parking spot recognition algorithm may also be a parking spot detection algorithm based on a gray image, a parking spot detection algorithm based on a binary image, or a parking spot detection algorithm based on a contour curve.
  • the process of the parking space point detection algorithm based on the contour curve may be, for example, to fit a straight line on the image through image recognition, and then identify the intersection point of the two straight lines as the parking space point.
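The contour-curve approach just described can be sketched as follows: two straight lines fitted in the image (e.g. parking separator lines) are intersected to obtain a candidate parking space point. The function below is a hypothetical illustration; its name and the sample coordinates are invented, not taken from the application.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4.

    Returns None when the lines are parallel (no candidate parking space point).
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:  # parallel or coincident separator lines
        return None
    det12 = x1 * y2 - y1 * x2
    det34 = x3 * y4 - y3 * x4
    x = (det12 * (x3 - x4) - (x1 - x2) * det34) / denom
    y = (det12 * (y3 - y4) - (y1 - y2) * det34) / denom
    return (x, y)

# A horizontal separator line and a vertical one meet at the corner (100, 0).
print(line_intersection((0, 0), (200, 0), (100, -50), (100, 50)))  # (100.0, 0.0)
```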
  • Step 303 the parking space detection device determines first parking space frame information according to the image information of the parking space.
  • the first parking space frame information includes but is not limited to the center coordinates of the first parking space frame, the confidence corresponding to the center coordinates of the first parking space frame, the size (such as length and width) of the first parking space frame, the confidence corresponding to the size of the first parking space frame, the inclination angle of the first parking space frame and the confidence corresponding to the inclination angle of the first parking space frame, etc.
  • the parking space detection device can determine the first parking space frame information according to the feature information of the panoramic image and the parking space frame recognition algorithm.
  • the label of the parking space frame analysis model can be the center coordinates of the parking space frame, the length and width of the parking space frame, and the inclination angle of the parking space frame.
  • the output of the parking space frame analysis model is the center coordinates of the parking space frame, the length and width of the parking space frame, and the inclination angle of the parking space frame.
  • the parsing model of the parking space frame may be obtained by supervised learning based on samples, and stored in the parking space detection device or a vehicle communicating with the parking space detection device.
  • the feature information of the panoramic image of the parking space can be input into the parking space frame analysis model, and the parking space frame analysis model performs multi-layer convolution operation to output the first parking space frame information.
  • the analysis model of the parking space frame can output the center coordinates of the first parking space frame, the confidence corresponding to the center coordinates of the first parking space frame, the length and width of the first parking space frame, the confidence corresponding to the length and width of the first parking space frame, the inclination angle of the first parking space frame and the confidence corresponding to the inclination angle of the first parking space frame. It should be noted that usually the confidences corresponding to the length and the width of the first parking space frame are the same.
  • the parking space detection device determines the first parking space frame information of the parking space according to the feature information of the panoramic image, the parking space frame recognition algorithm, and the first parking space point information.
  • the parking space detection device may determine the transition information of the parking space frame according to the image information of the parking space, and determine the first parking space frame information according to the transition information of the parking space frame and the first parking space point information.
  • the parking space frame recognition algorithm includes algorithms in the parking space frame transition model and the parking space frame analysis model
  • the feature information of the panoramic image of the parking space can be input into the parking space frame transition model, and the parking space frame transition model performs a multi-layer convolution operation
  • the transition information of the parking space frame may be output, wherein the parking space frame transition information may be, for example, line information other than the parking space points, or information within the parking space frame.
  • the transition information of the parking space frame and the first parking space point information are input into the analysis model of the parking space frame, and the analysis model of the parking space frame performs multi-layer convolution operation to output the information of the first parking space frame.
  • the N first parking space point information obtained above is also used as part of the input of the parking space frame analysis model; in this way, more accurate first parking space frame information can be output.
  • the obtained first parking space frame information may be incomplete due to the limitation of the perception range of the device for collecting image information, resulting in low accuracy of the obtained first parking space frame information.
  • the first parking space frame information is determined in combination with the obtained first parking space point information, which can improve the accuracy of the first parking space frame information.
  • abnormal parking spaces may also be excluded according to the length and width of the first parking space frame.
  • the frame information of the first parking space whose length and width do not fall within a reasonable value range can be eliminated. In this way, the accuracy of the first parking space frame information can be further improved.
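The length/width sanity check described above might look like the following sketch. The numeric bounds are assumptions chosen for illustration (roughly typical parking-space dimensions in metres) and do not come from the application.

```python
def filter_frames(frames, length_range=(4.0, 7.0), width_range=(2.0, 3.5)):
    """Keep only parking space frames whose length and width fall in a
    reasonable value range; frames outside the range are treated as abnormal."""
    lo_l, hi_l = length_range
    lo_w, hi_w = width_range
    return [f for f in frames
            if lo_l <= f["length"] <= hi_l and lo_w <= f["width"] <= hi_w]

frames = [
    {"center": (120, 80), "length": 5.3, "width": 2.4, "angle": 0.0},
    {"center": (300, 80), "length": 1.2, "width": 2.4, "angle": 0.0},  # abnormal
]
print(filter_frames(frames))  # only the first frame survives
```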
  • step 302 may be performed first and then step 303 may be performed, or step 303 may be performed first and then step 302 may be performed, or step 303 and step 302 may be performed simultaneously, which is not limited in this application.
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space information of the parking space includes but not limited to the coordinates of the parking space point, the center coordinates of the parking space frame, the size (such as length and width) of the parking space frame, the inclination angle of the parking space frame, and the like.
  • the parking space information may be parking space frame information.
  • the parking space detection device may determine the second parking space point information according to the first parking space frame information.
  • the second parking space point information includes coordinates of M second parking space points, and further includes a second degree of confidence corresponding to the coordinates of each second parking space point.
  • the coordinates of the second parking space point refer to the pixel coordinates of the second parking space point in the image.
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the second parking space point information.
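The application does not spell out here how the first and second parking space point information are combined; as one hypothetical illustration, nearby candidates (within a pixel-distance threshold) could be merged greedily, keeping the coordinate with the higher confidence. All names and numbers below are invented.

```python
def fuse_points(first, second, max_dist=10.0):
    """first/second: lists of ((u, v), confidence) in pixel coordinates.

    Greedy nearest-neighbour merge: a second point within max_dist pixels of a
    first point replaces it when its confidence is higher; otherwise it is
    appended as a new candidate.
    """
    fused = list(first)
    for (u2, v2), c2 in second:
        match = None
        for i, ((u1, v1), c1) in enumerate(fused):
            if (u1 - u2) ** 2 + (v1 - v2) ** 2 <= max_dist ** 2:
                match = i
                break
        if match is None:
            fused.append(((u2, v2), c2))   # point seen only via the frame branch
        elif c2 > fused[match][1]:
            fused[match] = ((u2, v2), c2)  # keep the more confident candidate
    return fused

first = [((100.0, 50.0), 0.9)]
second = [((102.0, 51.0), 0.95), ((300.0, 50.0), 0.8)]
print(fuse_points(first, second))
```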
  • the parking space information is the final output parking space information, which can be displayed on the touch screen of the vehicle.
  • the coordinates of the parking space points in the detected parking space information are actually the pixel coordinates in the image information, which need to be combined with the internal parameter matrix and external parameter matrix detected during sensor calibration to convert the pixel coordinates into actual coordinates.
  • the vehicle can locate the parking space according to the actual coordinates.
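The pixel-to-actual-coordinate conversion mentioned above can be sketched as follows, under the simplifying assumption that the intrinsic and extrinsic calibration of the sensor has been combined into a single 3x3 ground-plane homography H mapping world ground coordinates (X, Y, 1) to pixel coordinates (u, v, 1). The matrix values are invented for illustration.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Invert the ground-plane homography to recover actual coordinates."""
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # dehomogenize

H = np.array([[100.0,   0.0, 640.0],   # assumed: 100 px per metre,
              [  0.0, 100.0, 360.0],   # image centre at (640, 360)
              [  0.0,   0.0,   1.0]])
print(pixel_to_ground(H, 740.0, 460.0))  # ~[1. 1.]: one metre right and ahead
```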
  • the parking space information is determined from both the first parking space point information and the first parking space frame information, so that the two different kinds of feature information complement each other, thereby obtaining more accurate parking space information from the local (such as the coordinates of the parking space points) to the global (such as the inclination angle of the dividing line of the parking space, etc.).
  • the acquired image information of the parking space enters two branches, one of which is used to determine the first parking space point information, and the other is used to determine the first parking space frame information; in this way, a more generalized network structure (or network model) can be used, which can reduce the demand for the amount of original data under the same detection accuracy.
  • Referring to FIG. 4a, it is a schematic flowchart of a method for acquiring a panoramic image of a parking space provided by the present application.
  • the method includes the following steps:
  • Step 401 M sensors collect original images respectively, where M is a positive integer.
  • M can be equal to 4.
  • the sensors can be arranged in the front, rear, left, and right directions of the vehicle, and the four sensors can respectively collect original images to obtain 4 original images.
  • M may also be greater than 4, or less than 4.
  • the field of view of each sensor should be greater than 90 degrees as far as possible, so as to achieve all-round coverage around the vehicle.
  • the original image collected by each sensor is an image of a partial angle of the panoramic image (or called a surround-view image or a partial bird's-eye view image), and images from different angles can capture environmental information around the vehicle from different angles.
  • when M is equal to 1, the original image collected by the sensor is the panoramic image.
  • step 402 the parking space detection device acquires original images collected by M sensors.
  • the original image collected by the sensor can be sent to the cloud server through the communication module in the vehicle.
  • the original image collected by the sensor can be transmitted to the processor.
  • Step 403 the parking space detection device processes the acquired M original images to obtain M surround-view images.
  • the parking space detection device may firstly perform de-distortion processing on each original image.
  • the original image may be corrected based on built-in correction parameters of the sensor.
  • step 404 the parking space detection device performs homography transformation on the M surround-view images respectively to obtain M top-view images.
  • the surround-view image can contain a checkerboard pattern.
  • the parking space detection device can obtain the actual pixel position information of the checkerboard points, and obtain the pixel position information of the checkerboard points in the surround-view image from the distorted original image according to a preset image processing method.
  • the pixel position information of the checkerboard points in the original image can be obtained through a preset checkerboard point setting method.
  • there is a preset proportional relationship between the actual pixel position information of the checkerboard points and the pixel position information of the checkerboard points in the surround-view image, based on which the pixel position information of the checkerboard points in the surround-view image can be obtained from the distorted original image.
  • the pixel position information of the checkerboard points in the surround-view image can be obtained according to the preset conversion ratio information and the actual pixel position information of the checkerboard points.
  • a homography matrix is obtained, and each de-distorted original image is transformed according to the homography matrix.
  • the process of processing the surround-view images from multiple angles into top-view images can be understood as performing homography transformation processing.
  • the look-around image at each angle corresponds to the look-around image in a direction captured by the sensor.
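As a hedged sketch of how such a homography matrix can be estimated from checkerboard point correspondences, the standard direct linear transform (DLT) can be used; the application does not prescribe this particular method, and the four correspondences below are invented (a pure scale-plus-shift mapping, so the expected matrix is easy to read off).

```python
import numpy as np

def find_homography(src, dst):
    """Estimate H (3x3) with src[i] -> dst[i], from >= 4 point correspondences,
    via the DLT: stack two linear equations per correspondence and take the
    null-space vector of the resulting system (smallest singular vector)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (110, 10), (110, 110), (10, 110)]  # 100x scale + (10, 10) shift
H = find_homography(src, dst)
print(np.round(H, 3))  # ≈ [[100, 0, 10], [0, 100, 10], [0, 0, 1]]
```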
  • the parking space detection device can stitch M top-view images to obtain a panoramic image of the parking space.
  • the parking space detection device can extract images of overlapping regions and non-overlapping regions in M top-view images, perform feature matching on images of overlapping regions, and fuse overlapping regions and non-overlapping regions according to the matching results, thereby Get a panoramic image of the parking space.
  • 4 fisheye cameras are set up around the vehicle.
  • 4 top-view images can be obtained.
  • the 4 top-view images have 4 overlapping areas A, B, C and D, as shown in Figure 4b.
  • the overlapping area is formed by matching and fusing two surround-view images according to image features, that is, feature point extraction and matching are performed on the overlapping area of the two images.
  • the image in the overlapping area A can be the feature matching and fusion of the image in front and the image on the left
  • the overlapping area B can be the feature matching and fusion of the front image and the right image
  • the overlapping area C can be the feature matching and fusion of the left image and the rear image
  • the overlapping area D can be the feature matching and fusion of the right image and the rear image; the original images of the corresponding surround-view images are directly retained in the non-overlapping areas, and finally fused into a panoramic image of the parking space.
  • based on the panoramic image of the parking space, more accurate parking space information can be obtained.
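The fusion described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the implementation of the present application: overlapping pixels are blended by simple averaging in place of the feature-matching-based fusion, and the warping of each fisheye image into the common top-view coordinate system is assumed to have been done already; all names are hypothetical.

```python
import numpy as np

def fuse_panorama(top_views, masks):
    """Fuse M single-direction top-view images into one panoramic image.

    top_views: list of HxW float arrays, each already warped into the
               common top-view coordinate system (zeros outside coverage).
    masks:     list of HxW bool arrays marking each image's coverage.
    Non-overlapping pixels keep the original image; overlapping pixels
    (e.g. regions A-D for a 4-camera setup) are blended by averaging,
    standing in for the feature-matching-based fusion in the text.
    """
    acc = np.zeros_like(top_views[0], dtype=float)
    cnt = np.zeros(top_views[0].shape, dtype=float)
    for img, mask in zip(top_views, masks):
        acc[mask] += img[mask]
        cnt[mask] += 1
    pano = np.zeros_like(acc)
    covered = cnt > 0
    pano[covered] = acc[covered] / cnt[covered]
    return pano
```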
  • a possible method for determining the information of the second parking space point is exemplarily shown as follows.
  • referring to FIG. 5, it is a schematic flowchart of a method for acquiring second parking space point information provided by the present application. The method includes the following steps:
  • Step 501 the parking space detection device acquires the first parking space frame information of the parking space.
  • step 501 reference may be made to the introduction of the aforementioned step 303, which will not be repeated here.
  • Step 502 the parking space detection device may convert the first parking space frame information into third parking space point information.
  • the first parking space frame information includes the center coordinates of the first parking space frame, the length and width of the first parking space frame, the inclination angle of the first parking space frame, the confidence degree corresponding to the center coordinates of the first parking space frame, the confidence degree corresponding to the length and width of the first parking space frame, and the confidence degree corresponding to the inclination angle of the first parking space frame.
  • the parking space detection device can determine the third parking space point information based on the center coordinates of the first parking space frame, the inclination angle of the first parking space frame, and the length and width of the first parking space frame.
  • the third parking space point information includes the coordinates of the third parking space point.
  • the parking space detection device can also determine the third confidence degree corresponding to the third parking space point based on the confidence degree corresponding to the center coordinates of the first parking space frame, the confidence degree corresponding to the inclination angle of the first parking space frame, and the confidence degree corresponding to the length and width of the first parking space frame.
  • the parking space detection device can convert each first parking space frame information into a set of third parking space point information, where the third confidence degrees corresponding to the coordinates of each third parking space point in one set are the same: the third confidence degree is the average of the confidence degree corresponding to the center coordinates of the first parking space frame to which the third parking space point belongs, the confidence degree corresponding to the length and width of that frame, and the confidence degree corresponding to the inclination angle of that frame. It can also be understood that the confidences of the coordinates of all third parking space points in a group converted from one first parking space frame are the same.
  • the first parking space frame information A includes the confidence degree A1 corresponding to the center coordinates of the first parking space frame, the confidence degree A2 corresponding to the length and width of the first parking space frame, and the confidence degree A3 corresponding to the inclination angle of the first parking space frame.
  • the confidence degree of each third parking space point obtained by converting the first parking space frame information A into third parking space point information is the same, that is, equal to (A1+A2+A3)/3.
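The conversion of step 502 from a first parking space frame to third parking space points can be sketched as follows. This is an illustrative Python sketch under assumed conventions (inclination angle in radians, corners taken as rotated offsets from the center); all names are hypothetical and not part of the original disclosure.

```python
import math

def frame_to_points(cx, cy, length, width, angle,
                    conf_center, conf_size, conf_angle):
    """Convert one first parking space frame into four third parking
    space point coordinates plus a shared third confidence degree.

    (cx, cy): center coordinates; length/width: frame size;
    angle: inclination of the frame, assumed in radians.
    The third confidence is the mean of the three frame confidences,
    so all four corner points share the same value, as in the text.
    """
    ca, sa = math.cos(angle), math.sin(angle)
    hl, hw = length / 2.0, width / 2.0
    corners = []
    for dx, dy in [(-hl, -hw), (hl, -hw), (hl, hw), (-hl, hw)]:
        # rotate the corner offset by the inclination angle, then shift
        corners.append((cx + dx * ca - dy * sa, cy + dx * sa + dy * ca))
    conf = (conf_center + conf_size + conf_angle) / 3.0
    return corners, conf
```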
  • Step 503 the parking space detection device traverses each third parking space point information; if it determines that there is one first parking space point within the preset range, it performs the following step 504; if it determines that there are two or more first parking space points within the preset range, it performs the following step 505.
  • the preset range may be, for example, about 15 pixels (px).
  • 1px corresponds to about 2 centimeters (cm) of the physical parking space, so 15px corresponds to about 30 centimeters (cm).
  • Step 504 the parking space detection device replaces the third parking space point information with the first parking space point information.
  • replacing the third parking space point information with the first parking space point information can be understood as replacing the coordinates of the third parking space point with the coordinates of the first parking space point, and replacing the third confidence degree corresponding to the third parking space point with the first confidence degree corresponding to the coordinates of the first parking space point.
  • Step 505 the parking space detection device replaces the third parking space point information with the information of the first parking space point closest to the third parking space point.
  • that is, the coordinates of the first parking space point closest to the third parking space point are used to replace the coordinates of the third parking space point, and the first confidence degree corresponding to the coordinates of that closest first parking space point is used to replace the third confidence degree corresponding to the third parking space point.
  • for example, the third parking space point information includes {coordinates of third parking space point 31, coordinates of third parking space point 32, coordinates of third parking space point 33, coordinates of third parking space point 34}, where third parking space point 31 corresponds to third confidence degree 31, third parking space point 32 corresponds to third confidence degree 32, third parking space point 33 corresponds to third confidence degree 33, and third parking space point 34 corresponds to third confidence degree 34; if the third parking space point 31 information is replaced by the first parking space point 11 information and the rest are not replaced, then the obtained second parking space point information includes {coordinates of first parking space point 11 and the corresponding first confidence degree 11, coordinates of third parking space point 32 and the corresponding third confidence degree 32, coordinates of third parking space point 33 and the corresponding third confidence degree 33, coordinates of third parking space point 34 and the corresponding third confidence degree 34}.
  • the parking space detection device can record and store the third parking space point information replaced by the first parking space point information in the above steps 503 to 505, so that it can be used directly when later determining whether the coordinates of the first parking space point and the coordinates of the second parking space point coincide.
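The replacement logic of steps 503 to 505 can be sketched as follows. This is an illustrative Python sketch, not the implementation of the present application: each third parking space point is replaced by the nearest first parking space point within the ~15 px preset range, and the indices of replaced points are recorded for the later coincidence checks; all names are hypothetical.

```python
def refine_points(third_pts, first_pts, radius=15.0):
    """Replace each third parking space point with the nearest first
    parking space point found within `radius` pixels (the text's preset
    range of about 15 px, i.e. roughly 30 cm at 2 cm per pixel).

    third_pts / first_pts: lists of ((x, y), confidence).
    Returns the second parking space point list plus the indices of the
    third points that were replaced (kept for later coincidence checks).
    """
    second_pts, replaced = [], []
    for i, (tp, tconf) in enumerate(third_pts):
        best = None
        for fp, fconf in first_pts:
            d = ((tp[0] - fp[0]) ** 2 + (tp[1] - fp[1]) ** 2) ** 0.5
            if d <= radius and (best is None or d < best[0]):
                best = (d, fp, fconf)
        if best is None:
            second_pts.append((tp, tconf))          # keep the third point
        else:
            second_pts.append((best[1], best[2]))   # nearest first point wins
            replaced.append(i)
    return second_pts, replaced
```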
  • the method for obtaining the second parking space point information shown in FIG. 5 above is only a possible example; the first parking space frame information can also be directly converted into the second parking space point information.
  • that is, the third parking space point information in the above step 502 can be replaced with the second parking space point information, which is not limited in this application.
  • a possible implementation is to determine the parking space information of the parking space according to the first parking space point information and the second parking space point information. Based on this, it is necessary to first determine which first parking space point information and which second parking space point information belong to the same parking space frame. Or it can also be understood that it is necessary to first determine which first parking space point information and which second parking space point information correspond to the same parking space frame.
  • the following exemplarily shows a flow of a method for determining that the first parking space point and the second parking space point belong to the same parking space frame, see FIG. 6 .
  • the method includes the following steps:
  • the parking space detection device may determine second parking space frame information according to the first parking space point information.
  • the process of obtaining the information of the first parking spot can refer to the introduction of the aforementioned step 302 .
  • the parking space detection device may determine at least one second parking space frame information according to the first parking space point information.
  • distance constraints and parking corner point types (such as L-type, T-type, or I-type) can be used to pair the N first parking space points pairwise.
  • the parking space detection device can calculate the distance between two parking space points in the x dimension and the y dimension; this distance can be taken as the length Length of the parking space. For example, when 600cm ≤ Length ≤ 800cm, the line formed by the two parking space points can be considered the length of the parking space (that is, the dividing line).
  • otherwise, if the distance matches the expected width, the line formed by the two parking space points is the width of the parking space (that is, the entrance line or the non-entrance line). Further, after the length and width of the second parking space frame are determined, the center coordinates of the second parking space frame can also be determined. If the calculated Length of the parking space meets neither of the above two geometric features, the candidate is considered an unconventional parking space, or a parking space detection error, and is discarded directly. Since the first confidence degrees of the first parking space points are independent of each other, the accuracy of the second parking space frame information determined from first parking space point information with two independent confidence degrees is relatively high.
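The pairwise pairing rule above can be sketched as follows. This is an illustrative Python sketch: only the length constraint is implemented, using the 600 cm to 800 cm range given as the example above; the width constraint and the corner-type check are omitted, and all names are hypothetical.

```python
def build_second_frames(points, long_range=(600.0, 800.0)):
    """Pair first parking space points into candidate second frames
    using the distance constraint from the text: if the distance between
    two points falls in [600 cm, 800 cm] the pair is treated as a
    parking space length (dividing line); pairs outside the expected
    geometry are discarded as unconventional or mis-detected.

    points: list of (x, y) in centimeters.  Returns (p, q, center) tuples.
    """
    frames = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            p, q = points[i], points[j]
            d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            if long_range[0] <= d <= long_range[1]:
                # the pair defines a dividing line; its midpoint feeds
                # into the second parking space frame center
                center = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
                frames.append((p, q, center))
    return frames
```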
  • the parking space detection device can match the first parking space point information and the second parking space point information belonging to the same parking space frame according to the first parking space frame information and the second parking space frame information.
  • the second parking space frame information includes the center coordinates of the second parking space frame
  • the first parking space frame information includes the center coordinates of the first parking space frame
  • the parking space detection device may perform matching according to the center coordinates of the second parking space frame and the center coordinates of the first parking space frame.
  • for example, the center coordinates (x1i, y1i) of the i-th second parking space frame are matched one by one against the center coordinates (x2j, y2j) of the first parking space frames, and the first parking space frame whose center distance (for example, the Euclidean distance) is smaller than a preset value is determined to belong to the same parking space frame as the i-th second parking space frame.
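The center-coordinate matching can be sketched as follows. This is an illustrative Python sketch; `max_dist` stands in for the unspecified preset value and is an assumption, as are the function and parameter names.

```python
def match_frames(second_centers, first_centers, max_dist=50.0):
    """Match each second parking space frame to the first parking space
    frame whose center lies closest, provided the distance is below a
    preset value (`max_dist` is an assumed illustrative threshold).

    Returns {second_frame_index: first_frame_index}; frames matched this
    way are considered to belong to the same parking space frame.
    """
    pairs = {}
    for i, (x1, y1) in enumerate(second_centers):
        best_j, best_d = None, max_dist
        for j, (x2, y2) in enumerate(first_centers):
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs[i] = best_j
    return pairs
```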
  • the method for determining the first parking space point information and the second parking space point information belonging to the same parking space frame shown in FIG. 6 is only a possible example.
  • for example, the first parking space frame information can also be converted into second parking space point information, and the second parking space point information can then be matched with the first parking space point information: after the coordinates of the first parking space points and the coordinates of the second parking space points are matched, it can be determined which second parking space point information and first parking space point information belong to the same parking space frame, which is not limited in this application.
  • according to the first parking space point information and the second parking space point information corresponding to each parking space frame, the parking space information of a single parking space can be obtained.
  • the parking space detection method provided in the present application may further obtain parking space information corresponding to multiple parking spaces, for example, the parking space information of a parking lot.
  • the following exemplarily shows three possible methods for determining parking space information based on n pieces of first parking space point information and m pieces of second parking space point information.
  • the following takes n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame as an example.
  • FIG. 7 is a schematic flowchart of a method for determining parking space information of a parking space based on n first parking space point information and m second parking space point information provided by the present application. The method includes the following steps:
  • Step 701 the parking space detection device acquires n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame.
  • the first parking space point information includes the coordinates of the first parking space point
  • the second parking space point information includes the coordinates of the second parking space point
  • Step 702 the parking space detection device determines that the coordinates of the n first parking space points and the coordinates of the m second parking space points have k coinciding parking space point coordinates; if k ≥ 3, execute the following step 703; if 0 ≤ k ≤ 2, execute the following step 704.
  • for example, the parking space detection device can match the coordinates of the first parking space points and the coordinates of the second parking space points one by one, so as to determine the number k of coinciding parking space point coordinates between the coordinates of the n first parking space points and the coordinates of the m second parking space points.
  • coincidence does not mean absolute complete coincidence, and a certain error range may be allowed.
  • the error can be 20px.
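The determination of k with the ~20 px error tolerance can be sketched as follows. This is an illustrative Python sketch; consuming each matched first parking space point so it is not counted twice is an assumption, as are the names.

```python
def count_coincident(first_pts, second_pts, tol=20.0):
    """Count k, the number of second parking space points that coincide
    with some first parking space point, where "coincide" allows the
    error range from the text (about 20 px).

    Each second point is matched against the first points one by one;
    a first point is consumed once matched so it is not counted twice.
    """
    remaining = list(first_pts)
    k = 0
    for sx, sy in second_pts:
        for idx, (fx, fy) in enumerate(remaining):
            if ((sx - fx) ** 2 + (sy - fy) ** 2) ** 0.5 <= tol:
                k += 1
                del remaining[idx]
                break
    return k
```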
  • Step 703 the parking space detection device determines the parking space information according to the m pieces of second parking space point information and/or the n pieces of first parking space point information.
  • the parking space detection device can determine the parking space information according to the m second parking space point information; or, can determine the parking space information according to the n first parking space point information; or can determine the parking space information according to the m second parking space point information and the n first The parking space point information determines the parking space information.
  • Step 704 the parking space detection device determines the parking space information according to the n pieces of first parking space point information.
  • for example, the parking space detection device can eliminate the m pieces of second parking space point information and determine the parking space information according to the n pieces of first parking space point information.
  • here, the two coinciding parking space points refer to the two parking space points forming the entrance line.
  • alternatively, a new parking space may be reselected.
  • when the second parking space point information in FIG. 7 is obtained based on the method shown in FIG. 5, the stored third parking space point information replaced by the first parking space point information can be used to determine k.
  • the coordinates of the second parking space point can be used as the coordinates of the parking space point of the parking space; further, the parking space frame, center coordinates, inclination angle, etc. of the parking space can be determined based on the second parking space point information.
  • if the parking space detection device finds, from the stored third parking space point information replaced by the first parking space point information, that no more than two (that is, k ≤ 2) pieces of third parking space point information were replaced by first parking space point information, it can determine the parking space information based on the above step 704.
  • FIG. 8a is a schematic flowchart of a method for determining parking space information of a parking space based on n pieces of first parking space point information and m pieces of second parking space point information provided by the present application.
  • the method includes the following steps:
  • step 801 the parking space detection device acquires n pieces of first parking space point information and m pieces of second parking space point information belonging to the same parking space frame.
  • the first parking space point information includes the coordinates of the first parking space point and the first confidence degree corresponding to the coordinates of the first parking space point;
  • the second parking space point information includes the coordinates of the second parking space point and the corresponding second degree of confidence.
  • Step 802 the parking space detection device determines whether the coordinates of the n first parking space points and the coordinates of the m second parking space points coincide with the coordinates of at least three parking space points; if yes, perform step 803; if not, perform step 804 .
  • that is, the parking space detection device determines that the coordinates of the n first parking space points and the coordinates of the m second parking space points have k coinciding parking space point coordinates; if k ≥ 3, perform the following step 803; if k < 3, perform the following step 804.
  • for example, each second parking space point among the coordinates of the m second parking space points is matched one by one against the coordinates of the n first parking space points; or, each first parking space point among the coordinates of the n first parking space points is matched one by one against the coordinates of the m second parking space points.
  • coincidence does not mean absolute complete coincidence, and a certain error range may be allowed. For example, the error can be 20px.
  • Step 803 for the coordinates of each first parking space point among the n first parking space points and the corresponding coordinates of the second parking space point, the parking space detection device performs a weighted average of the coordinates of the first parking space point and the corresponding coordinates of the second parking space point according to the first confidence degree and the second confidence degree, obtains the weighted-averaged coordinates of the parking space point, and determines the parking space information according to the weighted-averaged coordinates.
  • for example, the coordinates of the weighted-averaged parking space point = (the coordinates of the first parking space point × the first confidence degree corresponding to the coordinates of the first parking space point + the coordinates of the corresponding second parking space point × the second confidence degree corresponding to the coordinates of the second parking space point) / 2.
  • equivalently, the coordinates of the weighted-averaged parking space point = (the coordinates of the second parking space point × the second confidence degree corresponding to the coordinates of the second parking space point + the coordinates of the corresponding first parking space point × the first confidence degree corresponding to the coordinates of the first parking space point) / 2.
  • for example, the first parking space point information includes: the coordinates (x11, y11) of a first parking space point, whose corresponding first confidence degree is A11;
  • the coordinates (x12, y12) of a first parking space point, whose corresponding first confidence degree is A12;
  • the coordinates (x13, y13) of a first parking space point, whose corresponding first confidence degree is A13;
  • and the coordinates (x14, y14) of a first parking space point, whose corresponding first confidence degree is A14.
  • the second parking space point information includes: the coordinates (x21, y21) of a second parking space point, whose corresponding second confidence degree is A21; the coordinates (x22, y22), whose corresponding second confidence degree is A22; the coordinates (x23, y23), whose corresponding second confidence degree is A23; and the coordinates (x24, y24), whose corresponding second confidence degree is A24. According to the first confidence degrees and the second confidence degrees, performing a weighted average of the coordinates of the 4 first parking space points and the coordinates of the 4 second parking space points gives the weighted-averaged coordinates of the parking space points as:
  • ((x11×A11 + x21×A21)/2, (y11×A11 + y21×A21)/2), ((x12×A12 + x22×A22)/2, (y12×A12 + y22×A22)/2), ((x13×A13 + x23×A23)/2, (y13×A13 + y23×A23)/2) and ((x14×A14 + x24×A24)/2, (y14×A14 + y24×A24)/2).
  • the parking space information can be determined according to the weighted averaged coordinates of the parking space points.
  • the parking space information includes the coordinates of the parking space points, and the coordinates of the parking space points after the weighted average can be the coordinates of the parking space points in the determined parking space information.
  • the parking space information also includes the parking space frame information, that is, the length and width of the parking space frame, the central coordinates of the parking space frame, and the inclination angle of the parking space frame can be determined according to the coordinates of the parking space points after weighted average.
  • the first parking space point information and the second parking space point information used to represent the same physical parking space point of the parking space correspond to each other.
  • for example, the first parking space point information used to represent the physical parking space point in the upper left corner corresponds to the second parking space point information used to represent that same physical parking space point.
  • for example, the coordinates (x11, y11) of the first parking space point and the coordinates (x21, y21) of the second parking space point correspond to the same physical parking space point;
  • the coordinates (x12, y12) of the first parking space point and the coordinates (x22, y22) of the second parking space point correspond to the same physical parking space point; the coordinates (x13, y13) of the first parking space point and the coordinates (x23, y23) of the second parking space point correspond to the same physical parking space point;
  • and the coordinates (x14, y14) of the first parking space point and the coordinates (x24, y24) of the second parking space point correspond to the same physical parking space point.
  • as another example, the first parking space point information includes: the coordinates (x11, y11) of a first parking space point, whose corresponding first confidence degree is A11;
  • the coordinates (x12, y12) of a first parking space point, whose corresponding first confidence degree is A12;
  • and the coordinates (x13, y13) of a first parking space point, whose corresponding first confidence degree is A13.
  • the second parking space point information includes: the coordinates (x21, y21) of a second parking space point, whose corresponding second confidence degree is A21; the coordinates (x22, y22), whose corresponding second confidence degree is A22; the coordinates (x23, y23), whose corresponding second confidence degree is A23; and the coordinates (x24, y24), whose corresponding second confidence degree is A24. According to the first confidence degrees and the second confidence degrees, performing a weighted average of the coordinates of the 3 first parking space points and the corresponding coordinates of the 3 second parking space points gives the weighted-averaged coordinates of the parking space points as:
  • ((x11×A11 + x21×A21)/2, (y11×A11 + y21×A21)/2), ((x12×A12 + x22×A22)/2, (y12×A12 + y22×A22)/2) and ((x13×A13 + x23×A23)/2, (y13×A13 + y23×A23)/2).
  • the parking space information can be determined according to the weighted averaged coordinates of the parking space points.
  • the parking space information includes the coordinates of the parking space points, and the weighted average coordinates of the three parking space points can be the coordinates of the three parking space points in the determined parking space information.
  • the coordinates of the remaining one parking space point can be calculated based on the weighted average coordinates of the three parking space points; or, the coordinates (x24, y24) of the second parking space point can be directly used as the coordinates of the fourth parking space point in the parking space information .
  • the parking space information also includes the parking space frame information, that is, the length and width of the parking space frame, the central coordinates of the parking space frame, and the inclination angle of the parking space frame can be determined according to the coordinates of the parking space points after weighted average.
  • the coordinates of the weighted averaged parking space point can be further weighted and averaged.
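The weighted average of step 803 can be sketched as follows. This is an illustrative Python sketch that follows the formula given above literally, including the division by 2 rather than by the sum of confidences; all names are hypothetical.

```python
def weighted_point(p1, conf1, p2, conf2):
    """Weighted average of one first parking space point and the
    corresponding second parking space point, following the formula in
    the text: (coord1 * conf1 + coord2 * conf2) / 2.

    Note the division by 2 rather than by (conf1 + conf2), mirroring
    the description above.
    """
    return ((p1[0] * conf1 + p2[0] * conf2) / 2.0,
            (p1[1] * conf1 + p2[1] * conf2) / 2.0)

def fuse_coincident(pairs):
    """pairs: list of ((x1, y1), conf1, (x2, y2), conf2) for coincident
    parking space points; returns the fused parking space point list."""
    return [weighted_point(p1, c1, p2, c2) for p1, c1, p2, c2 in pairs]
```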
  • Step 804 the parking space detection device determines whether the coordinates of the n first parking space points and the coordinates of the m second parking space points have two coinciding parking space point coordinates; if yes, perform the following step 805; if not, it indicates that the accuracy of at least one of the first parking space point information or the first parking space frame information is poor, and a new parking space can be sought.
  • that is, the parking space detection device determines whether k is equal to 2. If yes, perform the following step 805; if not, it indicates that the accuracy of at least one of the first parking space point information or the first parking space frame information is poor; here, a new parking space can be sought, or the method shown in FIG. 8b below can be executed.
  • the two overlapping parking space points refer to the two parking space points forming the entrance line.
  • Step 805 the parking space detection device determines the parking space information of the parking space according to the n pieces of first parking space point information.
  • for example, the parking space detection device may eliminate the m pieces of second parking space point information and determine the first parking space point information as the parking space information of the parking space.
  • in this way, the parking space information can be determined based on the first parking space point information with higher accuracy rather than the second parking space point information, thereby improving the accuracy of the parking space information.
  • when the second parking space point information in FIG. 8a is obtained based on the method shown in FIG. 5, the stored third parking space point information replaced by the first parking space point information can be used to determine k.
  • for example, if the parking space detection device finds, from the stored third parking space point information replaced by the first parking space point information, that four pieces of third parking space point information were replaced by first parking space point information, it can determine the parking space information directly according to the second parking space point information obtained after step 505 in FIG. 5; step 803 and step 705 do not need to be executed.
  • for example, the coordinates of the second parking space point can be used as the coordinates of the parking space point of the parking space; further, the parking space frame, center coordinates, inclination angle, etc. of the parking space can be determined based on the second parking space point information.
  • when the parking space detection device determines that the coordinates of the n first parking space points and the coordinates of the m second parking space points have fewer than two coinciding parking space points (that is, only one parking space point coordinate coincides, or no parking space point coordinates coincide), the method shown in FIG. 8b can be referred to in order to determine the parking space information of the parking space.
  • FIG. 8b is a schematic flowchart of another method for determining parking space information of a parking space based on the first parking space point information and the first parking space frame information provided by the present application.
  • the method includes the following steps:
  • Step 811 if the coordinates of the n first parking space points and the coordinates of the m second parking space points have only one coinciding parking space point or no coinciding parking space point, the parking space detection device determines the first average value of the n first confidence degrees and the second average value of the m second confidence degrees.
  • the parking space detection device determines the first average value of n first confidence levels and the second average value of m second confidence levels.
  • for example, the n first confidence degrees are A11, A12, A13 and A14, and the m second confidence degrees are A21, A22, A23 and A24; then the first average value is (A11+A12+A13+A14)/4, and the second average value is (A21+A22+A23+A24)/4.
  • Step 812 the parking space detection device determines whether the first average value is greater than or equal to the threshold; if greater than or equal to, perform step 813 ; if less, perform step 814 .
  • the threshold may be 0.7, for example.
  • Step 813 the parking space detection device determines the parking space information according to the n pieces of first parking space point information.
  • for example, the parking space detection device can eliminate the m pieces of second parking space point information and determine the first parking space point information as the parking space information of the parking space.
  • Step 814 the parking space detection device determines whether the second average value is greater than or equal to the threshold; if greater than or equal to, execute step 815; if less, it indicates that the first parking space point information and the second parking space point information are inaccurate, and a new parking space can be sought.
  • the threshold may be 0.7, for example.
  • Step 815 the parking space detection device determines the parking space information of the parking space according to the m pieces of second parking space point information.
  • for example, the parking space detection device may eliminate the n pieces of first parking space point information and determine the second parking space point information as the parking space information of the parking space.
  • in this way, the parking space information can be determined based on the parking space points with higher confidence, and prediction results with lower confidence can be filtered out, thereby improving the accuracy of the parking space information.
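The decision rule of steps 811 to 815 can be sketched as follows. This is an illustrative Python sketch; the threshold 0.7 is the example value given in the text, and the function and return values are hypothetical.

```python
def select_source(first_confs, second_confs, threshold=0.7):
    """Steps 811-815 as a decision rule: with fewer than two coinciding
    points, compare the average confidences against a threshold to
    decide which parking space point information to keep.

    Returns "first", "second", or "none" (search for a new parking space).
    """
    first_avg = sum(first_confs) / len(first_confs)
    second_avg = sum(second_confs) / len(second_confs)
    if first_avg >= threshold:
        return "first"      # keep the n first parking space points
    if second_avg >= threshold:
        return "second"     # keep the m second parking space points
    return "none"           # both unreliable: look for a new parking space
```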
  • when the confidence of the first parking space point is low, the first parking space point information can be filtered out and the first parking space frame information used to determine the parking space information, so that the recall rate of parking space detection can be improved in scenarios where parking space point detection fails; when the confidence of the first parking space frame is low, the first parking space frame information can be filtered out and the first parking space point information used to determine the parking space information, so that the recall rate of parking space detection can also be improved in scenarios where parking space frame detection fails.
• The following takes this network structure as an example.
• The network structure includes a feature extraction model, a parking space point analysis model, a parking space frame transition model, and a parking space frame analysis model. The feature extraction model, the parking space point analysis model, the parking space frame analysis model, and the parking space frame transition model may be integrated in one functional module, or may be integrated in different functional modules, which is not limited in this application.
  • the total loss function of the network structure is the linear superposition of the loss functions of the parking space point analysis model and the parking space frame analysis model.
• The network structure may be, for example, ResNet-50 or Visual Geometry Group network 16 (VGG-16), which is not limited in this application.
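The relationship between the models can be illustrated with a minimal sketch: one shared feature extractor feeds a parking-space-point head and a parking-space-frame head, and the total loss is a linear superposition of the two head losses. Everything here is a stand-in introduced for illustration; a real implementation would use a CNN backbone such as ResNet-50 or VGG-16 and learned heads, while this toy version only shows the structure of the joint loss.

```python
def extract_features(image):
    # Stand-in for the shared backbone (e.g. ResNet-50); a real model
    # would return a feature map, not a list of row sums.
    return [sum(row) for row in image]


def point_head_loss(features, point_labels):
    # Toy loss for the parking space point analysis head (L1-style).
    return sum(abs(f - y) for f, y in zip(features, point_labels))


def frame_head_loss(features, frame_labels):
    # Toy loss for the parking space frame analysis head (L2-style).
    return sum((f - y) ** 2 for f, y in zip(features, frame_labels))


def total_loss(image, point_labels, frame_labels, w_point=1.0, w_frame=1.0):
    # Total loss of the network structure: a linear superposition of the
    # point-head and frame-head losses, so both heads are trained jointly
    # on features from the same extractor.
    feats = extract_features(image)
    return (w_point * point_head_loss(feats, point_labels)
            + w_frame * frame_head_loss(feats, frame_labels))
```

The weights `w_point` and `w_frame` are the coefficients of the linear superposition; the patent does not specify their values.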
  • the method includes the following steps:
  • Step 901 the parking space detection device acquires a panoramic image of the parking space.
  • step 902 the parking space detection device extracts feature information of the panoramic image of the parking space.
• The feature information of the panoramic image may represent image information such as points, lines, and/or circles.
• Step 903 the parking space detection device inputs the feature information of the panoramic image of the parking space into the parking space point analysis model to identify parking space point information, obtaining the first parking space point information; and inputs the feature information of the panoramic image of the parking space into the parking space frame transition model to identify parking space frame transition information, obtaining the parking space frame transition information.
• The parking space point analysis model and the parking space frame transition model are parallel models at the same level.
• The parking space point analysis model and the parking space frame transition model take the same feature information of the panoramic image as input; that is, the acquired feature information of the panoramic image is input to both models. Therefore, the model used in the above step 902 to extract the feature information of the panoramic image of the parking space may be a relatively generalized model, and at the same detection accuracy, a relatively generalized model reduces the demand for the amount of original data.
  • the first parking space point information includes but not limited to the coordinates of the first parking space point and the first confidence level corresponding to the coordinates of the first parking space point.
• In step 903, the feature information of the panoramic image of the parking space is input into the parking space frame transition model.
  • the transition model of the parking space frame can output the transition information of the parking space frame.
  • the transition information of the parking space frame can be, for example, line information or other information in the parking space frame.
  • Step 904 the parking space detection device inputs the parking space frame transition information and the first parking space point information into the parking space frame analysis model, identifies the parking space frame information, and obtains the first parking space frame information.
• The first parking space frame information includes but is not limited to the center coordinates of the first parking space frame, the confidence corresponding to the center coordinates, the size (such as length and width) of the first parking space frame, the confidence corresponding to the size, the inclination angle of the first parking space frame, and the confidence corresponding to the inclination angle.
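For readability, the fields enumerated above could be collected in a structure such as the following; the class name and field names are assumptions introduced purely for illustration and do not appear in the patent.

```python
from dataclasses import dataclass


@dataclass
class ParkingFrameInfo:
    """Illustrative container for first parking space frame information."""
    center: tuple          # (x, y) center coordinates of the frame
    center_conf: float     # confidence of the center coordinates
    size: tuple            # (length, width) of the frame
    size_conf: float       # confidence of the size
    angle: float           # inclination angle of the frame, in radians
    angle_conf: float      # confidence of the inclination angle
```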
• The first parking space point information serves as part of the input for determining the first parking space frame information, which can reduce the training difficulty of the parking space frame analysis model and helps to improve the accuracy of the first parking space frame information.
• The first parking space point information input into the parking space frame analysis model in step 904 may be some intermediate features used to characterize the first parking space point information.
  • Step 905 the parking space detection device determines the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
  • the parking space detection device can obtain the second parking space point information according to the first parking space frame information.
• For example, the second parking space point information may be obtained based on the foregoing manner in FIG. 5, or the first parking space frame information may be directly converted into the second parking space point information.
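A direct conversion of the first parking space frame information (center, size, inclination angle) into four corner points, i.e. second parking space point information, can be sketched as below. The function name and the corner ordering are assumptions; the patent does not prescribe this exact geometry.

```python
import math


def frame_to_corner_points(cx, cy, length, width, angle_rad):
    """Return the four corners of a rotated parking space frame.

    (cx, cy) is the frame center, (length, width) its size, and
    angle_rad its inclination angle. Corners are returned in a fixed
    counter-clockwise order starting from the local (+length, +width)
    corner.
    """
    half_l, half_w = length / 2.0, width / 2.0
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    corners = []
    for dx, dy in ((half_l, half_w), (-half_l, half_w),
                   (-half_l, -half_w), (half_l, -half_w)):
        # Rotate the local corner offset by the inclination angle,
        # then translate by the frame center.
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners
```

For an unrotated 4 m x 2 m frame centered at the origin, this yields the corners (2, 1), (-2, 1), (-2, -1), (2, -1).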
  • the parking space detection device may determine the parking space information of the parking space according to the first parking space point information and the second parking space point information.
• The acquisition of the first parking space point information and the acquisition of the first parking space frame information in the above steps can be regarded as two parallel tasks that are both based on the feature information extracted from the same parking space image; in this way, the feature information of the panoramic image of the parking space can be extracted by a more generalized model.
• In addition, the output of the intermediate task of obtaining the first parking space point information is used as an input of the task of obtaining the first parking space frame information, so that the accuracy of the obtained first parking space frame information can be improved.
• More accurate local information of the parking space can be obtained through the first parking space point information, and more accurate global information of the parking space can be obtained through the first parking space frame information; by fusing the first parking space point information with the second parking space point information obtained based on the first parking space frame information, parking space information that is accurate both locally and globally can be obtained.
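One possible way to fuse the first parking space point information (accurate locally) with the frame-derived second parking space point information (accurate globally) is a confidence-weighted average of nearest point pairs. The patent leaves the fusion rule open, so this sketch is only one plausible choice, with all names being assumptions.

```python
def fuse_points(first_points, second_points):
    """Fuse two sets of parking space points.

    Both arguments are lists of ((x, y), confidence). Each directly
    detected first point is paired with the nearest frame-derived
    second point (by squared Euclidean distance), and the pair is
    averaged with confidence weights.
    """
    fused = []
    for (p, cp) in first_points:
        # Nearest frame-derived point to p.
        (q, cq) = min(
            second_points,
            key=lambda e: (e[0][0] - p[0]) ** 2 + (e[0][1] - p[1]) ** 2)
        w = cp + cq
        # Confidence-weighted average of the paired coordinates.
        fused.append(((p[0] * cp + q[0] * cq) / w,
                      (p[1] * cp + q[1] * cq) / w))
    return fused
```

With equal confidences the fused point is simply the midpoint of the pair; a higher-confidence point pulls the result toward itself.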
  • the parking space information can also be sent to the planning control device (such as the control system 230 in FIG. 2b ), so that the planning control device can control the movement of the vehicle.
  • the planning control device may be integrated into the vehicle, or may also be integrated into the cloud server shown above, or may also be an independent device, etc., which is not limited in this application.
  • the planning control device may plan the parking route according to the parking space information of the parking space, obtain the parking route, and control the vehicle to park according to the parking route.
  • FIG. 10 it is a schematic flowchart of another parking space detection method provided by the present application.
  • This method can be applied to the parking space management server 102 in the above-mentioned FIG. 2a, or to the vehicle in the above-mentioned FIG. 2b.
  • the parking space management server and the processor can be collectively referred to as a parking space detection device.
  • the parking space detection device may be the parking space management server 102 shown in Figure 2a above, or may be the vehicle shown in Figure 2b above.
  • the method includes the following steps:
  • Step 1001 the parking space detection device acquires the image information of the parking space.
• For step 1001, reference may be made to the introduction of the aforementioned step 301, which is not repeated here.
  • Step 1002 the parking space detection device acquires first parking space point information according to the image information of the parking space.
• For step 1002, reference may be made to the introduction of the aforementioned step 302, which is not repeated here.
  • Step 1003 the parking space detection device determines the first parking space frame information according to the image information of the parking space and the first parking space point information.
• Step 1004 the parking space detection device determines the parking space information of the parking space according to the first parking space point information and/or the first parking space frame information.
• This step 1004 can be understood as follows: the parking space detection device can determine the parking space information of the parking space according to the first parking space point information; or determine the parking space information according to the first parking space frame information; or determine the parking space information according to the first parking space point information and the first parking space frame information.
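The three branches of step 1004 can be summarized in a small dispatch function. The branch conditions (which input is available or trusted) and all names are illustrative assumptions, not part of the patent.

```python
def determine_parking_info(point_info=None, frame_info=None):
    """Pick the determination branch based on the available inputs."""
    if point_info is not None and frame_info is not None:
        # Both feature types available: fuse them (local + global accuracy).
        return ("fused", point_info, frame_info)
    if point_info is not None:
        # Frame information invalid, e.g. the frame was not fully photographed.
        return ("points_only", point_info)
    if frame_info is not None:
        # Point information invalid, e.g. points occluded or paint worn.
        return ("frame_only", frame_info)
    return None  # nothing usable: no parking space information
```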
  • the accuracy of the first parking space frame information can be improved, thereby improving the accuracy of the parking space information.
• When the parking space information of the parking space is determined by the first parking space point information and the first parking space frame information, the parking space information can still be determined even if the first parking space point information is invalid (for example, the parking space points are blocked or the paint is worn) or the first parking space frame information is invalid (for example, the parking space frame has not been photographed completely).
  • the detecting device includes hardware structures and/or software modules corresponding to each function.
• The present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the modules and method steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 11 and FIG. 12 are schematic structural diagrams of a possible detection device provided in the present application. These detection devices can be used to realize the functions in the above method embodiments, and thus can also realize the beneficial effects possessed by the above method embodiments.
• The detection device can be the parking space management server 102 in Figure 2a, or the vehicle in Figure 2b above, or a module (such as a chip) applied to a cloud server, or a module (such as a chip) applied to a vehicle.
  • the detection device 1100 includes an acquisition module 1101 and a processing module 1102 .
  • the detection device 1100 is used to implement the functions in any one of the method embodiments shown in FIG. 3 to FIG. 10 above.
• For example, the acquisition module 1101 is used to acquire the image information of the parking space; the processing module 1102 is used to determine the first parking space point information according to the image information of the parking space, determine the first parking space frame information according to the image information of the parking space, and determine the parking space information of the parking space according to the first parking space point information and the first parking space frame information.
• The acquisition module 1101 and the processing module 1102 in the embodiment of the present application may be implemented by a processor or processor-related circuit components.
  • the present application further provides a detection device 1200 .
  • the detection device 1200 may include a processor 1201 .
  • the detection device 1200 may further include a memory 1202 for storing instructions executed by the processor 1201 or storing input data required by the processor 1201 to execute the instructions or storing data generated by the processor 1201 after executing the instructions.
  • the processor 1201 is used to execute the functions of the acquisition module 1101 and the processing module 1102 described above.
• The processor in the embodiments of the present application may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
• Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
• An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC can be located in the parking space detection device.
  • the processor and the storage medium may also exist in the parking space detection device as discrete components.
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
• When implemented using software, the above embodiments may be implemented in whole or in part in the form of a computer program product.
• A computer program product includes one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on a computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a parking space detection device, user equipment or other programmable devices.
• Computer programs or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means.
  • a computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
• Available media can be magnetic media, such as floppy disks, hard disks, and magnetic tapes; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).
  • “plurality” means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship; in the formulas of this application, the character “/” indicates that the contextual objects are a “division” Relationship.
• The word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" is not to be construed as preferred or advantageous over other embodiments or designs; rather, the word is used to present a concept in a specific manner and does not constitute a limitation on the application.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a parking space detection method and device, applicable to the fields of autonomous driving, assisted driving, three-dimensional modeling, and the like, and used to solve the problem of low accuracy of parking space information in the prior art. The parking space detection method may comprise the steps of: obtaining image information of a parking space; determining first parking space point information according to the image information of the parking space; determining first parking space frame information according to the image information of the parking space; and determining parking space information of the parking space according to the first parking space point information and the first parking space frame information. The parking space information is determined by means of the first parking space point information and the first parking space frame information, so that two different types of feature information can complement each other, and relatively accurate parking space information, from local to global, can thus be obtained.
PCT/CN2021/101613 2021-06-22 2021-06-22 Parking space detection method and device WO2022266854A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
• PCT/CN2021/101613 WO2022266854A1 (fr) Parking space detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
• PCT/CN2021/101613 WO2022266854A1 (fr) Parking space detection method and device

Publications (1)

Publication Number Publication Date
WO2022266854A1 true WO2022266854A1 (fr) 2022-12-29

Family

ID=84543858

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101613 WO2022266854A1 (fr) Parking space detection method and device

Country Status (1)

Country Link
WO (1) WO2022266854A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117012053A (zh) * 2023-09-28 2023-11-07 Dongfeng Yuexiang Technology Co., Ltd. Post-optimization method, *** and storage medium for parking space detection points

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
• CN109685000A (zh) * 2018-12-21 2019-04-26 Guangzhou Xiaopeng Motors Technology Co., Ltd. Vision-based parking space detection method and device
• CN110969655A (zh) * 2019-10-24 2020-04-07 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus, device, storage medium and vehicle for detecting parking spaces
• CN111191485A (zh) * 2018-11-14 2020-05-22 Guangzhou Automobile Group Co., Ltd. Parking space detection method and *** thereof, and automobile
• CN111881874A (zh) * 2020-08-05 2020-11-03 Beijing Siwei Zhilian Technology Co., Ltd. Parking space recognition method, device and ***
• CN112509354A (zh) * 2020-12-08 2021-03-16 Guangzhou Xiaopeng Autopilot Technology Co., Ltd. Parking space detection method and device, vehicle, and readable medium


Similar Documents

Publication Publication Date Title
• CN110148185B (zh) Method, apparatus and electronic device for determining coordinate system conversion parameters of an imaging device
• CN113168708B (zh) Lane line tracking method and apparatus
• CN110543814B (zh) Traffic light recognition method and apparatus
• WO2021184218A1 (fr) Relative pose calibration method and related apparatus
• WO2021227645A1 (fr) Target detection method and device
• CN107636679B (zh) Obstacle detection method and apparatus
US20200232800A1 (en) Method and apparatus for enabling sequential groundview image projection synthesis and complicated scene reconstruction at map anomaly hotspot
• CN114667437A (zh) Map creation and localization for autonomous driving applications
US20160203373A1 (en) System and method for mapping, localization, and pose correction of a vehicle based on images
• CN112740268B (zh) Target detection method and apparatus
• EP4033324B1 (fr) Obstacle information detection method and device for mobile robot
• CN113228135B (zh) Blind area image acquisition method and related terminal apparatus
• WO2020186444A1 (fr) Object detection method, electronic device, and computer storage medium
• CN113591518B (zh) Image processing method, network training method, and related devices
• CN113850867A (zh) Camera parameter calibration and device control method, apparatus, device and storage medium
• CN115220449B (zh) Path planning method, apparatus, storage medium, chip and vehicle
• CN114240769A (zh) Image processing method and apparatus
• CN116997771A (zh) Vehicle and positioning method, apparatus, device, and computer-readable storage medium therefor
• CN115265561A (zh) Vehicle positioning method, apparatus, vehicle, and medium
• WO2022266854A1 (fr) Parking space detection method and device
• CN108416044B (zh) Scene thumbnail generation method and apparatus, electronic device, and storage medium
• CN114167404A (zh) Target tracking method and apparatus
• CN115205311B (zh) Image processing method and apparatus, vehicle, medium, and chip
• CN115891984A (zh) Intelligent driving ***, method, device and medium
• WO2021159397A1 (fr) Detection method and detection device for a region drivable by a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946356

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21946356

Country of ref document: EP

Kind code of ref document: A1