WO2024109079A1 - Parking space opening detection method and device - Google Patents

Parking space opening detection method and device

Info

Publication number
WO2024109079A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
information
model
vehicle
data
Application number
PCT/CN2023/104957
Other languages
English (en)
French (fr)
Inventor
刘广峰 (Liu Guangfeng)
苏潇然 (Su Xiaoran)
姜雨 (Jiang Yu)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2024109079A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06 - Automatic manoeuvring for parking
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01D - MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 - Measuring or testing not otherwise provided for
    • G01D21/02 - Measuring two or more variables by means not covered by a single other subclass
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas

Definitions

  • the present application relates to the field of vehicle technology, and in particular to a parking space opening detection method and device.
  • APA: automated parking assist
  • RPA: remote parking assist
  • AVP: automated valet parking
  • the embodiments of the present application provide a parking space opening detection method and device for improving the accuracy of parking space opening detection and improving the generalization ability of a parking space detection model.
  • the embodiments of the present application provide a parking space opening detection method, which can be implemented by a parking space opening detection device.
  • the parking space opening detection device can be an independent device, a chip or component in a device, or a software module, which can be deployed on a vehicle, or in an on-board device or smart device on the vehicle.
  • the embodiments of the present application do not limit the product form of the parking space opening detection device.
  • the method may include: obtaining at least one item of feature information of a parking space; and determining parking space information of the parking space based on the at least one item of feature information and a parking space detection model, wherein the parking space information includes the parking space opening of the parking space.
  • the parking space detection model may specifically be a parking space opening detection model.
  • the parking space detection model may be a decision tree model.
  • the parking space opening detection device can predict the parking space opening based on the different parking space feature information extracted for the parking space and the preset parking space detection model, so as to assist in determining whether the parking space can be parked and from which entrance the vehicle should enter the parking space.
  • This method helps to improve the accuracy of parking space opening detection and improve the generalization ability of the parking space detection model.
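The flow described above (extracting feature information for a candidate parking space, then feeding it to a preset parking space detection model that outputs the parking space opening) can be sketched in Python. The feature names, the opening labels, and the trivial rule standing in for the trained model are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical feature record for one detected parking space;
# the field names are illustrative, not taken from the patent text.
@dataclass
class SpotFeatures:
    length_m: float           # parking space length
    width_m: float            # parking space width
    dist_to_vehicle_m: float  # distance between the space and the ego vehicle
    left_occupied: bool       # is the neighbouring space on the left occupied?
    right_occupied: bool      # is the neighbouring space on the right occupied?

# Candidate opening sides, named after the sides of the frame in FIG. 1a.
OPENINGS = ("P1P2", "P2P3", "P3P4", "P4P1")

def predict_opening(f: SpotFeatures) -> str:
    """Stand-in for the preset parking space detection model: maps the
    feature information of a space to its predicted opening side."""
    # A real system would feed the feature vector to a trained decision
    # tree ensemble; a trivial hand-written rule stands in for it here.
    if f.width_m > f.length_m and not f.left_occupied:
        return "P4P1"  # wide space with a free left side
    if f.width_m > f.length_m and not f.right_occupied:
        return "P2P3"  # wide space with a free right side
    return "P1P2"      # default: enter across the entrance line

features = SpotFeatures(length_m=5.2, width_m=2.5, dist_to_vehicle_m=3.1,
                        left_occupied=True, right_occupied=True)
print(predict_opening(features))  # → P1P2
```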
  • the at least one item of feature information includes at least one of the following feature information of the parking space: length, width, type, information of other parking spaces within a predetermined range around the parking space, obstacle information inside and outside the parking space, distance information between the parking space and the vehicle, and coordinate information of the parking space in the coordinate system of the vehicle. It should be understood that this is only an example of parking space feature information and not a limitation; in specific implementations, other parking space feature information can also be obtained according to business needs or application scenarios, which is not limited by the embodiments of the present application.
  • obtaining at least one feature information of a parking space may include: obtaining environmental information inside and/or outside the parking space through at least one sensor associated with the vehicle; and extracting the at least one feature information from the environmental information inside and/or outside the parking space.
  • the parking space opening detection device can obtain parking space feature information through different means, providing a flexible parking space detection method.
  • the method also includes: obtaining first prediction data based on test data and a first model; obtaining second prediction data based on test data and a second model, wherein the first model and the second model are deep learning models, the first model is trained using data from a first range, and the second model is trained using data from a second range, and the first range is larger than the second range; obtaining data of a target scene (such as a difficult scene or a diverse scene) based on the difference between the first prediction data and the second prediction data; obtaining a training data set based on the data of the target scene, and the training data set is used to train the parking space detection model.
  • the parking space opening detection device can use the same test data and different models to make predictions respectively, and use the prediction differences of the different models to obtain the data of the target scene, so as to enrich the training data set and improve the robustness of the parking space detection model in the embodiment of the present application.
  • the first model and the second model can be existing deep learning models.
  • the first model is trained with data from a larger range, so its model structure is more complex and it has more parameters; although its prediction accuracy is higher than that of the second model, its real-time performance is poorer than that of the second model when applied to the field of vehicle assisted driving.
  • relevant data of difficult scenarios can be supplemented, which helps to improve the robustness of the parking space detection model to be trained.
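A minimal sketch of the mining step described above: the same test data is run through both deep learning models, and samples on which the two predictions differ markedly are collected as target-scene (difficult or diverse) data. The models here are toy stand-ins, and the disagreement threshold is an assumption:

```python
# Illustrative hard-case mining: samples where the large "first" model
# and the small "second" model disagree are collected as target-scene
# data for the training set. The model calls are stubs.
def mine_target_scenes(test_data, first_model, second_model, threshold=0.5):
    target = []
    for sample in test_data:
        p1 = first_model(sample)   # prediction of the large-range model
        p2 = second_model(sample)  # prediction of the small-range model
        if abs(p1 - p2) > threshold:  # large disagreement: difficult scene
            target.append(sample)
    return target

# toy stand-ins for the two deep learning models
first_model = lambda x: 1.0 if x % 3 == 0 else 0.0
second_model = lambda x: 1.0 if x % 2 == 0 else 0.0

hard = mine_target_scenes(range(10), first_model, second_model)
print(hard)  # → [2, 3, 4, 8, 9]
```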
  • the parking space detection model includes a decision tree model.
  • the model parameters of the decision tree model include at least one of the following: the number of decision trees, the maximum depth of the decision tree, the number of categories, the learning rate, the positive sample weight/negative sample weight, the sample random sampling rate and the sampling frequency.
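As an illustration of the parameter list above, a hyperparameter set for a gradient-boosted decision tree ensemble might look as follows; the names follow common gradient-boosting conventions (XGBoost-style) and the values are assumptions, not taken from the patent:

```python
# Hypothetical hyperparameters for the decision tree ensemble;
# names and values are illustrative assumptions.
params = {
    "n_estimators": 200,      # number of decision trees
    "max_depth": 6,           # maximum depth of each decision tree
    "num_class": 4,           # number of categories (e.g. four opening sides)
    "learning_rate": 0.1,     # learning rate
    "scale_pos_weight": 1.0,  # positive sample weight / negative sample weight
    "subsample": 0.8,         # sample random sampling rate
    "sampling_freq": 1,       # sampling frequency (e.g. once per boosting round)
}
```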
  • an embodiment of the present application provides a parking space opening detection device, which may include: an acquisition unit, used to acquire at least one item of feature information of a parking space; and a determination unit, used to determine the parking space information of the parking space based on the at least one item of feature information and a parking space detection model, wherein the parking space information includes the parking space opening of the parking space.
  • the at least one item of characteristic information includes at least one of the following characteristic information of the parking space: length, width, type, information of other parking spaces within a predetermined range around the parking space, obstacle information inside and outside the parking space, distance information between the parking space and the vehicle, and coordinate information of the parking space in the coordinate system of the vehicle.
  • the acquisition unit is specifically used to: acquire environmental information inside and/or outside the parking space through at least one sensor associated with the vehicle; and extract at least one item of feature information from the environmental information inside and/or outside the parking space.
  • the device may also include: a prediction unit, used to obtain first prediction data based on test data and a first model; obtain second prediction data based on test data and a second model, wherein the first model and the second model are deep learning models, the first model is trained using data from a first range, and the second model is trained using data from a second range, and the first range is larger than the second range; the acquisition unit is also used to obtain data of a target scene based on the difference between the first prediction data and the second prediction data; obtain a training data set based on the data of the target scene, and the training data set is used to train the parking space detection model.
  • the parking space detection model includes a decision tree model.
  • the model parameters of the decision tree model include at least one of the following: the number of decision trees, the maximum depth of the decision tree, the number of categories, the learning rate, the positive sample weight/negative sample weight, the sample random sampling rate and the sampling frequency.
  • an embodiment of the present application provides a communication device, comprising: a processor and a memory; the memory is used to store programs; the processor is used to execute the programs stored in the memory, so that the device implements the method described in the first aspect and any possible design of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores program code; when the program code runs on a computer, the computer executes the method described in the first aspect and any possible design of the first aspect.
  • an embodiment of the present application provides a computer program product; when the computer program product runs on a computer, the computer executes the method described in the first aspect and any possible design of the first aspect.
  • an embodiment of the present application provides a terminal device, including a unit for implementing the method described in the first aspect and any possible design of the first aspect.
  • the terminal device includes, but is not limited to: intelligent transportation equipment (such as cars, ships, drones, trains, trucks, etc.), intelligent manufacturing equipment (such as robots, industrial equipment, intelligent logistics, intelligent factories, etc.), and intelligent terminals (such as mobile phones, computers, tablet computers, PDAs, desktops, headphones, speakers, wearable devices, vehicle-mounted devices, etc.).
  • FIG. 1a shows a schematic diagram of a parking space frame provided by the present application.
  • FIG. 1b shows a schematic diagram of a parking space inclination angle provided by the present application.
  • FIG. 2a shows a schematic diagram of a system architecture provided by the present application.
  • FIG. 2b shows another schematic diagram of a system architecture provided by the present application.
  • FIG. 3 is a schematic flow chart of a parking space opening detection method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram showing the principle of an embodiment of the present application.
  • FIG. 5 is a schematic diagram showing generation of training data according to an embodiment of the present application.
  • FIG. 6 shows a schematic diagram of data mining in an embodiment of the present application.
  • FIG. 7 shows a schematic diagram of the use of parking space information in an embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a parking space opening detection device according to an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of a communication device according to an embodiment of the present application.
  • FIG. 1a is a schematic diagram of a parking space frame provided in the present application.
  • the parking space frame usually includes four sides.
  • P1P2 is called the entrance line
  • P1P4 and P2P3 can both be called dividing lines; they are the isolation boundaries between parking space 1 and other parking spaces (such as parking space 2), vacant areas, walls, or curbs.
  • the intersection of the entrance line and the dividing line can be called the entrance parking spot (or parking mark point), that is, P1 and P2 in Figure 1a are called entrance parking spots; P3 and P4 can be called non-entrance parking spots.
  • P1, P2, P3 and P4 can be collectively referred to as parking spots, and parking spots can also be understood as the corner points at the corners of the parking space frame. Therefore, parking spots can also be called parking corner points.
  • in FIG. 1a, the dividing line is perpendicular to the entrance line; parking spaces in which the dividing line is perpendicular to the entrance line can be called vertical parking spaces or parallel parking spaces.
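The perpendicularity condition above can be checked directly from the line directions; a minimal sketch, in which the tolerance value is an assumption:

```python
# Checking whether a dividing line is perpendicular to the entrance
# line, which distinguishes vertical/parallel parking spaces from
# inclined ones; the tolerance value is an assumption.
def is_perpendicular(entrance_vec, dividing_vec, tol=1e-6):
    # two vectors are perpendicular when their dot product is zero
    dot = entrance_vec[0] * dividing_vec[0] + entrance_vec[1] * dividing_vec[1]
    return abs(dot) <= tol

# entrance line along x, dividing line along y: perpendicular
print(is_perpendicular((1.0, 0.0), (0.0, 2.0)))  # → True
```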
  • in order to park the vehicle safely in the parking space frame, a wheel chock may be included in the parking space frame near the rear of the vehicle (e.g., near the side P3P4), so that the vehicle stops in time after entering the parking space frame, avoiding collisions with other obstacles (e.g., walls or other vehicles) and avoiding hindering the travel of other vehicles.
  • the wheel chock may be a separate type or an integrated type, which is not limited in the embodiments of the present application.
  • the parking space inclination angle (also called the inclination angle of the dividing line or the inclination angle of the parking space frame) refers to the angle between the dividing line and the x-axis of the image.
  • the angle between the parking space frame dividing line and the x-axis of the image is θ, as shown in FIG. 1b.
  • a parking space detection algorithm can, for example, use a template matching method to determine the parking space inclination angle, which is not limited in the embodiments of the present application.
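Given the parking spot coordinates in the image, the inclination angle can also be computed directly from the endpoints of a dividing line; a minimal sketch, with illustrative coordinates:

```python
import math

# Inclination angle of a dividing line: the angle between the line
# from an entrance spot (e.g. P1) to a non-entrance spot (e.g. P4)
# and the image x-axis, as in FIG. 1b.
def inclination_angle_deg(p_entrance, p_non_entrance):
    dx = p_non_entrance[0] - p_entrance[0]
    dy = p_non_entrance[1] - p_entrance[1]
    return math.degrees(math.atan2(dy, dx))

# dividing line from P1=(0, 0) to P4=(1, 1): 45 degrees
print(inclination_angle_deg((0.0, 0.0), (1.0, 1.0)))  # → 45.0
```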
  • the distance between the parking space and the vehicle refers to the distance between the center of the vehicle and the entrance parking spots.
  • distance S1 and distance S2 represent the distance between the parking space and the vehicle.
  • distance S3 and distance S4 can represent the distance between the parking space and the vehicle. It should be understood that in different method embodiments, the distance between the parking space and the vehicle can be customized according to the needs of the parking space detection algorithm. For example, the distance between the parking space and the vehicle can also be the distance between the center of the four sides of the parking space and the front of the vehicle.
  • different distance calculation methods can also be configured in the same detection algorithm, so that when the vehicle and the parking space are in different situations, the distance calculation method can be adaptively determined according to the actual situation to calculate the distance between the parking space and the vehicle.
  • the embodiments of the present application do not limit this.
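One of the configurable distance definitions mentioned above, the Euclidean distance from the vehicle center to the midpoint of the entrance line P1P2, can be sketched as follows (the coordinates are illustrative):

```python
import math

# Distance between a parking space and the vehicle, here defined as
# the Euclidean distance from the vehicle center to the midpoint of
# the entrance line P1P2 (one of several configurable definitions).
def space_to_vehicle_distance(vehicle_center, p1, p2):
    mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    return math.hypot(mid[0] - vehicle_center[0],
                      mid[1] - vehicle_center[1])

# entrance line midpoint is (3, 0); vehicle center at the origin
d = space_to_vehicle_distance((0.0, 0.0), (3.0, 4.0), (3.0, -4.0))
print(d)  # → 3.0
```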
  • Fig. 2a is a schematic diagram of a system architecture applicable to the present application.
  • the system may include a vehicle 101 and a vehicle management server 102 (or referred to as a parking space management server).
  • vehicle 101 refers to a vehicle with the function of collecting images of the surrounding environment and remote communication.
  • a sensor 1011 is provided on the vehicle 101, which can realize the collection of information about the surrounding environment of the vehicle.
  • the sensor can be, for example, an image acquisition device, and the image acquisition device can be, for example, at least one of a fisheye camera, a monocular camera, a depth camera, etc., which can be used to collect visual images of the parking space where the vehicle is to be parked, and can also be used to collect environmental information around the parking space.
  • for example, the sensors are set in the front, rear, left and right directions of the vehicle (as can be seen in conjunction with FIG. 2a), so as to realize the collection of environmental information in the front, rear, left and right directions of the vehicle.
  • the field of view angles of the four fisheye cameras can all be greater than 180 degrees, so as to realize the all-round capture of the surrounding environment of the parking space. It should be understood that the larger the field of view angle of the sensor 1011, the larger the range that the sensor can perceive.
  • the remote communication function of the vehicle 101 can generally be implemented by a communication module disposed on the vehicle 101, and the communication module may include, for example, a remote communication box (telematics box, TBOX) or a wireless communication system (see the introduction of the wireless communication system 244 in FIG. 2b below).
  • the vehicle management server 102 can assist the vehicle in realizing the parking space detection function.
  • the vehicle management server 102 can be a single server or a server cluster composed of multiple servers.
  • the vehicle management server 102 can also be a cloud server (also referred to as the cloud, a cloud-side server, a cloud controller, or an Internet of Vehicles server, etc.).
  • a cloud server is a general term for devices with data processing capabilities; for example, it may include physical devices such as a host or processor, virtual devices such as a virtual machine or container, and chips or integrated circuits.
  • the vehicle management server 102 may integrate all functions on an independent physical device, or may deploy different functions on different physical devices, which is not limited in this application.
  • one vehicle management server 102 may communicate with multiple vehicles 101.
  • the number of vehicles 101, vehicle management server 102, and sensors 1011 in the system architecture shown in Figure 2a above is only an example, and this application does not limit this.
  • the name of the vehicle management server 102 in the system is only an example, and the specific implementation may also have other possible names, for example, it may also be called a parking space opening detection device, and this application does not limit this.
  • the functions of the parking space opening detection device may also be deployed separately on different devices. For example, some functions of the parking space opening detection device may be deployed on the vehicle management server 102, and other functions may be deployed on the vehicle. This embodiment of the application does not limit this. It should be understood that the vehicle 101 in Figure 2a above may be the vehicle of Figure 2b below.
  • the vehicle can be configured to be a fully or partially automatic driving mode.
  • Components coupled to or included in the vehicle 200 may include a propulsion system 210, a sensor system 220, a control system 230, a peripheral device 240, a power supply 250, a computer system 260, and a user interface 270.
  • the components of the vehicle 200 can be configured to work in a manner interconnected with each other and/or with other components coupled to each system.
  • the power supply 250 can provide power to all components of the vehicle 200.
  • the computer system 260 can be configured to receive data from the propulsion system 210, the sensor system 220, the control system 230, and the peripheral device 240 and control them.
  • the computer system 260 can also be configured to generate a display of an image on the user interface 270 and receive input from the user interface 270.
  • vehicle 200 may include more, fewer or different systems, and each system may include more, fewer or different components.
  • systems and components shown may be combined or divided in any manner, and this application does not specifically limit this.
  • the propulsion system 210 can provide power movement for the vehicle 200.
  • the propulsion system 210 may include an engine/motor 214, an energy source 213, a transmission 212, and wheels/tires 211.
  • the propulsion system 210 may additionally or alternatively include other components in addition to the components shown in FIG2b. This application does not specifically limit this.
  • the sensor system 220 may include several sensors for sensing information about the environment in which the vehicle 200 is located. As shown in FIG. 2b, the sensors of the sensor system 220 may include a camera sensor 223. The camera sensor 223 may be used to capture multiple images of the surrounding environment of the vehicle 200. The camera sensor 223 may be a static camera or a video camera. Further, optionally, the sensor system 220 may also include a global positioning system (GPS) 226, an inertial measurement unit (IMU) 225, a laser radar, a millimeter wave radar, and a brake 221 for modifying the position and/or orientation of the sensor. The millimeter wave radar may use radio signals to sense targets in the surrounding environment of the vehicle 200.
  • the millimeter wave radar 224 may also be used to sense the speed and/or forward direction of the target.
  • the laser radar 224 may use lasers to sense targets in the environment in which the vehicle 200 is located.
  • GPS 226 may be any sensor for estimating the geographic location of the vehicle 200.
  • GPS 226 may include a transceiver 262 to estimate the position of vehicle 200 relative to the earth based on satellite positioning data.
  • computer system 260 may be used to use GPS 226 in conjunction with map data to estimate the road that vehicle 200 is traveling.
  • IMU 225 may be used to sense position and orientation changes of vehicle 200 based on inertial acceleration and any combination thereof.
  • the combination of sensors in IMU 225 may include, for example, an accelerometer and a gyroscope. In addition, other combinations of sensors in IMU 225 are also possible.
  • the control system 230 is for controlling the operation of the vehicle 200 and its components.
  • the control system 230 may include various components, including a steering unit 236, a throttle 235, a brake unit 234, a sensor fusion algorithm 233, a computer vision system 232, a route control system 231, and an obstacle avoidance system 237.
  • the steering unit 236 is operable to adjust the forward direction of the vehicle 200. For example, in one embodiment, it can be a steering wheel system.
  • the throttle 235 is used to control the operating speed of the engine 214 and thus control the speed of the vehicle 200.
  • the control system 230 may additionally or alternatively include other components in addition to the components shown in FIG. 2 b. This application does not specifically limit this.
  • the brake unit 234 is used to control the deceleration of the vehicle 200.
  • the brake unit 234 can use friction to slow down the wheel 211.
  • the brake unit 234 can convert the kinetic energy of the wheel 211 into electric current.
  • the brake unit 234 can also take other forms to slow down the rotation speed of the wheel 211 to control the speed of the vehicle 200.
  • the computer vision system 232 can operate to process and analyze the images captured by the camera sensor 223 to identify targets and/or features in the surrounding environment of the vehicle 200.
  • the targets and/or features may include traffic signals, road boundaries and obstacles.
  • the computer vision system 232 can use target recognition algorithms, structure from motion (SFM) algorithms, video tracking and other computer vision technologies.
  • the computer vision system 232 can be used to map the environment, track targets, estimate the speed of targets, and so on.
  • the route control system 231 is used to determine the driving route of the vehicle 200.
  • the route control system 231 can combine data from the sensor system 220, GPS 226 and one or more predetermined maps to determine the driving route (such as a parking route) for the vehicle 200.
  • Obstacle avoidance system 237 is used to identify, evaluate and avoid or otherwise cross potential obstacles in the environment of vehicle 200.
  • the control system 230 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • Peripheral device 240 can be configured to allow vehicle 200 to interact with external sensors, other vehicles and/or users.
  • peripheral device 240 can include, for example, wireless communication system 244, touch screen 243, microphone 242 and/or speaker 241.
  • Peripheral device 240 can additionally or alternatively include other components in addition to the components shown in FIG. 2 b. This application does not specifically limit this.
  • the peripheral device 240 provides a means for the user of the vehicle 200 to interact with the user interface 270.
  • the touch screen 243 can provide information to the user of the vehicle 200.
  • the user interface 270 can also operate the touch screen 243 to receive input from the user.
  • the peripheral device 240 can provide a means for the vehicle 200 to communicate with other devices located in the vehicle.
  • the microphone 242 can receive audio (e.g., voice commands or other audio input) from the user of the vehicle 200.
  • the speaker 241 can output audio to the user of the vehicle 200.
  • the wireless communication system 244 can communicate wirelessly with one or more devices directly or via a communication network.
  • the wireless communication system 244 can use 3G cellular communication, such as code division multiple access (CDMA), EVDO, global system for mobile communications (GSM)/general packet radio service (GPRS), or 4G cellular communication, such as long term evolution (LTE), or 5G cellular communication.
  • the wireless communication system 244 can communicate with a wireless local area network (WLAN) using WiFi.
  • the wireless communication system 244 can communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols are also possible, such as various vehicle communication systems; for example, the wireless communication system 244 may include one or more dedicated short range communications (DSRC) devices, which may support public and/or private data communications between vehicles and/or roadside stations.
  • Power supply 250 can be configured to provide power to some or all components of vehicle 200.
  • power supply 250 can include, for example, a rechargeable lithium-ion or lead-acid battery.
  • one or more battery packs can be configured to provide power.
  • Other power supply materials and configurations are also possible.
  • power supply 250 and energy source 213 can be implemented together, as in some all-electric vehicles.
  • the components of vehicle 200 can be configured to work in a manner interconnected with other components inside and/or outside their respective systems. To this end, the components and systems of vehicle 200 can be linked together communicatively via a system bus, a network, and/or other connection mechanisms.
  • the computer system 260 may include at least one processor 261 that executes instructions 2631 stored in a computer-readable medium such as a memory 263.
  • the computer system 260 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 200 in a distributed manner.
  • Processor 261 can be any conventional processor, such as a central processing unit (CPU). Alternatively, it can also be another general-purpose processor, a digital signal processor (DSP), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • the general-purpose processor can be a microprocessor or any conventional processor. It should be understood that the present application does not limit the number of sensors and processors included in the above-mentioned vehicle system.
  • FIG. 2b functionally illustrates the processor, memory, and other elements in the computer system 260
  • the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored in the same physical housing.
  • the memory can be a hard disk drive or other storage medium located in a housing different from the computer system 260. Therefore, references to a processor or computer will be understood to include references to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering assembly and the deceleration assembly, may each have their own processor that performs only calculations related to the functions specific to the component.
  • the processor may be located remote from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle and others are performed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • memory 263 may include instructions 2631 (e.g., program logic) that may be executed by processor 261 to perform various functions of vehicle 200, including those described above.
  • The memory 263 may also include additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the propulsion system 210, sensor system 220, control system 230, and peripheral device 240.
  • the memory 263 may also store data such as road maps, route information, and the vehicle's location, direction, speed, and other such vehicle data, as well as other information; this information may be used by the vehicle 200 and the computer system 260 during operation of the vehicle 200 in autonomous, semi-autonomous, and/or manual modes.
  • User interface 270 is used to provide information to or receive information from a user of vehicle 200.
  • user interface 270 may include one or more input/output devices within the set of peripherals 240, such as wireless communication system 244, touch screen 243, microphone 242, and speaker 241.
  • Computer system 260 may control functions of vehicle 200 based on input received from various subsystems (e.g., propulsion system 210, sensor system 220, and control system 230) and from user interface 270.
  • computer system 260 may utilize input from control system 230 in order to control steering unit 236 to avoid obstacles detected by sensor system 220 and obstacle avoidance system 237.
  • computer system 260 may be operable to provide control over many aspects of vehicle 200 and its subsystems.
  • one or more of the above-mentioned components may be installed or associated separately from the vehicle 200.
  • the memory 263 may exist partially or completely separately from the vehicle 200.
  • the above-mentioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 2b should not be understood as a limitation on the embodiments of the present application.
  • the above-mentioned vehicles include but are not limited to unmanned vehicles, intelligent vehicles (such as automated guided vehicles (AGV)), electric vehicles, digital vehicles, and intelligent manufacturing vehicles.
  • the parking space opening detection method provided in the present application can be applied to the fields of advanced driver assistance systems (ADAS), automatic driving systems or intelligent driving systems, and is particularly applicable to automatic-parking-related functions, such as automatic parking assist (APA) technology, remote parking assist (RPA) technology or automated valet parking (AVP) technology, etc.
  • the parking space information detected by the parking space opening detection method may include but is not limited to the parking space opening of the parking space, etc., and may also include the predicted parking space type.
  • the parking space opening detection method can also be applied to the use of more advanced functions using parking space information as a constraint, such as three-dimensional modeling based on parking space information, etc., which is not limited in the embodiments of the present application.
  • parking space detection is required during the automatic parking process.
  • parking space opening prediction and parking space type prediction are also required.
  • how to get rid of the limitations of usage scenarios and further improve the accuracy and generalization ability of these technologies remains an important issue that urgently needs to be solved.
  • the embodiments of the present application propose a parking space opening detection method and device, which are used to improve the accuracy of parking space opening detection and the generalization ability of the parking space detection model.
  • the method and the device are based on the same technical concept. Since the principles by which the method and the device solve the problem are similar, the implementations of the device and the method can refer to each other, and repeated descriptions are omitted.
  • the terms and/or descriptions between the various embodiments are consistent and can be referenced to each other, and the technical features in different embodiments can be combined to form new embodiments according to their internal logical relationships.
  • “At least one” refers to one or more.
  • “Plural” refers to two or more.
  • “And/or” describes the association relationship of associated objects, indicating that three relationships may exist.
  • A and/or B can represent: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the previous and next associated objects are in an “or” relationship.
  • “At least one of the following” or similar expressions refers to any combination of these items, including any combination of single or plural items.
  • At least one of a, b, or c can represent: a, b, c, a and b, a and c, b and c, or a and b and c, where a, b, c can be single or multiple.
  • the ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the priority or importance of multiple objects.
  • the first device and the second device are only used to distinguish different electronic devices, rather than to indicate the difference in priority or importance of the two devices.
  • the method steps performed by the first device and the second device can be interchangeable.
  • FIG3 shows a flow chart of a parking space opening detection method according to an embodiment of the present application.
  • the method may be performed by a parking space opening detection device, which may be deployed in the vehicle management server 102 in FIG2a above, or in the vehicle in FIG2b above.
  • the method may include the following steps:
  • the parking space opening detection device obtains at least one feature information of the parking space.
  • the at least one item of characteristic information may include, but is not limited to, at least one of the following characteristic information of the parking space: length, width, type, information about other parking spaces within a predetermined range around the parking space, information about obstacles inside and outside the parking space, distance information between the parking space and the vehicle, and coordinate information of the parking space in the coordinate system of the vehicle.
  • the length of the dividing line P1P4 (or P2P3) may represent the length of the parking space
  • the length of the dividing line P1P2 (or P3P4) may represent the width of the parking space.
  • the parking space type can include large parking spaces and small parking spaces.
  • the length of a large parking space is, for example, 15.6 meters, and the width is, for example, 3.25 meters, which is suitable for medium and large vehicles; the length of a small parking space is, for example, 6 meters, and the width is, for example, 2.5 meters, which is suitable for small vehicles.
  • the parking space types can include parallel parking spaces, inclined parking spaces, and vertical parking spaces.
  • the length of a parallel parking space is, for example, 6 meters, and the width is, for example, 2.5 meters; for inclined parking spaces (for example, at an inclination of 30°, 45°, or 60°), the inclined length is, for example, 6 meters, the width is, for example, 2.8 meters, and the vertical distance between the two inclined lines is, for example, maintained at a standard of 2.5 meters.
  • for vertical parking spaces, the length is, for example, greater than or equal to 5 meters, and the width is, for example, 2.5 meters.
  • a size of 2.5 m × 5.3 m is generally regarded as the standard parking space size.
  • the parking space types can include yellow parking spaces, white parking spaces, blue parking spaces, and green parking spaces. Among them, the yellow parking spaces are exclusive parking spaces.
  • Common yellow special parking spaces include those for police use, epidemic prevention and control, new energy vehicles, emergency rescue vehicles, etc., and relevant words may also be marked on the parking space signs and markings.
  • White parking spaces are paid parking spaces, which are currently the most common type of parking spaces. There are no regulations on the parking time for white solid line parking spaces; there are regulations on the parking time for white dotted line parking spaces, and the specific time is subject to the parking space marking.
  • Blue parking spaces are free parking spaces, but there are time regulations for parking, and the free parking time period will be indicated on the road surface or on signboards.
  • green parking spaces are used only in a few cities; they are limited-time free parking spaces, which meet short-term parking needs for a limited time and can address citizens' temporary parking needs for shopping, errands, etc. The embodiments of this application do not limit the parking space type.
  • Other parking space information within a predetermined range around the parking space may include, for example, the parking space information of parking space 1 in FIG. 1a .
  • the predetermined range may be, for example, a range of 1 meter around the parking space, or other preset range values, which is not limited in this embodiment of the present application.
  • the obstacle information inside and outside the parking space may include whether there is a parking lock, other vehicles or other people or objects that affect parking in the parking space, and whether there are other vehicles or other people or objects that affect parking outside the parking space.
  • the obstacle information inside the parking space may include the status information of the wheel chocks in the parking space, for example, it may be whether there are wheel chocks or no wheel chocks in the parking space.
  • the dotted line in Figure 1a indicates that the wheel chocks in the parking space 2 are optional.
  • the obstacle information inside the parking space may include parking lock information, for example, it may be the use state or non-use state of the parking lock.
  • the obstacle information outside the parking space may include curb/wall information within a predetermined range around the parking space, and the predetermined range may be within 0-1 meters.
  • the distance information between the parking space and the vehicle can be the distance S1 and distance S2 introduced in the previous description (see Figure 1a and Figure 1b), or it can be the distance between the center of the four sides of the parking space and the front of the vehicle.
  • the embodiment of the present application does not limit the definition method of the distance information.
  • the vehicle's coordinate system may be a coordinate system established with the center of the vehicle as the coordinate origin, the width of the vehicle as the x-axis, the length of the vehicle as the y-axis, and the direction perpendicular to the vehicle as the z-axis.
  • the coordinate information of the parking space in the vehicle's coordinate system is the relative coordinate information of the parking space.
  • the parking space opening detection device can convert the absolute coordinate information into the coordinate information in the vehicle's coordinate system, which will not be repeated here.
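The conversion from absolute coordinates to the vehicle coordinate system mentioned above can be sketched as a translation plus rotation. This is a minimal illustration only: the function name and the heading convention (the vehicle's length axis measured as an angle from the world x-axis) are assumptions, not taken from the application.

```python
import math

def world_to_vehicle(pt_world, veh_pos, veh_heading):
    """Convert an absolute (map) point into the vehicle coordinate
    system described in the text: origin at the vehicle centre,
    y along the vehicle length (heading direction), x along the
    vehicle width. veh_heading is in radians (assumed convention)."""
    dx = pt_world[0] - veh_pos[0]
    dy = pt_world[1] - veh_pos[1]
    # project the displacement onto the vehicle's length and width axes
    y_v = dx * math.cos(veh_heading) + dy * math.sin(veh_heading)
    x_v = dx * math.sin(veh_heading) - dy * math.cos(veh_heading)
    return (x_v, y_v)
```

For example, a point one metre ahead of a vehicle facing the world +y direction maps to (0, 1) in the vehicle frame.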
  • the parking space opening detection device may obtain at least one feature information of the parking space through at least one technical means.
  • the parking space opening detection device may obtain environmental information inside and/or outside the parking space through at least one sensor associated with the vehicle, and extract the at least one feature information from the environmental information inside and/or outside the parking space.
  • the at least one sensor may include but is not limited to a surround view camera, ultrasonic radar, laser radar, etc. installed on the vehicle.
  • the surround view camera can be used to collect visual images of the parking space, and the visual image can be provided to the parking space opening detection device.
  • the parking space opening detection device can analyze the visual image to extract characteristic information such as the length, width, distance information between the parking space and the vehicle, and other parking space information within a predetermined range around the parking space, or determine the parking space type, the coordinate information of the parking space in the coordinate system of the vehicle, etc. based on the extracted characteristic information.
  • the ultrasonic radar can use ultrasonic waves to sense targets in the environment where the vehicle is located, and the laser radar can use laser to sense targets in the environment where the vehicle is located.
  • the parking space opening detection device can use a dynamic target detection method to analyze the targets sensed by the ultrasonic radar or the laser radar, so as to determine whether there are obstacles inside and outside the parking space.
  • the at least one sensor may include a GPS of the vehicle, and the GPS may be any sensor for estimating the geographic location of the vehicle.
  • the parking space opening detection device may obtain local map information at the geographic location according to the geographic location provided by the GPS, so as to determine static obstacle information inside and outside the parking space, such as walls, curbs, fences, etc.
  • the parking space opening detection device can also obtain other characteristic information and adopt corresponding technical means to obtain the characteristic information, which will not be repeated here.
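Before the detection model (S320 below) can consume them, the heterogeneous features gathered above would typically be flattened into a fixed-order numeric vector. The field names and ordering below are illustrative assumptions only; the application does not specify them.

```python
# Assumed feature ordering; the real system's fields are not disclosed.
FEATURE_ORDER = [
    "length_m", "width_m", "space_type",
    "neighbor_spaces", "obstacle_inside", "obstacle_outside",
    "dist_to_vehicle_m", "corner_x", "corner_y",
]

def to_feature_vector(info):
    """Flatten one parking space's feature information into the fixed
    order the model expects; absent fields default to 0."""
    return [float(info.get(k, 0)) for k in FEATURE_ORDER]
```

A partially observed parking space (say only length and width known) still yields a full-length vector, which keeps the model input shape constant.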
  • the parking space opening detection device determines the parking space information of the parking space according to the at least one feature information and the parking space detection model.
  • the parking space information may include the parking space opening of the parking space.
  • the parking space detection model may specifically be a parking space opening detection model, or may be a machine learning model that is pre-trained as needed, such as a decision tree model.
  • the model training device can use a method similar to S310 to extract features from a large amount of input parking space data (such as visual images of parking spaces, local map information, dynamic target detection information, environmental information, etc., for details, please refer to the relevant description of S310 in the previous text, which will not be repeated here) to obtain feature information of different parking spaces.
  • the model training device can obtain true value information for different parking spaces, so as to generate a training data set based on the feature information of the different parking spaces and the acquired true value information, thereby using the training data set to perform model training and obtain the parking space detection model.
  • the trained parking space detection model can be used in the parking space detection process of the embodiment of the present application to predict the parking space opening of the parking space, etc., so as to output the parking space information, which includes the predicted parking space opening.
  • when the model training device generates the training data set, as shown in FIG5, different true values can be set for the four sides of the parking space, for example, 0, 1, 2, and 3. Further, the model training device can perform inter-frame matching between the truth information table sequence and the feature information table sequence (for example, frames whose timestamp difference is less than a first threshold are matched), and calculate the opening value of any border of the parking space by matching the feature information and the truth information in the same frame (for example, the intersection over union (IoU) is less than a second threshold), such as any one of 0, 1, 2, and 3, to obtain the training label of each border of different parking spaces. Further, the model training device can generate the training data set based on the obtained feature information and training labels.
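The frame and label matching described above can be sketched as follows. Note this is a hedged illustration: IoU-above-threshold is used here as the association rule (a common convention), and all data shapes, names, and threshold values are assumptions rather than details from the application.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_frames(feature_seq, truth_seq, t_thr=0.05, iou_thr=0.5):
    """Pair detected parking spaces with ground-truth records.

    feature_seq: list of (timestamp, [(box, feature_vector), ...])
    truth_seq:   list of (timestamp, [(box, opening_side), ...])
    Frames whose timestamps differ by less than t_thr are matched;
    within a matched pair, spaces are associated by IoU, and the truth
    record's opening side (0-3) becomes the training label."""
    labeled = []
    for tf, feats in feature_seq:
        for tt, truths in truth_seq:
            if abs(tf - tt) >= t_thr:
                continue
            for box, vec in feats:
                for tbox, side in truths:
                    if iou(box, tbox) > iou_thr:
                        labeled.append((vec, side))
    return labeled
```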
  • the model training device can also obtain difficult scene data through data mining, and gradually enrich the training data set to improve the robustness of the parking space detection model.
  • the data mining process can refer to Figure 6, which generalizes the test data.
  • on the one hand, the test data is fed to the first model for prediction to obtain first prediction data; on the other hand, the same test data is fed to the second model for prediction to obtain second prediction data.
  • based on the difference between the first prediction data and the second prediction data, data of difficult scenarios (for example, called target scenarios) can be obtained.
  • training data is generated based on the data of the difficult scenarios and supplemented into the training data set of Figure 4.
  • both the first model and the second model can be deep learning models; the first model is trained using data in a first range, the second model is trained using data in a second range, and the first range is larger than the second range. Because the first model is trained using data in a larger range, its model structure is complex and has many parameters, and it is more adaptable to different scenarios. Although its prediction accuracy is better than that of the second model, its real-time performance when applied to the field of vehicle assisted driving is poor and not as good as that of the second model. In the embodiment of the present application, based on the difference between the prediction data obtained by the first model and the second model respectively, relevant data of difficult scenarios can be supplemented, which helps to improve the robustness of the parking space detection model to be trained.
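The dual-model mining step can be sketched as below. Both models are stand-ins for the trained deep learning models (any callable mapping a frame to a predicted opening side), so the interface is an assumption made for illustration:

```python
def mine_hard_frames(test_frames, first_model, second_model):
    """Run the broadly trained first model and the narrowly trained
    second model on the same test data, and keep the frames on which
    their predictions disagree -- candidate difficult-scenario
    (target scenario) data to fold back into the training set."""
    return [f for f in test_frames
            if first_model(f) != second_model(f)]
```

In practice the disagreeing frames would then be truth-labeled (as in the matching step above) before being added to the training data set.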
  • the model training device can perform iterative training based on the obtained training data set until satisfactory model parameters are obtained, after which the model training can be stopped.
  • the model expression of the decision tree model can be shown, for example, as: F(X) = Σ_{m=1}^{N} f_m(X), where X represents any parking space feature, f_m represents the mth decision tree, m ≤ N, and m and N are integers greater than or equal to 1.
  • the decision tree model can be trained by the following steps:
  • the parking space detection model of the embodiment of the present application can be obtained.
  • the category with the largest probability is selected as the predicted value of the parking space opening.
  • the predicted values are 0, 1, 2, and 3, corresponding to the four borders of the parking space.
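The prediction step described above can be sketched as a sum over the m decision trees (mirroring the summed model expression) followed by an argmax over the four border classes. The per-tree score interface is an assumption for illustration:

```python
def predict_opening(trees, x, n_classes=4):
    """Sum the per-class scores of the N decision trees (each tree is
    a callable x -> list of n_classes scores) and return the class
    (border 0-3) with the largest total as the predicted parking
    space opening."""
    totals = [0.0] * n_classes
    for tree in trees:
        scores = tree(x)
        for k in range(n_classes):
            totals[k] += scores[k]
    # the category with the largest (summed) probability wins
    return max(range(n_classes), key=lambda k: totals[k])
```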
  • the parking space information of the parking space can be obtained, and the parking space information may include the parking space opening.
  • the method can use the parking space feature information to obtain a training data set, generate a parking space detection model based on the training data set, and use the parking space detection model to predict the parking space opening, which can reduce the misjudgment rate of the parking space opening and improve the generalization ability of the model.
  • the parking space feature information can be used as input, which can better integrate and utilize environmental perception information such as obstacles around the parking space, helping to reduce the misjudgment rate of the parking space opening and improve the generalization ability of the model.
  • the extracted parking space feature information is diversified, which helps to reduce the misjudgment rate of the parking space opening and improve the generalization ability of the model.
  • using data mining strategies to obtain data of difficult scenes to enrich the training data set can achieve the purpose of improving the robustness of the parking space detection model.
  • the parking space information can be provided to the path planning and vehicle control unit of the vehicle, so that the path planning and vehicle control unit can realize parking path planning and vehicle control during parking based on the parking space information to realize automatic parking.
  • the parking space information can be provided to the human-machine interface (HMI) and output on the HMI so that the user can view the parking space (including the parking space opening), confirm whether to start parking and view the parking path, dynamic parking process, etc.
  • the use of the parking space information is not limited in the embodiment of the present application and will not be repeated here.
  • the embodiment of the present application also provides a parking space opening detection device, which is used to execute the method executed by the parking space opening detection device in the above method embodiment.
  • the relevant features can be found in the above method embodiment and will not be repeated here.
  • the communication device 800 may include: an acquisition unit 801, used to acquire at least one feature information of a parking space; a determination unit 802, used to determine the parking space information of the parking space based on the at least one feature information and a parking space detection model, the parking space information including the parking space opening of the parking space.
  • the at least one item of characteristic information includes at least one of the following characteristic information of the parking space: length, width, type, information of other parking spaces within a predetermined range around the parking space, information of obstacles inside and outside the parking space, distance information between the parking space and the vehicle, and coordinate information of the parking space in the coordinate system of the vehicle.
  • the acquisition unit 801 is specifically used to: acquire environmental information inside and/or outside the parking space through at least one sensor associated with the vehicle; and extract at least one feature information from the environmental information inside and/or outside the parking space.
  • the device also includes: a prediction unit 803, used to obtain first prediction data based on test data and the first model; obtain second prediction data based on test data and the second model, wherein the first model and the second model are deep learning models, the first model is trained using data from a first range, and the second model is trained using data from a second range, and the first range is larger than the second range; the acquisition unit 801 is also used to obtain a data segment of a target scene based on a difference between the first prediction data and the second prediction data; obtain a training data set based on the data segment of the target scene, and the training data set is used to train the parking space detection model.
  • the parking space detection model includes a decision tree model.
  • the model parameters of the decision tree model include at least one of the following: the number of decision trees, the maximum depth of the decision tree, the number of categories, the learning rate, the positive sample weight/negative sample weight, the sample random sampling rate and the sampling frequency.
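As a hedged illustration, the model parameters listed above could be collected into a configuration such as the following. The names follow common gradient-boosting conventions and every value is a placeholder assumption, not a value disclosed in the application:

```python
# Placeholder hyperparameters for the decision tree model;
# all values are assumptions for illustration only.
TREE_MODEL_PARAMS = {
    "n_trees": 200,         # number of decision trees (N)
    "max_depth": 6,         # maximum depth of each decision tree
    "n_classes": 4,         # number of categories (one per border)
    "learning_rate": 0.1,
    "pos_weight": 1.0,      # positive/negative sample weight
    "subsample_rate": 0.8,  # sample random sampling rate
    "subsample_freq": 1,    # sampling frequency
}
```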
  • the division of the units in the above device is only a division of logical functions, and in actual implementation, they can be fully or partially integrated into one physical entity, or they can be physically separated.
  • the units in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, and instructions are stored in the memory.
  • the processor calls the instructions stored in the memory to implement any of the above methods or implement the functions of the units of the device, wherein the processor is, for example, a general-purpose processor, such as a central processing unit (CPU) or a microprocessor, and the memory is a memory inside the device or a memory outside the device.
  • the units in the device may be implemented in the form of hardware circuits, and the functions of some or all of the units may be realized by designing the hardware circuits, and the hardware circuits may be understood as one or more processors; for example, in one implementation, the hardware circuit is an application-specific integrated circuit (ASIC), and the functions of some or all of the above units may be realized by designing the logical relationship of the components within the circuit; for another example, in another implementation, the hardware circuit may be implemented by a programmable logic device (PLD), taking a field programmable gate array (FPGA) as an example, which may include a large number of logic gate circuits, and the connection relationship between the logic gate circuits may be configured by a configuration file, thereby realizing the functions of some or all of the above units.
  • the element may be implemented entirely in the form of a processor calling software, or entirely in the form of a hardware circuit, or partially in the form of a processor calling software and the rest in the form of a hardware circuit.
  • the processor is a circuit with signal processing capability.
  • the processor may be a circuit with instruction reading and running capability, such as a CPU, a microprocessor, a graphics processing unit (GPU) (which may be understood as a microprocessor), or a digital signal processor (DSP), etc.; in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the hardware circuit may be fixed or reconfigurable, such as a hardware circuit implemented by an ASIC or PLD, such as an FPGA.
  • the process of the processor loading a configuration document to implement the hardware circuit configuration may be understood as the process of the processor loading instructions to implement the functions of some or all of the above units.
  • it may also be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), a deep learning processing unit (DPU), etc.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above method, such as: CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, FPGA, or a combination of at least two of these processor forms.
  • the units in the above device can be fully or partially integrated together, or can be implemented independently. In one implementation, these units are integrated together and implemented in the form of a system-on-a-chip (SOC).
  • SOC may include at least one processor for implementing any of the above methods or implementing the functions of each unit of the device.
  • the type of the at least one processor may be different, for example, including a CPU and an FPGA, a CPU and an artificial intelligence processor, a CPU and a GPU, etc.
  • the device 900 shown in Fig. 9 includes at least one processor 910 and a communication interface 930.
  • a memory 920 may also be included.
  • the connection medium between the processor 910 and the memory 920 is not limited in the embodiment of the present application.
  • the processor 910 may perform data transmission through the communication interface 930 when communicating with other devices.
  • the processor 910 in FIG. 9 can call the computer execution instructions stored in the memory 920 so that the device 900 can execute any of the above method embodiments.
  • An embodiment of the present application also relates to a chip system, which includes a processor for calling a computer program or computer instructions stored in a memory so that the processor executes the method of any of the above embodiments.
  • the processor may be coupled to the memory through an interface.
  • the chip system may also directly include a memory, in which a computer program or computer instructions are stored.
  • the memory may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the nonvolatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • An embodiment of the present application also relates to a processor, which is used to call a computer program or computer instruction stored in a memory so that the processor executes the method described in any of the above embodiments.
  • the processor is an integrated circuit chip with signal processing capabilities.
  • the processor can be an FPGA, a general-purpose processor, a DSP, an ASIC or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, a system on chip (SoC), a CPU, a network processor (NP), a microcontroller unit (MCU), a PLD or other integrated chip, which can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor can be a microprocessor, or the processor can also be any conventional processor, etc.
  • the steps of the method disclosed in the embodiment of the present application can be directly embodied as being executed by a hardware decoding processor, or can be executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the art such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, a register, etc.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • an embodiment of the present application provides a computer-readable storage medium, which stores a program code.
  • when the program code runs on a computer, the computer executes the above method embodiment.
  • an embodiment of the present application provides a computer program product.
  • when the computer program product is run on a computer, the computer is enabled to execute the above method embodiment.
  • the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware.
  • the present application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

一种车位开口检测方法及装置,涉及车辆技术领域。该方法可以包括:获取车位的至少一项特征信息;根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。通过该方法,可以提升车位开口检测的准确性以及提升车位检测模型的泛化能力。

Description

一种车位开口检测方法及装置
相关申请的交叉引用
本申请要求在2022年11月22日提交中华人民共和国知识产权局、申请号为202211469399.9、申请名称为“一种车位开口检测方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及车辆技术领域,特别涉及一种车位开口检测方法及装置。
背景技术
随着自动驾驶技术的发展,自动泊车技术迅速崛起,例如自动泊车辅助(automated parking assist,APA)技术、远程遥控泊车(remote parking assist,RPA)技术或者自主代客泊车(automated valet parking,AVP)技术等。在自动泊车过程中,需要进行车位检测。在一些场景中,为了保障顺利泊车,还需要进行车位类型预测以及车位开口预测。在这之中,如何摆脱使用场景的限制,并进一步提升这些技术的准确性和泛化能力,仍为亟需解决的重要问题。
发明内容
本申请实施例提供一种车位开口检测方法及装置,用于提升车位开口检测的准确性以及提升车位检测模型的泛化能力。
第一方面,本申请实施例提供了一种车位开口检测方法,该方法可由车位开口检测装置实现,该车位开口检测装置可以是独立设备,也可以是设备中的芯片或部件,还可以是软件模块,可以部署在车辆上,或所述车辆上的车载设备或智能设备中,本申请实施例对该车位开口检测装置的产品形态不做限定。该方法可以包括:获取车位的至少一项特征信息;根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。示例地,该车位检测模型具体可以是车位开口检测模型。具体实施时,该车位检测模型可以是决策树模型。
通过以上方法,车位开口检测装置可以根据针对车位所提取的不同车位特征信息和预设的车位检测模型,预测车位开口,以辅助确定该车位是否可以停驶车辆,以及车辆该从哪个入口驶入该车位,该方法有助于提升车位开口检测的准确性以及提升车位检测模型的泛化能力。
结合第一方面,在一种可能的设计中,所述至少一项特征信息包括所述车位的以下至少一项特征信息:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。应理解,此处仅是对车位特征信息的示例说明而非任何限定,在具体实施时,还可以根据业务需要或者应用场景等,获取其它车位特征信息,本申请实施例对此不做限定。
结合第一方面,在一种可能的设计中,所述获取车位的至少一项特征信息,可以包括:通过所述车辆关联的至少一种传感器获取所述车位内和/或所述车位外的环境信息;从所述车位内和/或所述车位外的环境信息中提取所述至少一项特征信息。
通过上述方法,车位开口检测装置可以通过不同的手段获取车位特征信息,提供灵活的车位检测方式。
结合第一方面,在一种可能的设计中,所述方法还包括:根据测试数据和第一模型获取第一预测数据;根据测试数据和第二模型获取第二预测数据,其中,所述第一模型和所述第二模型为深度学习模型,所述第一模型使用第一范围的数据训练得到,所述第二模型使用第二范围的数据训练得到,所述第一范围大于所述第二范围;根据所述第一预测数据和所述第二预测数据的差异获取目标场景(例如困难场景或多样性场景)的数据;根据所述目标场景的数据获取训练数据集合,该训练数据集合用于训练得到所述车位检测模型。
通过以上方法,车位开口检测装置可以利用相同的测试数据和不同的模型分别进行预测,并利用该不同模型的预测差异获取目标场景的数据,以丰富训练数据集合,提升本申请实施例的车位检测模型的鲁棒性。需要说明的是,本申请实施例中,第一模型和第二模型可以是已有的深度学习模型,该第一模型因使用较大范围的数据训练得到,模型结构复杂、参数多,虽预测的准确性优于第二模型,但是在应用到车辆辅助驾驶领域时的实时性不如第二模型。本申请实施例中,基于第一模型和第二模型分别得到的预测数据的差异,可以补充困难场景的相关数据,有助于提升待训练的车位检测模型的鲁棒性。
结合第一方面,在一种可能的设计中,所述车位检测模型包括决策树模型。示例地,所述决策树模型的模型参数包括以下至少一项:决策树数量,决策树最大深度,类别数量,学习率,正样本权重/负样本权重,样本随机采样率和采样频率。
第二方面,本申请实施例提供了一种车位开口检测装置,该装置可以包括:获取单元,用于获取车位的至少一项特征信息;确定单元,用于根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。
结合第二方面,在一种可能的设计中,所述至少一项特征信息包括所述车位的以下至少一项特征信息:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。
结合第二方面,在一种可能的设计中,所述获取单元具体用于:通过所述车辆关联的至少一种传感器获取所述车位内和/或所述车位外的环境信息;从所述车位内和/或所述车位外的环境信息中提取所述至少一项特征信息。
结合第二方面,在一种可能的设计中,所述装置还可以包括:预测单元,用于根据测试数据和第一模型获取第一预测数据;根据测试数据和第二模型获取第二预测数据,其中,所述第一模型和所述第二模型为深度学习模型,所述第一模型使用第一范围的数据训练得到,所述第二模型使用第二范围的数据训练得到,所述第一范围大于所述第二范围;所述获取单元还用于根据所述第一预测数据和所述第二预测数据的差异获取目标场景的数据;根据所述目标场景的数据获取训练数据集合,所述训练数据集合用于训练得到所述车位检测模型。
结合第二方面,在一种可能的设计中,所述车位检测模型包括决策树模型。示例地,所述决策树模型的模型参数包括以下至少一项:决策树数量,决策树最大深度,类别数量,学习率,正样本权重/负样本权重,样本随机采样率和采样频率。
第三方面,本申请实施例提供了一种通信装置,包括:处理器和存储器;所述存储器用于存储程序;所述处理器用于执行所述存储器所存储的程序,以使所述装置实现如第一方面以及第一方面任一可能设计所述的方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读介质存储有程序代码,当所述程序代码在计算机上运行时,使得计算机执行如上述第一方面以及第一方面任一可能设计所述的方法。
第五方面,本申请实施例提供了一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如上述第一方面以及第一方面任一可能设计所述的方法。
第六方面,本申请实施例提供了一种终端设备,包括用于实现如上述第一方面以及第一方面任一可能设计所述的方法的单元。示例地,该终端设备包括但不限于:智能运输设备(诸如汽车、轮船、无人机、火车、货车等)、智能制造设备(诸如机器人、工业设备、智能物流、智能工厂等)、智能终端(手机、计算机、平板电脑、掌上电脑、台式机、耳机、音响、穿戴设备、车载设备等)。
本申请实施例在上述各方面提供的实现的基础上,还可以进行进一步组合以提供更多实现。
上述第二方面至第六方面中任一方面中的任一可能实现方式可以达到的技术效果,可以相应参照上述第一方面至第二方面中任一方面中的任一可能实现方式可以达到的技术效果描述,重复之处不予论述。
附图说明
图1a示出了本申请提供的一种车位框的示意图;
图1b示出了本申请提供的一种车位倾斜角的示意图;
图2a示出了本申请提供的一种系统架构示意图;
图2b示出了本申请提供的另一种系统架构示意图;
图3示出了本申请实施例的车位开口检测方法的流程示意图;
图4示出了本申请实施例的原理示意图;
图5示出了本申请实施例的生成训练数据的示意图;
图6示出了本申请实施例的数据挖掘示意图;
图7示出了本申请实施例的车位信息用途示意图;
图8示出了本申请实施例的车位开口检测装置的示意图;
图9示出了本申请实施例的通信装置的示意图。
具体实施方式
下面将结合附图,对本申请实施例进行详细描述。
以下,对本申请中的部分用语进行解释说明。需要说明的是,这些解释是为了便于本领域技术人员理解,并不是对本申请所要求的保护范围构成限定。
1)入口线、分割线、泊车标记点:
如图1a所示,为本申请提供的一种车位框的示意图。
其中,车位框通常包括四条边。以车位2为例,P1P2称为入口线,P1P4和P2P3可均称为分割线,实际为车位2与其它车位(例如车位1)或者空闲位置或者墙体或者路沿等之间的隔离界线。入口线与分割线的交点可称为入口车位点(或称为泊车标记点),即图1a中的P1和P2称为入口车位点;P3和P4可称为非入口车位点。P1、P2、P3和P4可统称为车位点,车位点也可以理解为是车位框各个转角处的角点,因此,车位点也可称为车位角点。通常,分割线与入口线垂直,对于分割线与入口线垂直的车位,可称为垂直车位或平行车位。
在一些实施例中,为了使车辆安全停驶在车位框内,车位框内靠近车尾(例如靠近P3P4分割线)处还可以包括轮挡(以虚线框表示),以使得车辆驶入车位框后及时停止,以免碰撞到其它障碍物(例如墙体或者其它车辆)或者妨碍其它车辆的行驶。在实际应用中,该轮挡可以是分离式也可以是一体式,本申请实施例对此不做限定。
2)车位倾斜角:
车位倾斜角或称为分割线的倾斜角或车位框的倾斜角,是指分割线与图像的x轴之间的夹角。通常,车位框分割线与图像的x轴之间的夹角为θ,可参见图1b。
需要说明的是,对于斜向车位,需要判断车位倾斜角。具体实现时,可以采用模板匹配等方法来确定车位倾斜角,本申请实施例对此不做限定。
3)车位与自车的距离:
车位与自车的距离指自车的中心到入口车位点之间的距离。结合上述图1a,距离S1和距离S2表示车位与自车的距离。结合上述图1b,距离S3和距离S4可以表示车位与自车的距离。应理解的是,在不同的方法实施例中,车位与自车的距离是可以根据车位检测算法的需要自定义的,例如车位与自车的距离也可以是车位的四条边的中心与车辆车头之间的距离。在一些实施例中,还可以在同一检测算法中配置不同的距离计算方法,以便在车辆与车位处于不同的情形时,根据实际情形自适应确定距离计算方法,来计算车位与车辆之间的距离,本申请实施例对此不做限定。
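作为示意,上述车位与自车的距离计算可以用如下Python代码表示(其中自车中心与入口车位点的坐标均为假设值,仅为按本节定义的最小示例,并非本申请限定的实现方式):

```python
import math

def distance(p, q):
    """两点之间的欧氏距离。"""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# 假设自车中心位于车辆坐标系原点,P1、P2为入口车位点坐标(单位:米,均为示例值)
ego_center = (0.0, 0.0)
p1 = (3.0, 4.0)
p2 = (6.0, 8.0)

s1 = distance(ego_center, p1)  # 对应图1a中的距离S1
s2 = distance(ego_center, p2)  # 对应图1a中的距离S2
print(s1, s2)  # 5.0 10.0
```

若采用"车位边中心到车辆车头"的定义,只需替换传入`distance`的两个坐标点即可。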
下面示例性地示出了本申请可应用的两种可能的系统架构。
基于上述内容,图2a是本申请可应用的一种系统架构示意图。该系统可包括车辆101及车辆管理服务器102(或者称为车位管理服务器)。
其中,车辆101是指具有采集周围环境的图像及远程通信功能的车辆。示例性地,车辆101上设置有传感器1011,可实现对车辆周围环境信息的采集。传感器例如可以是图像采集设备,图像采集设备例如可以是鱼眼摄像头、单目摄像头、深度摄像头中的至少一种等,可以用于采集车辆待泊入的车位的视觉图像,也可以用于采集该车位周围的环境信息。图2a中以传感器设置于车辆的前、后、左、右四个方向为例,以实现对车辆前、后、左、右四个方向的环境信息进行采集。四个鱼眼摄像头的视场角可均大于180度,从而可实现对车位周围环境的全方位捕获。应理解,传感器1011的视场角越大,传感器可感知的范围越大。车辆101的远程通信功能通常可以由设置在车辆101上的通信模块来实现,通信模块例如包括远程通信箱(telematics box,TBOX)或者无线通信系统(可参见下述图2b中的无线通信系统244的介绍)等。
车辆管理服务器102可以辅助车辆实现车位检测的功能。车辆管理服务器102可以是单个服务器,也可以是由多个服务器组成的服务器集群。车辆管理服务器102例如也可以是云服务器(或称为云、云端、云端服务器、云端控制器或车联网服务器等)。云服务器是对具有数据处理能力的设备或器件的统称,诸如可以包括主机或处理器等实体设备,也可以包括虚拟机或容器等虚拟设备,还可以包括芯片或集成电路等。另外,车辆管理服务器102可以将所有的功能集成在一个独立的物理设备上,或者也可以将不同的功能分别部署在不同的物理设备上,本申请对此不做限定。通常,一台车辆管理服务器102可以与多个车辆101进行通信。
需要说明的是,上述图2a所示的系统架构中的车辆101、车辆管理服务器102、传感器1011的数量仅是示例,本申请对此不做限定。另外,该系统中的车辆管理服务器102的名称仅是示例,具体实现时也可以使用其它可能的名称,例如也可称为车位开口检测装置,本申请对此不做限定。一种可能的实施方式中,该车位开口检测装置的功能也可以分开部署在不同设备上,例如车位开口检测装置的部分功能可以部署在车辆管理服务器102,另一部分功能可以部署在车辆上,本申请实施例对此不做限定。应理解,上述图2a中的车辆101可以是下述图2b的车辆。
请参阅图2b,其是本申请可应用的另一种系统架构示意图。在一个实施例中,车辆可以配置为完全或部分自动驾驶模式。耦合到车辆200或包括在车辆200中的组件可以包括推进系统210、传感器系统220、控制系统230、外围设备240、电源250、计算机系统260以及用户接口270。车辆200的组件可以被配置为以与彼此互连和/或与耦合到各系统的其它组件互连的方式工作。例如,电源250可以向车辆200的所有组件提供电力。计算机系统260可以被配置为从推进系统210、传感器系统220、控制系统230和外围设备240接收数据并对它们进行控制。计算机系统260还可以被配置为在用户接口270上生成图像的显示并从用户接口270接收输入。
需要说明的是,在其它示例中,车辆200可以包括更多、更少或不同的系统,并且每个系统可以包括更多、更少或不同的组件。此外,示出的系统和组件可以按任意方式进行组合或划分,本申请对此不做具体限定。
推进系统210可以为车辆200提供动力运动。如图2b所示,推进系统210可以包括引擎/发动机214、能量源213、传动装置(transmission)212和车轮/轮胎211。另外,推进系统210可以额外地或可替换地包括除了图2b所示出的组件以外的其他组件。本申请对此不做具体限定。
传感器系统220可以包括用于感测关于车辆200所位于的环境的信息的若干个传感器。如图2b所示,传感器系统220的传感器可包括相机传感器223。相机传感器223可用于捕捉车辆200的周边环境的多个图像。相机传感器223可以是静态相机或视频相机。进一步,可选地,传感器系统220还可包括全球定位系统(Global Positioning System,GPS)226、惯性测量单元(Inertial Measurement Unit,IMU)225、激光雷达、毫米波雷达以及用于修改传感器的位置和/或朝向的制动器221等。毫米波雷达可利用无线电信号来感测车辆200的周边环境内的目标。在一些实施例中,除了感测目标以外,毫米波雷达224还可用于感测目标的速度和/或前进方向。激光雷达224可利用激光来感测车辆200所位于的环境中的目标。GPS 226可以为用于估计车辆200的地理位置的任何传感器。为此,GPS 226可以包括收发器262,基于卫星定位数据估计车辆200相对于地球的位置。在示例中,计算机系统260可以用于结合地图数据使用GPS 226来估计车辆200行驶的道路。IMU 225可以用于基于惯性加速度及其任意组合来感测车辆200的位置和朝向变化。在一些示例中,IMU 225中传感器的组合可包括例如加速度计和陀螺仪。另外,IMU 225中传感器的其它组合也是可能的。
控制系统230用于控制车辆200及其组件的操作。控制系统230可包括各种元件,其中包括转向单元236、油门235、制动单元234、传感器融合算法233、计算机视觉系统232、路线控制系统231以及障碍规避系统237。转向单元236可操作来调整车辆200的前进方向。例如在一个实施例中可以为方向盘系统。油门235用于控制引擎214的操作速度并进而控制车辆200的速度。控制系统230可以额外地或可替换地包括除了图2b所示出的组件以外的其他组件。本申请对此不做具体限定。
制动单元234用于控制车辆200减速。制动单元234可使用摩擦力来减慢车轮211。在其他实施例中,制动单元234可将车轮211的动能转换为电流。制动单元234也可采取其他形式来减慢车轮211转速从而控制车辆200的速度。计算机视觉系统232可以处理和分析由相机传感器223捕捉的图像以便识别车辆200周边环境中的目标和/或特征。目标和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统232可使用目标识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统232可以用于为环境绘制地图、跟踪目标、估计目标的速度等等。路线控制系统231用于确定车辆200的行驶路线。在一些实施例中,路线控制系统231可结合来自传感器系统220、GPS 226和一个或多个预定地图的数据以为车辆200确定行驶路线(如泊车路线)。障碍规避系统237用于识别、评估和避免或者以其他方式越过车辆200的环境中的潜在障碍物。在另一个实例中,控制系统230可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
外围设备240可以被配置为允许车辆200与外部传感器、其它车辆和/或用户交互。为此,外围设备240可以包括例如无线通信系统244、触摸屏243、麦克风242和/或扬声器241。外围设备240可以额外地或可替换地包括除了图2b所示出的组件以外的其他组件。本申请对此不做具体限定。
在一些实施例中,外围设备240提供车辆200的用户与用户接口270交互的手段。例如,触摸屏243可向车辆200的用户提供信息。用户接口270还可操作触摸屏243来接收用户的输入。在其他情况中,外围设备240可提供用于车辆200与位于车内的其它设备通信的手段。例如,麦克风242可从车辆200的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器241可向车辆200的用户输出音频。
无线通信系统244可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统244可使用3G蜂窝通信,例如码分多址(code division multiple access,CDMA)、EVDO、全球移动通信系统(global system for mobile communications,GSM)/通用分组无线服务技术(general packet radio service,GPRS),或者4G蜂窝通信,例如长期演进(long term evolution,LTE),或者5G蜂窝通信。无线通信系统244可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统244可利用红外链路、蓝牙或ZigBee与设备直接通信。其他无线协议也是可能的,例如各种车辆通信系统:无线通信系统244可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可支持车辆和/或路边台站之间的公共和/或私有数据通信。
电源250可以被配置为向车辆200的一些或全部组件提供电力。为此,电源250可以包括例如可再充电锂离子或铅酸电池。在一些示例中,一个或多个电池组可被配置为提供电力。其它电源材料和配置也是可能的。在一些示例中,电源250和能量源213可以一起实现,如一些全电动车中那样。车辆200的组件可以被配置为以与在其各自的系统内部和/或外部的其它组件互连的方式工作。为此,车辆200的组件和系统可以通过系统总线、网络和/或其它连接机制通信地链接在一起。
车辆200的部分或所有功能受计算机系统260控制。计算机系统260可包括至少一个处理器261,处理器261执行存储在例如存储器263这样的计算机可读介质中的指令2631。计算机系统260还可以是采用分布式方式控制车辆200的个体组件或子系统的多个计算设备。
处理器261可以是任何常规的处理器,诸如中央处理器(central processing unit,CPU)。替选地,还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、图形处理器(graphics processing unit,GPU)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件、晶体管逻辑器件,硬件部件或者其任意组合。通用处理器可以是微处理器,也可以是任何常规的处理器。应理解,本申请对上述车辆系统包括的传感器的数量、处理器的数量不做限定。尽管图2b功能性地图示了处理器、存储器和计算机系统260的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机系统260的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,存储器263可包含指令2631(例如,程序逻辑),指令2631可被处理器261执行来执行车辆200的各种功能,包括以上描述的那些功能。存储器263也可包含额外的指令,包括向推进系统210、传感器系统220、控制系统230和外围设备240中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令2631以外,存储器263还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆200在自主、半自主和/或手动模式中操作期间被车辆200和计算机系统260使用。
用户接口270,用于向车辆200的用户提供信息或从其接收信息。可选地,用户接口270可包括在外围设备240的集合内的一个或多个输入/输出设备,例如无线通信系统244、触摸屏243、麦克风242和扬声器241。
计算机系统260可基于从各种子系统(例如,推进系统210、传感器系统220和控制系统230)以及从用户接口270接收的输入来控制车辆200的功能。例如,计算机系统260可利用来自控制系统230的输入以便控制转向单元236来避免由传感器系统220和障碍规避系统237检测到的障碍物。在一些实施例中,计算机系统260可操作来对车辆200及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆200分开安装或关联。例如,存储器263可以部分或完全地与车辆200分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图2b不应理解为对本申请实施例的限制。
需要说明的是,上述车辆包括但不限于无人车、智能车(如自动导引运输车(automated guided vehicle,AGV))、电动车、数字汽车、智能制造车。
本申请所提供的车位开口检测方法可应用于高级驾驶辅助系统(advanced driving assistant system,ADAS)、自动驾驶系统或智能驾驶系统等领域,尤其适用于自动泊车相关功能,例如自动泊车辅助(automated parking assist,APA)技术、远程遥控泊车(remote parking assist,RPA)技术或者自主代客泊车(automated valet parking,AVP)技术等,该车位开口检测方法所检测到的车位信息可以包括但不限于车位的车位开口等,例如还可以包括预测到的车位类型。该车位开口检测方法也可以应用于以车位信息作为约束的更高级功能,如基于车位信息进行三维建模等,本申请实施例对此不做限定。
如背景技术所描述,在自动泊车过程中,需要进行车位检测。在一些场景中,为了保障顺利泊车,还需要进行车位开口预测以及车位类型预测等。在这之中,如何摆脱使用场景的限制,并进一步提升这些技术的准确性和泛化能力,仍为亟需解决的重要问题。
基于此,本申请实施例提出一种车位开口检测方法及装置,用于提升车位开口检测的准确性以及提升车位检测模型的泛化能力。其中,方法和装置是基于同一技术构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。并且,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,各个实施例之间的术语和/或描述具有一致性、且可以相互引用,不同实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
需要说明的是,本申请实施例中“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a和b,a和c,b和c,或a和b和c,其中a,b,c可以是单个,也可以是多个。
以及,除非有特别说明,本申请实施例提及“第一”、“第二”等序数词是用于对多个对象进行区分,不用于限定多个对象的优先级或者重要程度。例如,第一设备、第二设备,只是为了区分不同的电子设备,而不是表示这两个设备的优先级或者重要程度等的不同。例如,在一些实施例中,第一设备与第二设备所执行的方法步骤可以互换。
下面结合图3-图6,对本申请实施例提出的车位开口检测方法进行具体阐述。
图3示出了本申请实施例的一种车位开口检测方法的流程示意图。其中,该方法可由车位开口检测装置执行,该车位开口检测装置可以部署在上述图2a中的车辆管理服务器102,或者部署在上述图2b中的车辆。如图3所示,该方法可以包括以下步骤:
S310:车位开口检测装置获取车位的至少一项特征信息。
示例地,该至少一项特征信息可以包括但不限于该车位的以下特征信息中的至少一项:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。
其中,以图1a所示的车位2为例,分割线P1P4(或者P2P3)的长度可以表示车位长度,分割线P1P2(或者P3P4)的长度可以表示车位宽度。
按照车位的尺寸大小,车位类型可以包括大型停车位和小型停车位,大型停车位长度例如为15.6米,宽度例如为3.25米,适用于中大型车辆;小型停车位长度例如为6米,宽度例如为2.5米,适用于小型车辆。按照车位排列方式,车位类型可以包括平行式车位、倾斜式车位和垂直式车位,平行式车位的长度例如为6米,宽度例如为2.5米;倾斜式车位(例如倾角30°、45°、60°):斜长度例如为6米,宽度例如为2.8米,两斜线垂直距离例如保持2.5米的标准。垂直式车位:长度大于等于5米,宽度例如为2.5米,一般2.5x5.3m为最佳标准停车位尺寸。按照车位框的颜色,车位类型可以包括黄色停车位、白色停车位、蓝色停车位和绿色停车位。其中,黄色停车位为专属停车位,常见的黄色专用停车位包括警务专用、防疫保障专用、新能源汽车专用、应急抢险车辆专用等,停车位标志和标线内还可以标有相关字样。白色停车位为收费停车位,是目前最常见的一种停车位,白色实线车位的停车时间无规定;白色虚线车位的停车时间有规定,具体时间以停车位标注的为准。蓝色停车位为免费停车位,但停车有时间规定,会在路面上或在标志牌注明免费停车的时间段。绿色停车位:仅在少部分城市使用,属限时免费停车位,限时免费满足短时停车需求,可解决市民购物、办事等临时停车需要。本申请实施例对车位类型不做限定。
车位周围预定范围内的其它车位信息例如可以包括图1a中的车位1的车位信息,该预定范围例如可以是车位周围1米范围内,或是预设的其它范围值,本申请实施例对此不做限定。
车位内外的障碍物信息可以包括车位内是否存在车位锁、其它车辆或是其它影响停车的人或物体,车位外是否存在其它车辆或是其它影响停车的人或物体。示例地,该车位内的障碍物信息可以包括车位内的轮挡的状态信息,例如可以是车位内有轮挡或者无轮挡,图1a中的虚线表示该车位2内的轮挡是可选的。或者,该车位内的障碍物信息可以包括车位锁信息,例如可以是车位锁所处的使用状态或者非使用状态等。示例地,车位外的障碍物信息,可以包括车位周围预定范围内的路沿/墙体信息,该预定范围可以在0-1米范围内。
车位与车辆之间的距离信息可以是前文描述中介绍的距离S1和距离S2(参见图1a和图1b),也可以是车位的四条边的中心与车辆车头之间的距离,本申请实施例对该距离信息的定义方式不做限定。
车辆的坐标系,具体可以是以车辆中心为坐标原点,以车辆的宽度方向为x轴、车辆的长度方向为y轴、以垂直于车辆向上的方向为z轴所建立的坐标系。车位在所述车辆的坐标系下的坐标信息是车位的相对坐标系,车位开口检测装置可以在获取到车位的绝对坐标信息(例如经纬度信息、海拔信息等)后,将该绝对坐标信息转化为在车辆的坐标系下的坐标信息,在此不再赘述。
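作为示意,将全局坐标系下的车位点转换到上述车辆坐标系,可以用如下Python代码表示(车辆位姿、坐标轴约定及函数名均为本示例的假设,并非本申请限定的实现方式):

```python
import math

def world_to_vehicle(point, ego_pos, ego_yaw):
    """将全局坐标系下的点转换到以车辆中心为原点的车辆坐标系:先平移到车辆中心,再绕z轴旋转-yaw。"""
    dx, dy = point[0] - ego_pos[0], point[1] - ego_pos[1]
    c, s = math.cos(-ego_yaw), math.sin(-ego_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

# 假设车辆位于全局坐标(10, 10)、航向角为0,某车位点位于全局坐标(13, 14)
pt = world_to_vehicle((13.0, 14.0), (10.0, 10.0), 0.0)
print(pt)  # (3.0, 4.0)
```

实际实现中,绝对坐标(经纬度、海拔等)通常需先投影到局部平面坐标系,再做上述刚体变换。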
本申请实施例中,实施S310时,车位开口检测装置可以通过至少一种技术手段获取车位的至少一项特征信息。例如,车位开口检测装置可以通过车辆关联的至少一种传感器获取车位内和/或车位外的环境信息,并从该车位内和/或车位外的环境信息中提取该至少一项特征信息。
具体实施时,该至少一种传感器可以包括但不限于车辆上安装的环视摄像头、超声波雷达、激光雷达等。该环视摄像头可以用于采集车位的视觉图像,并可以将该视觉图像提供给车位开口检测装置。车位开口检测装置可以通过解析该视觉图像,从中提取该车位的长度、宽度、车位与车辆之间的距离信息、车位周围预定范围内的其它车位信息等特征信息,或者基于所提取的这些特征信息来确定车位类型、车位在所述车辆的坐标系下的坐标信息等特征信息。该超声波雷达可利用超声波来感测车辆所位于的环境中的目标,该激光雷达可利用激光来感测车辆所位于的环境中的目标,车位开口检测装置可以采用动态目标检测方法对超声波雷达或者激光雷达所感测到的目标进行解析,以便确定车位内外是否存在障碍物等。
又例如,该至少一种传感器可以包括车辆的GPS,该GPS可以为用于估计车辆的地理位置的任何传感器。车位开口检测装置可以根据GPS提供的地理位置,获取该地理位置处的局部地图(localmap)信息,以便确定该车位内外的静态障碍物信息,例如墙体、路沿、围栏等。
应理解,此处仅是对本申请实施例中获取的车位特征信息以及获取这些特征信息的技术手段的示例说明而非任何限定,在具体应用中,车位开口检测装置还可以获取其它特征信息,并采用相应的技术手段获取该特征信息,在此不再赘述。
S320:车位开口检测装置根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息。示例地,该车位信息可以包括所述车位的车位开口。
本申请实施例中,该车位检测模型具体可以是车位开口检测模型,可以是预先根据需要训练的机器学习模型,例如决策树模型。
如图4所示,在模型训练过程中训练该车位检测模型时,模型训练装置可以采用与S310相似的方法,对输入的海量的车位数据(例如车位的视觉图像、局部地图信息、动态目标检测信息、环境信息等,具体细节可参见前文中结合S310的相关描述,在此不再赘述)进行特征提取,获取不同车位的特征信息。同时,模型训练装置可以为不同车位获取真值信息,以根据该不同车位的特征信息以及获取到的真值信息生成训练数据集合,从而利用该训练数据集合进行模型训练,获得该车位检测模型。进一步,所训练获得的车位检测模型可以用于本申请实施例的车位检测过程中,以预测车位的车位开口等,从而输出车位信息,该车位信息包括预测到的车位开口。
其中,以该车位检测模型用于预测车位开口为例,模型训练装置在生成训练数据集合时,如图5所示,可以为车位的四条边设置不同的真值,例如,0、1、2、3。进一步,模型训练装置可以根据真值信息表序列以及特征信息表序列进行帧间匹配(例如时间戳差值小于第一阈值),根据同一帧内特征信息与真值信息的匹配(例如交并比(intersection over union,IOU)大于第二阈值)情况,计算车位的任一边框的开口值,例如0、1、2、3中的任一项,以获得不同车位的每条边框的训练标签。进一步,模型训练装置可以基于所获得的特征信息以及训练标签生成训练数据集合。
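作为示意,上述帧间匹配与交并比计算可以用如下Python代码表示(时间戳阈值与数据结构均为假设值,仅为最小示例,并非本申请限定的实现方式):

```python
def iou(box_a, box_b):
    """计算两个轴对齐矩形框(x1, y1, x2, y2)之间的交并比。"""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_frames(feature_frames, truth_frames, t_thresh=0.05):
    """按时间戳差值小于阈值(此处假设为0.05秒)进行帧间匹配,返回(特征帧, 真值帧)对。"""
    pairs = []
    for f in feature_frames:
        for t in truth_frames:
            if abs(f["ts"] - t["ts"]) < t_thresh:
                pairs.append((f, t))
                break
    return pairs

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 交集面积1,并集面积7
pairs = match_frames([{"ts": 0.00}], [{"ts": 0.01}])
print(len(pairs))  # 1
```

匹配成功后,再按真值信息为匹配到的每条边框写入0、1、2、3的训练标签即可。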
本申请实施例中,模型训练装置还可以通过数据挖掘获取困难场景数据,逐步丰富该训练数据集合,以提升该车位检测模型的鲁棒性。
示例地,该数据挖掘流程可以参阅图6所示,通过泛化测试数据,一方面将测试数据用于第一模型预测,获得第一预测数据;另一方面用于将测试数据用于第二模型预测,获得第二预测数据,根据该第一预测数据和第二预测数据的差异获取困难场景(例如称为目标场景)的数据,根据该困难场景的数据生成训练数据,并补充到图4的训练数据集合中。
需要说明的是,此处的第一模型和第二模型均可以为深度学习模型,该第一模型使用第一范围的数据训练得到,该第二模型使用第二范围的数据训练得到,该第一范围大于所述第二范围。该第一模型因使用较大范围的数据训练得到,模型结构复杂、参数多,更适于不同的场景,虽预测的准确性优于第二模型,但是在应用到车辆辅助驾驶领域时的实时性较差,不如第二模型。本申请实施例中,基于第一模型和第二模型分别得到的预测数据的差异,可以补充困难场景的相关数据,有助于提升待训练的车位检测模型的鲁棒性。
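作为示意,上述基于两个模型预测差异挖掘困难场景数据的流程可以用如下Python代码表示(其中两个模型以假设的打分函数代替,仅为最小示例):

```python
def mine_hard_cases(samples, predict_large, predict_small):
    """收集两个模型预测不一致的样本,作为目标(困难)场景数据补充训练数据集合。"""
    return [s for s in samples if predict_large(s) != predict_small(s)]

# 以假设的打分函数代替第一模型(大模型)与第二模型(轻量模型),输出车位开口类别0-3
samples = [0, 1, 2, 3, 4]
predict_large = lambda x: x % 4
predict_small = lambda x: 0 if x < 3 else x % 4
hard = mine_hard_cases(samples, predict_large, predict_small)
print(hard)  # [1, 2]
```

预测一致的样本对轻量模型而言大多是"容易"样本,而不一致的样本更可能对应困难或多样性场景,值得优先补充标注。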
进一步,模型训练装置可以根据所获得的训练数据集合进行迭代训练,直至获得较佳的模型参数,则可停止模型训练。
本申请实施例中,以该车位检测模型为决策树模型为例,在训练该决策树模型时,例如可以设置包括但不限于以下参数:决策树数量N,决策树最大深度(maxDepth),类别数量nClass(n=4,表示车位的4个边框),学习率r,正样本权重/负样本权重(posWeight/negWeight),样本随机采样率baggingFrac和采样频率baggingFreq。
该决策树模型的模型表达式可以如下所示:
F(X)=∑_{m=1}^{N} f_m(X)  (1);
其中,X表示任一车位特征,m表示第m个决策树,m≤N,m、N为大于或等于1的整数。
基于以上参数以及模型表达式,可以通过以下步骤训练该决策树模型:
1)、初始化f0(X)=0;
2)、学习第m(m=1,2,…,N)个决策树:
a)首先计算当前树的学习目标;
b)基于当前节点下所有样本,遍历所有特征的所有分割点,获得最优分割点,以完成当前节点的学习;
c)学习左右子树参数,直到整个决策树学习完成,进入下一轮迭代学习。
在模型训练结束后,即可获得本申请实施例的车位检测模型。
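作为示意,上述以决策树为弱学习器、按步骤1)至2)迭代训练的过程,可以用如下纯Python代码近似表示(为简洁起见以决策树桩代替完整决策树,树数量、学习率及示例数据均为假设值,并非本申请限定的实现;实际工程中可使用现成的梯度提升决策树库完成等价训练):

```python
def best_stump(X, residuals):
    """遍历所有特征的所有分割点,返回使平方误差最小的(特征, 分割点, 左值, 右值),对应步骤b)。"""
    best = None
    for j in range(len(X[0])):                       # 遍历所有特征
        for s in sorted({x[j] for x in X}):          # 遍历该特征的所有候选分割点
            left = [r for x, r in zip(X, residuals) if x[j] <= s]
            right = [r for x, r in zip(X, residuals) if x[j] > s]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
            if best is None or err < best[0]:
                best = (err, j, s, lv, rv)
    return best[1:]

def train_boosted(X, y, n_trees=5, lr=0.5):
    """梯度提升示意:初始化f0(X)=0,每轮以当前残差为学习目标拟合一个树桩。"""
    trees = []
    pred = [0.0] * len(y)                                  # 步骤1):初始化f0(X)=0
    for _ in range(n_trees):                               # 步骤2):学习第m个树
        residuals = [yi - pi for yi, pi in zip(y, pred)]   # a)计算当前树的学习目标
        j, s, lv, rv = best_stump(X, residuals)            # b)最优分割点
        trees.append((j, s, lr * lv, lr * rv))
        pred = [pi + (lr * lv if x[j] <= s else lr * rv) for x, pi in zip(X, pred)]
    return trees

def predict(trees, x):
    """F(X)为各决策树输出之和。"""
    return sum(lv if x[j] <= s else rv for j, s, lv, rv in trees)

# 假设的一维特征与0/1标签,仅用于演示训练逐轮逼近目标
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 0.0, 1.0, 1.0]
trees = train_boosted(X, y)
print(predict(trees, [0.0]), predict(trees, [3.0]))  # 0.0 0.96875
```

真实实现中,每轮学到的不是树桩而是有最大深度限制的完整决策树,且会按样本随机采样率与采样频率对训练样本做bagging。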
在实施S320时,可以将S310中针对待检测的车位获取到的至少一项特征信息作为X,利用该车位检测模型进行预测,并通过以下表达式(2)计算该车位的4个边框的类别概率:
Proba_(1,4)=softmax(F(X))  (2);
上述车位开口类别概率的取值范围为[0,1],选取概率最大的类别作为车位开口的预测值,预测值取值0、1、2、3,对应该车位的4个边框。
由此,则可获取该车位的车位信息,该车位信息可以包括车位开口。
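作为示意,表达式(2)所述的softmax概率计算及开口类别选取可以用如下Python代码表示(其中F(X)的输出得分为假设值):

```python
import math

def softmax(scores):
    """将F(X)对4条边框的输出得分归一化为[0, 1]内的类别概率(对应表达式(2))。"""
    m = max(scores)                       # 减去最大值以保证数值稳定
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [0.2, 2.1, -0.5, 0.4]            # 假设的F(X)输出,对应4条边框
proba = softmax(scores)
opening = max(range(len(proba)), key=lambda i: proba[i])  # 概率最大的边框即预测的车位开口
print(opening)  # 1
```

由于softmax保持得分的大小顺序,直接对F(X)取argmax也能得到相同的开口预测,概率值则可进一步用于置信度判断。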
至此,已经结合附图及实施例详细介绍了本申请实施例的模型训练过程以及基于该车位检测模型的车位检测过程。该方法利用车位特征信息获取训练数据集合,基于训练数据集合生成车位检测模型,并使用该车位检测模型预测车位开口,可以降低车位开口的误判率,并提升模型的泛化能力。其中,可以使用包括但不限于视觉图像、局部地图信息、动态目标检测信息以及环境信息等作为输入,能够较好地融合和利用车位周围障碍物等环境感知信息;所提取的车位特征信息也较为多样化,两者均有助于降低车位开口的误判率,并提升模型的泛化能力。并且,使用数据挖掘策略来获取困难场景的数据,以丰富训练数据集合,可以达到提升车位检测模型的鲁棒性的目的。
进一步,如图7所示,基于通过上述方法所获得的车位信息,一方面,该车位信息可以提供至车辆的路径规划与车辆控制单元,使得该路径规划与车辆控制单元基于该车位信息,实现泊车路径规划和泊车过程中的车辆控制,以实现自动泊车。另一方面,该车位信息可以提供至人机交互界面(human machine interface,HMI)并在该HMI输出,以供用户查看车位(包括车位开口)、确认是否开始泊车和查看泊车路径、动态的泊车过程等。本申请实施例对该车位信息的用途不做限定,在此不再赘述。
本申请实施例还提供了一种车位开口检测装置,用于执行上述方法实施例中车位开口检测装置所执行的方法,相关特征可以参见上述方法实施例,在此不再赘述。
如图8所示,该通信装置800可以包括:获取单元801,用于获取车位的至少一项特征信息;确定单元802,用于根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。
在一种可能的设计中,所述至少一项特征信息包括所述车位的以下至少一项特征信息:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。
在一种可能的设计中,所述获取单元801具体用于:通过所述车辆关联的至少一种传感器获取所述车位内和/或所述车位外的环境信息;从所述车位内和/或所述车位外的环境信息中提取所述至少一项特征信息。
在一种可能的设计中,所述装置还包括:预测单元803,用于根据测试数据和第一模型获取第一预测数据;根据测试数据和第二模型获取第二预测数据,其中,所述第一模型和所述第二模型为深度学习模型,所述第一模型使用第一范围的数据训练得到,所述第二模型使用第二范围的数据训练得到,所述第一范围大于所述第二范围;所述获取单元801还用于根据所述第一预测数据和所述第二预测数据的差异获取目标场景的数据片段;根据所述目标场景的数据片段获取训练数据集合,所述训练数据集合用于训练得到所述车位检测模型。
在一种可能的设计中,所述车位检测模型包括决策树模型。示例地,所述决策树模型的模型参数包括以下至少一项:决策树数量,决策树最大深度,类别数量,学习率,正样本权重/负样本权重,样本随机采样率和采样频率。
应理解,以上装置中各单元的划分仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。此外,装置中的单元可以以处理器调用软件的形式实现;例如装置包括处理器,处理器与存储器连接,存储器中存储有指令,处理器调用存储器中存储的指令,以实现以上任一种方法或实现该装置各单元的功能,其中处理器例如为通用处理器,例如中央处理单元(Central Processing Unit,CPU)或微处理器,存储器为装置内的存储器或装置外的存储器。或者,装置中的单元可以以硬件电路的形式实现,可以通过对硬件电路的设计实现部分或全部单元的功能,该硬件电路可以理解为一个或多个处理器;例如,在一种实现中,该硬件电路为专用集成电路(application-specific integrated circuit,ASIC),通过对电路内元件逻辑关系的设计,实现以上部分或全部单元的功能;再如,在另一种实现中,该硬件电路可以通过可编程逻辑器件(programmable logic device,PLD)实现,以现场可编程门阵列(Field Programmable Gate Array,FPGA)为例,其可以包括大量逻辑门电路,通过配置文件来配置逻辑门电路之间的连接关系,从而实现以上部分或全部单元的功能。以上装置的所有单元可以全部通过处理器调用软件的形式实现,或全部通过硬件电路的形式实现,或部分通过处理器调用软件的形式实现,剩余部分通过硬件电路的形式实现。
在本申请实施例中,处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如CPU、微处理器、图形处理器(graphics processing unit,GPU)(可以理解为一种微处理器)、或数字信号处理器(digital signal processor,DSP)等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为ASIC或PLD实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如神经网络处理单元(Neural Network Processing Unit,NPU)、张量处理单元(Tensor Processing Unit,TPU)、深度学习处理单元(Deep learning Processing Unit,DPU)等。
可见,以上装置中的各单元可以是被配置成实施以上方法的一个或多个处理器(或处理电路),例如:CPU、GPU、NPU、TPU、DPU、微处理器、DSP、ASIC、FPGA,或这些处理器形式中至少两种的组合。
此外,以上装置中的各单元可以全部或部分集成在一起,或者可以独立实现。在一种实现中,这些单元集成在一起,以片上系统(system-on-a-chip,SOC)的形式实现。该SOC中可以包括至少一个处理器,用于实现以上任一种方法或实现该装置各单元的功能,该至少一个处理器的种类可以不同,例如包括CPU和FPGA,CPU和人工智能处理器,CPU和GPU等。
在一个简单的实施例中,本领域的技术人员可以想到上述实施例中的通信装置均可采用图9所示的形式。
如图9所示的装置900,包括至少一个处理器910和通信接口930。在一种可选的设计中,还可以包括存储器920。
本申请实施例中不限定上述处理器910以及存储器920之间的具体连接介质。
在如图9的装置中,处理器910在与其他设备进行通信时,可以通过通信接口930进行数据传输。
当通信装置采用图9所示的形式时,图9中的处理器910可以通过调用存储器920中存储的计算机执行指令,使得装置900可以执行上述任一方法实施例。
本申请实施例还涉及一种芯片系统,该芯片系统包括处理器,用于调用存储器中存储的计算机程序或计算机指令,以使得该处理器执行上述任一实施例的方法。
在一种可能的实现方式中,该处理器可以通过接口与存储器耦合。
在一种可能的实现方式中,该芯片系统还可以直接包括存储器,该存储器中存储有计算机程序或计算机指令。
示例地,存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
本申请实施例还涉及一种处理器,该处理器用于调用存储器中存储的计算机程序或计算机指令,以使得该处理器执行上述任一实施例所述的方法。
示例地,在本申请实施例中,处理器是一种集成电路芯片,具有信号的处理能力。例如,该处理器可以是FPGA,可以是通用处理器、DSP、ASIC或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,还可以是系统芯片(system on chip,SoC),还可以是CPU,还可以是网络处理器(network processor,NP),还可以是微控制器(micro controller unit,MCU),还可以是PLD或其他集成芯片,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
应明白,本申请的实施例可提供为方法、系统或计算机程序产品。
在一种可能的实现方式中,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有程序代码,当所述程序代码在所述计算机上运行时,使得计算机执行上述方法实施例。
在一种可能的实现方式中,本申请实施例提供了一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例。
因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请实施例范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,各个实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。

Claims (14)

  1. 一种车位开口检测方法,其特征在于,包括:
    获取车位的至少一项特征信息;
    根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。
  2. 根据权利要求1所述的方法,其特征在于,所述至少一项特征信息包括所述车位的以下至少一项特征信息:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述获取车位的至少一项特征信息,包括:
    通过所述车辆关联的至少一种传感器获取所述车位内和/或所述车位外的环境信息;
    从所述车位内和/或所述车位外的环境信息中提取所述至少一项特征信息。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述方法还包括:
    根据测试数据和第一模型获取第一预测数据;
    根据测试数据和第二模型获取第二预测数据,其中,所述第一模型和所述第二模型为深度学习模型,所述第一模型使用第一范围的数据训练得到,所述第二模型使用第二范围的数据训练得到,所述第一范围大于所述第二范围;
    根据所述第一预测数据和所述第二预测数据的差异获取目标场景的数据;
    根据所述目标场景的数据获取训练数据集合,所述训练数据集合用于训练得到所述车位检测模型。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述车位检测模型包括决策树模型。
  6. 一种车位开口检测装置,其特征在于,包括:
    获取单元,用于获取车位的至少一项特征信息;
    确定单元,用于根据所述至少一项特征信息和车位检测模型,确定所述车位的车位信息,所述车位信息包括所述车位的车位开口。
  7. 根据权利要求6所述的装置,其特征在于,所述至少一项特征信息包括所述车位的以下至少一项特征信息:长度、宽度、类型、所述车位周围预定范围内的其它车位信息、所述车位内外的障碍物信息、所述车位与所述车辆之间的距离信息、所述车位在所述车辆的坐标系下的坐标信息。
  8. 根据权利要求6或7所述的装置,其特征在于,所述获取单元具体用于:
    通过所述车辆关联的至少一种传感器获取所述车位内和/或所述车位外的环境信息;
    从所述车位内和/或所述车位外的环境信息中提取所述至少一项特征信息。
  9. 根据权利要求6-8中任一项所述的装置,其特征在于,所述装置还包括:
    预测单元,用于根据测试数据和第一模型获取第一预测数据;根据测试数据和第二模型获取第二预测数据,其中,所述第一模型和所述第二模型为深度学习模型,所述第一模型使用第一范围的数据训练得到,所述第二模型使用第二范围的数据训练得到,所述第一范围大于所述第二范围;
    所述获取单元还用于根据所述第一预测数据和所述第二预测数据的差异获取目标场景的数据;根据所述目标场景的数据获取训练数据集合,所述训练数据集合用于训练得到所述车位检测模型。
  10. 根据权利要求6-9中任一项所述的装置,其特征在于,所述车位检测模型包括决策树模型。
  11. 一种通信装置,其特征在于,包括:处理器和存储器;
    所述存储器用于存储程序;
    所述处理器用于执行所述存储器所存储的程序,以使所述装置实现如所述权利要求1-5中任一项所述的方法。
  12. 一种计算机可读存储介质,其特征在于,包括计算机可读指令,当所述计算机可读指令被执行时,实现如权利要求1-5中任一项所述的方法。
  13. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-5中任一项所述的方法。
  14. 一种终端设备,其特征在于,包括用于实现如权利要求1-5中任一项所述的方法的单元。
PCT/CN2023/104957 2022-11-22 2023-06-30 一种车位开口检测方法及装置 WO2024109079A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211469399.9A CN118072545A (zh) 2022-11-22 2022-11-22 一种车位开口检测方法及装置
CN202211469399.9 2022-11-22

Publications (1)

Publication Number Publication Date
WO2024109079A1 true WO2024109079A1 (zh) 2024-05-30

Family

ID=91097749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104957 WO2024109079A1 (zh) 2022-11-22 2023-06-30 一种车位开口检测方法及装置

Country Status (2)

Country Link
CN (1) CN118072545A (zh)
WO (1) WO2024109079A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200294310A1 (en) * 2019-03-16 2020-09-17 Nvidia Corporation Object Detection Using Skewed Polygons Suitable For Parking Space Detection
CN112668588A (zh) * 2020-12-29 2021-04-16 禾多科技(北京)有限公司 车位信息生成方法、装置、设备和计算机可读介质
CN113537105A (zh) * 2021-07-23 2021-10-22 北京经纬恒润科技股份有限公司 一种车位检测方法及装置
CN113901961A (zh) * 2021-12-02 2022-01-07 禾多科技(北京)有限公司 车位检测方法、装置、设备及存储介质
CN114511023A (zh) * 2022-01-27 2022-05-17 腾讯科技(深圳)有限公司 分类模型训练方法以及分类方法
CN114842446A (zh) * 2022-04-14 2022-08-02 合众新能源汽车有限公司 车位检测方法、装置及计算机存储介质
CN115346193A (zh) * 2022-08-23 2022-11-15 上海保隆领目汽车科技有限公司 一种车位检测方法及其跟踪方法、车位检测装置、车位检测设备及计算机可读存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200294310A1 (en) * 2019-03-16 2020-09-17 Nvidia Corporation Object Detection Using Skewed Polygons Suitable For Parking Space Detection
CN112668588A (zh) * 2020-12-29 2021-04-16 禾多科技(北京)有限公司 车位信息生成方法、装置、设备和计算机可读介质
CN113537105A (zh) * 2021-07-23 2021-10-22 北京经纬恒润科技股份有限公司 一种车位检测方法及装置
CN113901961A (zh) * 2021-12-02 2022-01-07 禾多科技(北京)有限公司 车位检测方法、装置、设备及存储介质
CN114511023A (zh) * 2022-01-27 2022-05-17 腾讯科技(深圳)有限公司 分类模型训练方法以及分类方法
CN114842446A (zh) * 2022-04-14 2022-08-02 合众新能源汽车有限公司 车位检测方法、装置及计算机存储介质
CN115346193A (zh) * 2022-08-23 2022-11-15 上海保隆领目汽车科技有限公司 一种车位检测方法及其跟踪方法、车位检测装置、车位检测设备及计算机可读存储介质

Also Published As

Publication number Publication date
CN118072545A (zh) 2024-05-24

Similar Documents

Publication Publication Date Title
US11941873B2 (en) Determining drivable free-space for autonomous vehicles
CN113168708B (zh) 车道线跟踪方法和装置
US11132780B2 (en) Target detection method, training method, electronic device, and computer-readable medium
CN110543814B (zh) 一种交通灯的识别方法及装置
CN112639883B (zh) 一种相对位姿标定方法及相关装置
WO2022104774A1 (zh) 目标检测方法和装置
CN110930323B (zh) 图像去反光的方法、装置
RU2759975C1 (ru) Операционное управление автономным транспортным средством с управлением восприятием визуальной салиентности
CN113916242A (zh) 车道定位方法和装置、存储介质及电子设备
CN113228135B (zh) 一种盲区图像获取方法及相关终端装置
CN113591518B (zh) 一种图像的处理方法、网络的训练方法以及相关设备
CN117440908A (zh) 用于自动驾驶***中基于图神经网络的行人动作预测的方法和***
CN112534483A (zh) 预测车辆驶出口的方法和装置
WO2022052765A1 (zh) 目标跟踪方法及装置
CN114693540A (zh) 一种图像处理方法、装置以及智能汽车
CN112810603B (zh) 定位方法和相关产品
Gajjar et al. A comprehensive study on lane detecting autonomous car using computer vision
CN114445490A (zh) 一种位姿确定方法及其相关设备
CN115164910B (zh) 行驶路径生成方法、装置、车辆、存储介质及芯片
WO2024109079A1 (zh) 一种车位开口检测方法及装置
WO2022266854A1 (zh) 一种车位检测方法及装置
WO2021159397A1 (zh) 车辆可行驶区域的检测方法以及检测装置
CN113859265A (zh) 一种驾驶过程中的提醒方法及设备
CN114821212A (zh) 交通标志物的识别方法、电子设备、车辆和存储介质
US20240101106A1 (en) Systems and methods for scene understanding