WO2020238284A1 - Parking space detection method and apparatus, and electronic device - Google Patents


Info

Publication number
WO2020238284A1
WO2020238284A1 (PCT/CN2020/075065)
Authority
WO
WIPO (PCT)
Prior art keywords
parking space
image
free
point information
corner point
Prior art date
Application number
PCT/CN2020/075065
Other languages
English (en)
Chinese (zh)
Inventor
王哲
丁明宇
石建萍
何宇帆
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910458754.4A external-priority patent/CN112016349B/zh
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority to JP2021531322A priority Critical patent/JP2022510329A/ja
Priority to KR1020217016722A priority patent/KR20210087070A/ko
Publication of WO2020238284A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/14 - Optical characteristics of the device performing the acquisition or of the illumination arrangements
    • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/0125 - Traffic data processing
    • G08G 1/0137 - Measuring and analysing of parameters relative to traffic conditions for specific applications
    • G08G 1/14 - Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G06T 2207/20081 - Training; Learning

Definitions

  • This application relates to artificial intelligence technology, and in particular to a parking space detection method, apparatus and electronic device.
  • A key task in intelligent driving is the detection of parking spaces.
  • The purpose of a parking space detector is to automatically find free parking spaces and park the vehicle in them.
  • the embodiments of the present application provide a parking space detection method, device and electronic equipment.
  • an embodiment of the present application provides a parking space detection method, the method including: obtaining a parking space image; inputting the parking space image into an instance segmentation neural network to obtain area information and/or corner point information of free parking spaces in the parking space image; and determining, based on the area information and/or corner point information of the free parking spaces in the parking space image, the detection result of the free parking spaces in the parking space image.
  • an embodiment of the present application provides a parking space detection device, the device including:
  • the first acquisition module is configured to acquire a parking space image;
  • a processing module configured to input the parking space image into a neural network to obtain area information and/or corner point information of free parking spaces in the parking space image;
  • the determining module is configured to determine the detection result of the free parking space in the parking space image based on the area information and/or corner point information of the free parking space in the parking space image.
  • an embodiment of the present application provides an electronic device, including: a memory configured to store a computer program; and a processor configured to execute the computer program to implement the parking space detection method according to any one of the first aspect.
  • an embodiment of the present application provides a computer storage medium in which a computer program is stored, and the computer program implements the parking space detection method described in any one of the first aspect when executed.
  • the parking space detection method, apparatus and electronic device provided by the embodiments of the present application obtain a parking space image and input it into the neural network to obtain the area information and/or corner point information of the free parking spaces in the parking space image; based on this information, the detection result of the free parking spaces in the parking space image is determined.
  • the detection method of the embodiments of the present application only needs to input the obtained parking space image into the neural network to obtain accurate area information and/or corner point information of the free parking spaces, without any prior image processing; the entire detection process is simple and fast, and determining the detection result from the area information and/or corner point information of the free parking spaces effectively improves the detection accuracy of free parking spaces.
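  • The overall flow above, obtaining an image, running it through the network, and deriving the detection result, can be sketched as follows; the `FreeSpace` container and the network's call interface are illustrative assumptions, not the actual structure used in the embodiments.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Corner = Tuple[float, float]
Region = Tuple[float, float, float, float]  # assumed (x1, y1, x2, y2) box

@dataclass
class FreeSpace:
    """Detection result for one free parking space."""
    region: Optional[Region]          # area information, if predicted
    corners: Optional[List[Corner]]   # corner point information, if predicted

def detect_free_spaces(image, network: Callable) -> List[FreeSpace]:
    """Claimed flow: parking space image -> neural network ->
    area and/or corner info -> detection result per free space."""
    regions, corners = network(image)  # hypothetical network interface
    return [FreeSpace(region=r, corners=c) for r, c in zip(regions, corners)]
```

  • Any real network would replace the `network` callable; the point is only that the detection result is assembled directly from the network outputs, with no image pre-processing step.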
  • FIG. 1 is the first schematic flowchart of a parking space detection method provided by an embodiment of this application;
  • Figure 2 is an example diagram of parking spaces;
  • FIG. 3 is the second schematic flowchart of a parking space detection method provided by an embodiment of the application.
  • Fig. 4a is an example diagram of a parking space training image used in an embodiment of the application.
  • Fig. 4b is an image after the key points of Fig. 4a are marked;
  • FIG. 5 is a training flowchart of the instance segmentation network involved in an embodiment of this application.
  • FIG. 6 is a schematic structural diagram of an instance segmentation network involved in an embodiment of this application.
  • FIG. 7 is a schematic diagram of a parking space detection result related to an embodiment of the application.
  • FIG. 8 is a first structural diagram of a parking space detection device provided by an embodiment of the application.
  • FIG. 9 is a second structural diagram of a parking space detection device provided by an embodiment of the application.
  • FIG. 10 is a third structural diagram of a parking space detection device provided by an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • the embodiments of the present application can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with many other general or special computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, central processing units (CPUs), graphics processing units (GPUs), vehicle systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, large computer systems, and distributed cloud computing environments including any of the above systems.
  • Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • the computer system/server can be implemented in a distributed cloud computing environment. In the distributed cloud computing environment, tasks are executed by remote processing equipment linked through a communication network. In a distributed cloud computing environment, program modules may be located on storage media of local or remote computing systems including storage devices.
  • the above-mentioned electronic device is installed on a vehicle and can be connected to a reversing system to assist the reversing system to park the vehicle in a free parking space.
  • the electronic device is connected to the driving assistance system, and the electronic device sends the obtained detection result of the free parking space to the driving assistance system, so that the driving assistance system can park the vehicle according to the detection result of the free parking space.
  • the electronic device can also be directly part or all of the driving assistance system, or part or all of the reversing system.
  • the electronic device can also be connected with other vehicle control systems according to actual needs, which is not limited in the embodiment of the present application.
  • FIG. 1 is the first schematic flowchart of a parking space detection method provided by an embodiment of the application. As shown in FIG. 1, the method of this embodiment may include the following steps.
  • the execution subject is an electronic device as an example for description.
  • the electronic device may be, but is not limited to, a smart phone, a computer, a vehicle-mounted system, and the like.
  • FIG. 2 is an example diagram of a parking space.
  • the electronic device of this embodiment may also have a camera, through which the driving environment of the vehicle can be photographed.
  • the camera can photograph the parking spaces around the road on which the vehicle is traveling to obtain a parking space image.
  • the parking space image is sent to the processor of the electronic device, so that the processor executes the method of this embodiment to obtain the detection result of the free parking space in the parking space image.
  • the electronic device of this embodiment may be connected to an external camera, and the driving environment of the vehicle is captured by the external camera, so as to obtain a parking space image.
  • the imaging component of the camera in the embodiment of the present application may be, but is not limited to, a complementary metal oxide semiconductor (Complementary Metal Oxide Semiconductor, CMOS) or a charge coupled device (Charge Coupled Device, CCD).
  • S102: Input the parking space image into a neural network to obtain area information and/or corner point information of a free parking space in the parking space image.
  • the neural networks in the embodiments of the present application include, but are not limited to, Back Propagation (BP) neural networks, Radial Basis Function (RBF) neural networks, perceptron neural networks, linear neural networks, feedback neural networks, etc.
  • the aforementioned neural network may implement instance segmentation, where instance segmentation refers not only to pixel-level classification, but also to distinguishing different instances within the same category. For example, if there are multiple free parking spaces A, B and C in the parking space image, instance segmentation can identify these three free parking spaces as distinct objects.
  • the area information of the free parking space in the parking space image and/or the corner point information of the free parking space can be detected through the neural network.
  • the neural network is trained in advance on a set of parking space images annotated with the area information and/or corner point information of free parking spaces, so that the neural network learns the ability to extract the area information and/or corner point information of free parking spaces; the parking space image shown in Figure 2 can then be input to the neural network, which processes the parking space image and outputs the area information and/or corner point information of the free parking spaces in it.
  • the area information of the free parking space may include information such as the position and size of the free parking space; the corner point information includes the position information of the corner points of the free parking space.
  • the corner point information of the free parking space may include corner point information of at least three corner points of the free parking space. Since the parking space is usually rectangular, the area information of the free parking space can be determined according to the corner point information of at least three corner points of the free parking space.
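  • The claim that three corners of a rectangular parking space determine its area can be made concrete with a small sketch; pixel coordinates and an ideal rectangle are assumed here.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def fourth_corner(a: Point, b: Point, c: Point) -> Point:
    """For consecutive rectangle corners a, b, c (b shared by both sides),
    the missing corner is d = a + c - b, because the diagonals of a
    parallelogram bisect each other."""
    return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])

def rect_area(a: Point, b: Point, c: Point) -> float:
    """Area of the rectangle: |ab| * |bc|, since sides ab and bc are
    perpendicular."""
    ab = math.hypot(a[0] - b[0], a[1] - b[1])
    bc = math.hypot(c[0] - b[0], c[1] - b[1])
    return ab * bc
```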
  • the detection method of the embodiment of the application only needs to input the obtained parking space image into the neural network to obtain accurate area information and/or corner point information of the free parking space, without any prior image processing; the entire detection process is simple and fast.
  • S103 Determine a detection result of the free parking space in the parking space image based on the area information and/or corner point information of the free parking space in the parking space image.
  • the area information of the free parking space in the parking space image can be used as the detection result of the free parking space.
  • the corner point information of the free parking space in the parking space image can be used as the detection result of the free parking space.
  • determining the detection result of the free parking space in the parking space image based on the area information and corner point information of the free parking space in the parking space image includes: merging the area information and the corner point information of the free parking space in the parking space image to determine the detection result of the free parking space in the parking space image.
  • the method of fusing the area information and corner point information of the free parking space to determine the detection result of the free parking space includes but is not limited to the following methods:
  • Method 1: Determine the parking space area information enclosed by the corner point information of the free parking space in the parking space image; fuse the area information of the free parking space in the parking space image with the parking space area information enclosed by the corner point information, and determine the detection result of the free parking space in the parking space image based on the fused area information.
  • the parking space area information enclosed by the corner point information of the free parking space is denoted parking space area information 1, and the area information of the free parking space in the parking space image is denoted area information 2; the two are fused to obtain area information 3.
  • for example, the average of parking space area information 1 and area information 2 can be used as area information 3.
  • the fused area information 3 is used as the detection result of the free parking space in the parking space image.
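  • One way to read "the average value ... is used as the area information 3" is an element-wise average of the two box descriptions; the (x1, y1, x2, y2) box format is an assumption made for this sketch.

```python
from typing import Tuple

Box = Tuple[float, float, float, float]  # assumed (x1, y1, x2, y2)

def fuse_regions(area_info_1: Box, area_info_2: Box) -> Box:
    """Fuse parking space area information 1 (enclosed by the corner
    points) with area information 2 (from the segmentation) by averaging
    element-wise, yielding area information 3."""
    return tuple((a + b) / 2.0 for a, b in zip(area_info_1, area_info_2))
```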
  • Method 2: Determine the corner point information of the free parking space in the parking space image; fuse it with the corner points derived from the area information, and determine the detection result of the free parking space in the parking space image based on the fused corner point information.
  • the corner points corresponding to the area information of the free parking space in the parking space image are denoted corner point a1, corner point a2, corner point a3 and corner point a4, and the corner points corresponding to the corner point information of the free parking space in the parking space image are denoted corner point b1, corner point b2, corner point b3 and corner point b4, where corner points a1 to a4 correspond one-to-one with corner points b1 to b4.
  • each pair of corresponding corner points can be merged into one corner point; for example, corner point a1 and corner point b1 are merged into corner point ab1, so that new corner point information is obtained.
  • the new corner point information is used as the detection result of the free parking space in the parking space image.
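  • "Merged into one corner point" is not further specified; a natural reading is the midpoint of each corresponding pair (a_i, b_i), sketched below under that assumption.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def fuse_corners(corners_a: List[Point], corners_b: List[Point]) -> List[Point]:
    """Merge each corresponding pair (a_i, b_i) into one corner ab_i by
    taking the midpoint; the two lists are assumed to be in matching order."""
    assert len(corners_a) == len(corners_b)
    return [((ax + bx) / 2.0, (ay + by) / 2.0)
            for (ax, ay), (bx, by) in zip(corners_a, corners_b)]
```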
  • the detection result of the free parking space in the parking space image can be determined by fusing the area information of the free parking space in the parking space image and the corner point information, which can improve the detection accuracy of the free parking space.
  • the method of the embodiment of the present application may execute the foregoing steps only when the vehicle is looking for a parking space.
  • the smart driving system controls the electronic equipment to work.
  • the processor in the electronic device controls the camera to capture images of parking spaces around the vehicle.
  • or, the electronic device sends a shooting command to the external camera, so that the camera sends the captured image of the parking spaces around the vehicle to the electronic device.
  • the electronic device processes the parking space image to detect the detection result of the free parking space in the parking space image.
  • the electronic device inputs the obtained parking space image into the neural network and, through the processing of the neural network, outputs the area information and/or corner point information of the free parking space in the parking space image; it then determines the detection result of the free parking space based on this information, thereby realizing accurate detection of the free parking space.
  • the electronic device is also connected to the intelligent driving system and can send the detection result of the free parking space to it; the intelligent driving system then controls the vehicle to park in the free parking space according to the detection result.
  • in this embodiment, the parking space image is obtained and input into the neural network to obtain the area information and/or corner point information of the free parking space in the parking space image; based on this information, the detection result of the free parking space in the parking space image is determined.
  • the detection method of the embodiment of the present application only needs to input the obtained parking space image into the neural network to obtain accurate area information and/or corner point information of the free parking space, without any prior image processing; the entire detection process is simple and fast, and determining the detection result from the area information and/or corner point information effectively improves the detection accuracy of free parking spaces.
  • before S102 (inputting the parking space image into the neural network to obtain the area information and/or corner point information of the free parking space in the parking space image), the method of the embodiment of the present application also includes:
  • S102a: Expand the peripheral edges of the parking space image outward by a preset value.
  • the preset value is less than or equal to half the length of a parking space.
  • Figure 4a is a parking space image acquired by an electronic device.
  • the parking space image includes two free parking spaces, free parking space 1 and free parking space 2, where only a part of free parking space 2 is included in the parking space image.
  • the peripheral edges of the parking space image shown in FIG. 4a are expanded outward by the preset value, as shown by the black border in FIG. 4b, giving the result shown in FIG. 4b.
  • the viewing angle range of the parking space image can be increased, and free parking spaces partially located outside the parking space image can be detected, which further increases the accuracy of parking space detection.
  • the above step of inputting the parking space image into the neural network to obtain the area information and/or corner point information of the free parking space in the parking space image can be replaced by S103a:
  • S103a: Input the expanded parking space image into the neural network, and obtain the area information and/or corner point information of the free parking spaces in the parking space image.
  • inputting Fig. 4b to the neural network can detect the area information of free parking space 1 and free parking space 2 in Fig. 4b, and/or the corner point information of free parking space 1 and free parking space 2 in Fig. 4b.
  • in this embodiment, the periphery of the parking space image is expanded outward by the preset value, and the expanded parking space image is then input into the neural network, so that free parking spaces partially outside the parking space image can be detected, which further improves the accuracy and practicability of parking space detection.
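  • The edge expansion of S102a can be done with ordinary array padding; zero (black) fill is an assumption, since the embodiments do not specify the fill value, and `pad` stands for the preset value in pixels (at most half the parking-space length).

```python
import numpy as np

def expand_borders(image: np.ndarray, pad: int) -> np.ndarray:
    """Expand the peripheral edges of an H x W x C parking space image
    outward by `pad` pixels on every side before feeding it to the
    neural network."""
    return np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
```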
  • FIG. 3 is a schematic diagram of the second flow of the parking space detection method provided by the embodiment of the application.
  • the method of the embodiment of this application also includes a process of training the neural network. As shown in FIG. 3, the training process includes:
  • the multiple parking space training images may be obtained by the electronic device from a database, or may have been captured by the electronic device previously.
  • the embodiment of the present application does not limit the specific process of obtaining multiple parking space training images by the electronic device.
  • each parking space image includes one or more free parking spaces.
  • FIG. 4b is a parking space training image, which includes free parking space 1 and free parking space 2.
  • the aforementioned parking space training image may be an image collected using a wide-angle camera, and the image has a certain degree of distortion.
  • after training, the neural network can make predictions on parking space images taken from different perspectives, which reduces the shooting requirements for parking space images while ensuring prediction accuracy.
  • the key points of the free parking space may include points on the edge of the parking space, the corner points of the free parking space, or the intersection of two diagonals of the free parking space. According to these key points, the area and location of the free parking space can be accurately obtained.
  • the above-mentioned parking space training image includes tagging information of key point information of the free parking space.
  • the key points marked for free parking space 1 are: key point 1, key point 2, key point 3 and key point 4.
  • Figure 4b is a way of marking the key points of free parking space 1.
  • the key points of free parking space 1 include, but are not limited to, the above four key points; the specific number and selection of the key points of free parking space 1 are determined according to actual needs, which is not limited in the embodiment of the present application.
  • a plurality of parking space training images including the annotation information of the key point information of the free parking spaces are input into the neural network, and the network parameters of the neural network are adjusted based on the difference between the detection result output by the neural network and the annotation information of the key point information of the free parking spaces, to complete the training of the neural network.
  • the peripheral edges of the parking space training images used are expanded outward by a preset value.
  • the aforementioned preset value is less than or equal to half the length of the parking space.
  • the peripheral edges of the parking space training image shown in Figure 4b are expanded outward by a preset value, which is less than or equal to half the length of a parking space, as shown by the black border in Figure 4b.
  • the viewing angle range of the parking space training image can be increased, and the purpose is to enable the trained neural network to detect the free parking space partially outside the captured parking space image during subsequent parking space detection.
  • the free parking space 2 in Figure 4b can then be displayed completely, so that the key points of free parking space 2 can be marked; for example, the marked key points of free parking space 2 are: key point 11, key point 12, key point 13, key point 14, key point 15 and key point 16, among which key point 14 is located in the expansion area.
  • Figure 4b is only a possible way of marking the key point information of the free parking space 2.
  • the key points of free parking space 2 include, but are not limited to, the above six key points; the specific number and selection of the key points of free parking space 2 are determined according to actual needs, which is not limited in the embodiment of the present application.
  • the number and selection of the key points of free parking space 1 and free parking space 2 can be the same or different, as long as it is ensured that the key points of free parking space 1, connected in turn, enclose the area of free parking space 1, and the key points of free parking space 2, connected in turn, enclose the area of free parking space 2.
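  • The requirement that the key points, connected in turn, enclose the parking-space area can be checked numerically with the shoelace formula; a degenerate (self-intersecting or zero-area) labelling would show up as an implausible enclosed area.

```python
from typing import List, Tuple

def enclosed_area(points: List[Tuple[float, float]]) -> float:
    """Shoelace formula: area of the polygon obtained by connecting the
    key points in order and closing back to the first point."""
    s = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```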
  • the key point information of the free parking space in the parking space training image includes at least one corner point information of the free parking space, wherein the corner point corresponding to each corner point information is the intersection of the two sidelines of the free parking space.
  • in this embodiment, the peripheral edges of the parking space training images are expanded outward by a preset value to complete the incomplete free parking spaces, so that the parking space training images include annotation information of the key point information of the incomplete free parking spaces.
  • the parking space training images are used to train the neural network, enabling the trained neural network to detect free parking spaces that are not completely contained in the parking space image, which improves the comprehensiveness and accuracy of parking space detection.
  • the foregoing S202 uses the multiple parking space training images to train the neural network, which may specifically include:
  • S301: Obtain the area information enclosed by the key point information of the free parking space in the parking space training image, and the corner point information of the free parking space in the parking space training image.
  • the 6 key points of free parking space 2 include 4 corner points; for example, the four corner points are labelled 1 and the other key points are labelled 0.
  • the six key points of free parking space 2 are assumed to be: {"kpts": [[1346.2850971922246, 517.6241900647948, 1.0], [1225.010799136069, 591.1447084233262, 1.0], [1280.6479481641468, 666.6522678185745, 0.0], [1300.5183585313175, 728.2505399568034, 1.0], [1339.2656587473002, 707.3866090712743, 0.0], [1431.6630669546437, 630.8855291576674, 1.0]]}.
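  • Under the labelling convention in the example (third element 1.0 for a corner point, 0.0 otherwise), the corner points can be separated from the remaining key points when reading an annotation; the JSON wrapper shape is taken directly from the example above.

```python
import json
from typing import List, Tuple

def split_keypoints(annotation: str):
    """Parse a {"kpts": [[x, y, flag], ...]} annotation and split the
    corner points (flag == 1.0) from the other key points."""
    kpts = json.loads(annotation)["kpts"]
    corners: List[Tuple[float, float]] = [(x, y) for x, y, flag in kpts if flag == 1.0]
    others: List[Tuple[float, float]] = [(x, y) for x, y, flag in kpts if flag == 0.0]
    return corners, others
```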
  • the area information of the free parking space 2 can be used as the true value of the area information in the process of detecting the parking space.
  • the area information enclosed by the key point information of the free parking space in each of the multiple parking space training images is obtained, forming the area ground truth of the free parking space.
  • the corner point information of the free parking space in each of the multiple parking space training images forms the corner point ground truth of the free parking space.
  • FIG. 6 is a schematic diagram of a structure of a neural network involved in an embodiment of this application.
  • the neural network involved in an embodiment of this application includes but is not limited to the neural network shown in FIG. 6.
  • the neural network may include an instance segmentation layer, and the instance segmentation layer is used to obtain area information of free parking spaces.
  • the neural network in this embodiment of the application is a neural network with a Mask R-CNN structure; as shown in FIG. 6, the neural network also includes a Feature Pyramid Network (FPN) detection base and a region-based convolutional neural network (Region CNN, RCNN) position regression layer, where the output of the FPN detection base is connected to the input of the RCNN position regression layer, and the output of the RCNN position regression layer is connected to the input of the instance segmentation layer.
  • the FPN detection module is used to detect the detection frame of a free parking space from the parking space training image, for example, the rectangular frame shown in FIG. 6.
  • the detected detection frame of the free parking space is input into the RCNN position regression layer, and the RCNN position regression layer fine-tunes the detection frame of the free parking space.
  • the RCNN position regression layer inputs the fine-tuned detection frame of the free parking space into the instance segmentation layer, and the instance segmentation layer segments out the area information of the free parking space, for example, the white area shown in FIG. 7.
  • the foregoing instance segmentation layer is formed by stacking a series of convolutional layers and/or pooling layers in a preset order.
  • the neural network may also include a key point detection layer, which is used to obtain corner point information of free parking spaces.
  • the input of the key point detection layer is connected to the output of the RCNN position regression layer.
  • the RCNN position regression layer inputs the fine-tuned detection frame of the free parking space into the key point detection layer, and the key point detection layer outputs the corner point information of the free parking space, for example, the black corner points shown in FIG. 7. It should be noted that one edge of two adjacent free parking spaces in FIG. 7 overlaps, so that two corner points of the two adjacent free parking spaces coincide.
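the pipeline described above can be sketched as a sequence of stages; this is a structural illustration only, not the patented implementation, and every stage function below (`fpn_detect`, `rcnn_refine`, `segment_area`, `detect_corners`) is a hypothetical stand-in:

```python
# Structural sketch of the Mask-RCNN-style pipeline: an FPN detection stage
# proposes free-parking-space boxes, an RCNN position regression stage
# refines them, and two parallel heads produce area masks and corner points.

def fpn_detect(image):
    # Stand-in: return coarse detection boxes (x, y, w, h) for free spaces.
    return [(100, 50, 200, 400)]

def rcnn_refine(image, boxes):
    # Stand-in: fine-tune each detection box slightly.
    return [(x + 2, y + 2, w - 4, h - 4) for x, y, w, h in boxes]

def segment_area(image, box):
    # Stand-in instance-segmentation head: return a mask descriptor.
    x, y, w, h = box
    return {"mask_box": (x, y, w, h)}

def detect_corners(image, box):
    # Stand-in key-point head: return the four corner points of the box.
    x, y, w, h = box
    return [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]

def detect_free_spaces(image):
    boxes = rcnn_refine(image, fpn_detect(image))
    return [{"area": segment_area(image, b),
             "corners": detect_corners(image, b)}
            for b in boxes]

result = detect_free_spaces(image=None)
```

the key structural point is that both heads consume the same refined detection frame, so area information and corner information stay spatially consistent.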
  • the area information and corner point information of the free parking space can thus be predicted by the neural network. The predicted area information of the free parking space is then compared with the area information formed by the key point information of the free parking space in the parking space training image obtained in the above steps, the predicted corner point information of the free parking space is compared with the corner point information of the free parking space in the parking space training image obtained in the above steps, and the parameters of the neural network are adjusted accordingly. The above steps are repeated until the number of training iterations of the neural network reaches a preset number, or the prediction error of the neural network reaches a preset error value.
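the stopping criteria above (a preset iteration count, or the error reaching a preset value) can be illustrated with a minimal loop; `DummyNetwork` and its halving error are placeholders, not the patented training procedure:

```python
# Illustrative training loop: stop when a preset iteration count is reached
# or the prediction error falls to a preset error value.

class DummyNetwork:
    """Stand-in network whose prediction error halves on every update."""
    def __init__(self):
        self.error = 1.0

    def step(self, images, ground_truth):
        # Placeholder for: predict area/corner info, compare with the
        # ground-truth values, adjust parameters, report prediction error.
        self.error *= 0.5
        return self.error

def train(network, images, ground_truth,
          max_iterations=10000, target_error=1e-3):
    iterations = 0
    for iterations in range(1, max_iterations + 1):
        error = network.step(images, ground_truth)
        if error <= target_error:
            break
    return iterations

iterations_used = train(DummyNetwork(), images=None, ground_truth=None)
```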
  • the method of the embodiment of the present application obtains the corner point information of the free parking space in the parking space training image and the area information formed by the key point information of the free parking space in the parking space training image, and uses the parking space training image together with that corner point information and area information to train the neural network, so that the trained neural network can accurately predict the area information and/or corner point information of free parking spaces.
  • Any parking space detection method provided in the embodiments of the present application can be executed by any suitable device with data processing capabilities, including but not limited to: terminal devices or servers.
  • any parking space detection method provided in the embodiment of the present application may be executed by a processor.
  • the processor executes any parking space detection method mentioned in the embodiments of the present application by calling corresponding instructions stored in a memory. This is not repeated below.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • the foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 8 is a first structural diagram of a parking space detection device provided by an embodiment of the application. As shown in FIG. 8, the parking space detection device 100 of this embodiment may include:
  • the first acquisition module 110 is used to acquire parking space images
  • the processing module 120 is configured to input the parking space image into a neural network to obtain area information and/or corner point information of an idle parking space in the parking space image;
  • the determining module 130 is configured to determine the detection result of the free parking space in the parking space image based on the area information and/or corner point information of the free parking space in the parking space image.
  • the parking space detection device of the embodiment of the present application can be used to implement the technical solutions of the method embodiment shown above, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the determining module 130 is configured to merge the area information and corner point information of the free parking space in the parking space image to determine the detection result of the free parking space in the parking space image .
  • the determining module 130 is configured to form parking space area information from the corner point information of the free parking space in the parking space image, and to fuse the area information of the free parking space in the parking space image with the parking space area information formed from the corner point information, so as to determine the detection result of the free parking space in the parking space image.
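the fusion step above can be illustrated with a small mask-based sketch; intersecting the two sources is one plausible fusion rule, chosen here for illustration (the text does not fix a specific operator), and the axis-aligned corner mask is a simplification:

```python
import numpy as np

# Sketch: build a parking-space mask from predicted corner points, then
# fuse it with the predicted area mask by intersection, keeping only the
# pixels on which both information sources agree.

def mask_from_corners(corners, shape):
    """Axis-aligned fill of the corner bounding region (simplified)."""
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    mask = np.zeros(shape, dtype=bool)
    mask[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = True
    return mask

def fuse(area_mask, corners):
    corner_mask = mask_from_corners(corners, area_mask.shape)
    return area_mask & corner_mask

# Toy example: a 10x10 image with partially disagreeing sources.
area = np.zeros((10, 10), dtype=bool)
area[2:8, 2:8] = True
fused = fuse(area, [(3, 3), (9, 3), (9, 9), (3, 9)])
```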
  • FIG. 9 is a schematic structural diagram of a parking space detection device provided by an embodiment of the application.
  • the parking space detection device 100 further includes an expansion module 140,
  • the expansion module 140 is configured to expand the peripheral edges of the parking space image outward by a preset value, where the preset value is less than or equal to half the length of a parking space;
  • the processing module 120 is configured to input the expanded parking space image into the neural network to obtain area information and/or corner point information of free parking spaces in the parking space image.
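the expansion step can be sketched as image padding; edge replication is one possible padding choice (an assumption for illustration), and the image size and preset value below are arbitrary:

```python
import numpy as np

# Sketch: expand the peripheral edges of a parking-space image outward by a
# preset number of pixels before feeding it to the neural network, so that
# parking spaces truncated at the image boundary are easier to detect. Per
# the text, the preset value should not exceed half a parking-space length.

def expand_image(image, preset_value):
    pad = ((preset_value, preset_value), (preset_value, preset_value))
    return np.pad(image, pad, mode="edge")  # replicate border pixels

image = np.ones((480, 640), dtype=np.uint8)   # toy grayscale image
expanded = expand_image(image, preset_value=60)
```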
  • the parking space detection device of the embodiment of the present application can be used to implement the technical solutions of the method embodiment shown above, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 10 is a third structural diagram of a parking space detection device provided by an embodiment of the application, and the parking space detection device 100 includes:
  • the second acquisition module 150 is used to acquire multiple parking space training images
  • the training module 160 is configured to use the multiple parking space training images to train the neural network, wherein the parking space training image includes annotation information for key point information of the free parking space.
  • the peripheral edges of the parking space training image are expanded outward by a preset value, where the preset value is less than or equal to half the length of a parking space.
  • the key point information of the free parking space in the parking space training image includes at least one corner point information of the free parking space.
  • the training module 160 is configured to obtain the corner point information of the free parking space in the parking space training image and the area information formed by the key point information of the free parking space in the parking space training image, and to train the neural network using the parking space training image together with the corner point information and area information of the free parking space in the parking space training image.
  • the parking space training image is an image taken by a wide-angle camera.
  • the parking space detection device of the embodiment of the present application can be used to implement the technical solutions of the method embodiment shown above, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of this application. As shown in FIG. 11, the electronic device 30 of this embodiment includes:
  • the memory 310 is used to store computer programs
  • the processor 320 is configured to execute the computer program to implement the above-mentioned parking space detection method.
  • its implementation principles and technical effects are similar to those of the foregoing method embodiments and will not be repeated here.
  • the embodiment of the present application also provides a computer storage medium, which is a volatile or non-volatile computer storage medium.
  • the medium is used to store the above-mentioned computer software instructions for parking space detection; when the instructions run on a computer, the computer can execute the various possible parking space detection methods in the above method embodiments.
  • when the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part.
  • the computer instructions can be stored in a computer storage medium, or transmitted from one computer storage medium to another; for example, they may be transmitted wirelessly (such as by cellular communication, infrared, short-range wireless, or microwave) to another website, computer, server, or data center.
  • the computer storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a Digital Versatile Disc (DVD)), or a semiconductor medium (for example, a Solid State Drive (SSD)).
  • the foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, they can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from a website, computer, server, or data center.
  • “at least one” refers to one or more, and “multiple” refers to two or more.
  • “and/or” describes an association relationship between associated objects and indicates three possible relationships; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship; in the formula, the character “/” indicates that the associated objects before and after are in a “division” relationship.
  • “at least one of the following items” or similar expressions refers to any combination of these items, including any combination of single items or plural items.
  • for example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where each of a, b, and c can be singular or plural.
  • the size of the sequence numbers of the aforementioned processes does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation processes of the embodiments of the present disclosure.

Abstract

The present invention relates to a parking space detection method and apparatus, and an electronic device. The method comprises: acquiring a parking space image (S101); inputting the parking space image into a neural network to obtain area information and/or corner point information of a free parking space in the parking space image (S102); and determining, on the basis of the area information and/or corner point information of the free parking space in the parking space image, a detection result of the free parking space in the parking space image (S103).
PCT/CN2020/075065 2019-05-29 2020-02-13 Procédé et appareil de détection de place de stationnement, et dispositif électronique WO2020238284A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021531322A JP2022510329A (ja) 2019-05-29 2020-02-13 駐車スペース検出方法、装置及び電子機器
KR1020217016722A KR20210087070A (ko) 2019-05-29 2020-02-13 주차 공간의 검출 방법, 장치 및 전자 기기

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910458754.4A CN112016349B (zh) 2019-05-29 停车位的检测方法、装置与电子设备
CN201910458754.4 2019-05-29

Publications (1)

Publication Number Publication Date
WO2020238284A1 true WO2020238284A1 (fr) 2020-12-03

Family

ID=73501819

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075065 WO2020238284A1 (fr) 2019-05-29 2020-02-13 Procédé et appareil de détection de place de stationnement, et dispositif électronique

Country Status (3)

Country Link
JP (1) JP2022510329A (fr)
KR (1) KR20210087070A (fr)
WO (1) WO2020238284A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170177956A1 (en) * 2015-12-18 2017-06-22 Fujitsu Limited Detection apparatus and method for parking space, and image processing device
CN107424116A (zh) * 2017-07-03 2017-12-01 浙江零跑科技有限公司 基于侧环视相机的泊车位检测方法
CN108281041A (zh) * 2018-03-05 2018-07-13 东南大学 一种基于超声波和视觉传感器相融合的泊车车位检测方法
CN109086708A (zh) * 2018-07-25 2018-12-25 深圳大学 一种基于深度学习的停车位检测方法及***
CN109685000A (zh) * 2018-12-21 2019-04-26 广州小鹏汽车科技有限公司 一种基于视觉的车位检测方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6761708B2 (ja) * 2016-09-05 2020-09-30 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 駐車位置特定方法、駐車位置学習方法、駐車位置特定システム、駐車位置学習装置およびプログラム
JP6813178B2 (ja) * 2016-12-07 2021-01-13 学校法人常翔学園 生体画像処理装置、出力画像製造方法、学習結果製造方法、及びプログラム
JP6887154B2 (ja) * 2017-06-08 2021-06-16 国立大学法人 筑波大学 画像処理システム、評価モデル構築方法、画像処理方法及びプログラム
JP2019096072A (ja) * 2017-11-22 2019-06-20 株式会社東芝 物体検出装置、物体検出方法およびプログラム

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560689A (zh) * 2020-12-17 2021-03-26 广州小鹏自动驾驶科技有限公司 一种车位检测方法、装置、电子设备和存储介质
CN112560689B (zh) * 2020-12-17 2024-04-19 广州小鹏自动驾驶科技有限公司 一种车位检测方法、装置、电子设备和存储介质
CN115131762A (zh) * 2021-03-18 2022-09-30 广州汽车集团股份有限公司 一种车辆泊车方法、***及计算机可读存储介质
CN113674199A (zh) * 2021-07-06 2021-11-19 浙江大华技术股份有限公司 停车位检测方法、电子设备及存储介质
CN113822156A (zh) * 2021-08-13 2021-12-21 北京易航远智科技有限公司 车位检测处理方法、装置、电子设备及存储介质
CN113903188A (zh) * 2021-08-17 2022-01-07 浙江大华技术股份有限公司 车位检测方法、电子设备及计算机可读存储介质
CN113408509A (zh) * 2021-08-20 2021-09-17 智道网联科技(北京)有限公司 用于自动驾驶的标识牌识别方法及装置
CN113408509B (zh) * 2021-08-20 2021-11-09 智道网联科技(北京)有限公司 用于自动驾驶的标识牌识别方法及装置
CN113870613A (zh) * 2021-10-14 2021-12-31 中国第一汽车股份有限公司 停车位确定方法、装置、电子设备及存储介质
CN113870613B (zh) * 2021-10-14 2022-09-30 中国第一汽车股份有限公司 停车位确定方法、装置、电子设备及存储介质
CN114359231A (zh) * 2022-01-06 2022-04-15 腾讯科技(深圳)有限公司 一种停车位的检测方法、装置、设备及存储介质

Also Published As

Publication number Publication date
JP2022510329A (ja) 2022-01-26
CN112016349A (zh) 2020-12-01
KR20210087070A (ko) 2021-07-09

Similar Documents

Publication Publication Date Title
WO2020238284A1 (fr) Procédé et appareil de détection de place de stationnement, et dispositif électronique
US9697416B2 (en) Object detection using cascaded convolutional neural networks
WO2018120027A1 (fr) Procédé et appareil de détection d'obstacles
US9734427B2 (en) Surveillance systems and image processing methods thereof
CN110598512B (zh) 一种车位检测方法及装置
WO2020134528A1 (fr) Procédé de détection cible et produit associé
Li et al. Camera localization for augmented reality and indoor positioning: a vision-based 3D feature database approach
WO2017221643A1 (fr) Dispositif de traitement d'image, système de traitement d'image, procédé de traitement d'image et programme
Mei et al. Waymo open dataset: Panoramic video panoptic segmentation
Liang et al. Image-based positioning of mobile devices in indoor environments
CN108875903B (zh) 图像检测的方法、装置、***及计算机存储介质
US10122912B2 (en) Device and method for detecting regions in an image
CN106845338B (zh) 视频流中行人检测方法与***
CN111091597B (zh) 确定图像位姿变换的方法、装置及存储介质
CN107749069B (zh) 图像处理方法、电子设备和图像处理***
CN111259710B (zh) 采用停车位框线、端点的停车位结构检测模型训练方法
CN109447022B (zh) 一种镜头类型识别方法及装置
TWI554107B (zh) 可改變縮放比例的影像調整方法及其攝影機與影像處理系統
CN114663871A (zh) 图像识别方法、训练方法、装置、***及存储介质
CN111583417B (zh) 一种图像语义和场景几何联合约束的室内vr场景构建的方法、装置、电子设备和介质
WO2023179520A1 (fr) Procédé et appareil d'imagerie, et capteur d'image, dispositif d'imagerie et dispositif électronique
CN111260955B (zh) 采用停车位框线、端点的停车位检测***及方法
Osuna-Coutiño et al. Structure extraction in urbanized aerial images from a single view using a CNN-based approach
US20230089845A1 (en) Visual Localization Method and Apparatus
US11551379B2 (en) Learning template representation libraries

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20812792

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021531322

Country of ref document: JP

Kind code of ref document: A

Ref document number: 20217016722

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20812792

Country of ref document: EP

Kind code of ref document: A1