CN111746521A - Parking route planning method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111746521A
Authority
CN
China
Prior art keywords
parking
target
vehicle
vacant
parking space
Prior art date
Legal status (assumption, not a legal conclusion)
Granted
Application number
CN202010605510.7A
Other languages
Chinese (zh)
Other versions
CN111746521B (en)
Inventor
王秀田
张殿坤
丁坤
杜金枝
周俊杰
Current Assignee (listing may be inaccurate)
Dazhuo Intelligent Technology Co ltd
Dazhuo Quxing Intelligent Technology Shanghai Co ltd
Original Assignee
Chery Automobile Co Ltd
Wuhu Lion Automotive Technologies Co Ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd, Wuhu Lion Automotive Technologies Co Ltd
Priority to CN202010605510.7A
Publication of CN111746521A
Application granted
Publication of CN111746521B
Legal status: Active
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06 Automatic manoeuvring for parking
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models, related to ambient conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a parking route planning method, device, equipment and storage medium, and belongs to the technical field of lane detection. The method includes: acquiring a surrounding view of the vehicle; calling a parking space recognition model to recognize vacant parking spaces in the surrounding view; and planning a parking route for the vehicle through a parking management system according to the position information of the vacant parking spaces. In the technical solution provided by the embodiments of the application, the vacant parking spaces in the surrounding view of the vehicle are recognized by the parking space recognition model, which avoids the long time a user would otherwise spend identifying parking spaces from the surrounding view, keeps the operation simple and convenient, and improves the accuracy and timeliness of parking space recognition. The parking management system plans the route according to the position information of the vacant parking spaces, so the parking route is planned automatically, the probability of traffic accidents caused by planning errors due to the user's subjective factors is reduced, and the safety of the vehicle during parking is ensured.

Description

Parking route planning method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of lane detection technologies, and in particular, to a method, an apparatus, a device, and a storage medium for planning a parking route.
Background
At present, when a driver parks a vehicle, scraping accidents easily occur during the parking process due to both subjective and objective factors, and parking assistance systems based on visual sensors have emerged accordingly.
In the related art, four ultra-wide-angle high-definition cameras are mounted at the front, rear, left and right of the vehicle body; images around the vehicle are collected by the cameras, stretched and flattened, and then combined into a complete surrounding view of the vehicle by seamless stitching, which can intuitively show the driver the position of the vehicle and its surroundings.
However, in the related art, due to the influence of objective factors such as the environment, the images obtained by the cameras contain large errors, so the driver cannot obtain an accurate view of the periphery of the vehicle.
Disclosure of Invention
The embodiment of the application provides a parking route planning method, a parking route planning device, parking route planning equipment and a parking route planning storage medium, which can improve the accuracy of a surrounding view of a vehicle and further ensure the accuracy of route planning. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a method for planning a parking route, where the method includes:
acquiring a peripheral view of a vehicle, wherein the peripheral view is a peripheral environment image of the vehicle;
calling a parking space identification model to identify vacant parking spaces in the surrounding view;
and planning a parking route of the vehicle according to the position information of the vacant parking spaces through a parking management system.
In another aspect, an embodiment of the present application provides a parking route planning apparatus, where the apparatus includes:
the system comprises an image acquisition module, a display module and a display module, wherein the image acquisition module is used for acquiring a peripheral view of a vehicle, and the peripheral view is a peripheral environment image of the vehicle;
the model calling module is used for calling a parking space identification model to identify the vacant parking spaces in the surrounding view;
and the route planning module is used for planning the parking route of the vehicle according to the position information of the vacant parking spaces through the parking management system.
In still another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above-mentioned method for planning a parking route.
In a further aspect, the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program is loaded and executed by a processor to implement the above-mentioned method for planning a parking route.
In a further aspect, a computer program product is provided, which, when run on a computer device, causes the computer device to carry out the above-mentioned method of planning a parking route.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the vacant parking spaces in the surrounding view of the vehicle are identified through the parking space identification model, so that the problem that a user has long time for identifying the parking spaces through the surrounding view is avoided, the operation is simple and convenient, and the accuracy and timeliness of parking space identification are improved; the parking management system plans the route according to the position information of the vacant parking spaces, so that the automatic planning of the parking route is realized, the probability of traffic accidents caused by the misplanning due to the reason of the user supervisor is reduced, and the safety of the vehicle during parking is ensured.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic diagram of a vehicle routing system provided by one embodiment of the present application;
FIG. 2 is a flow chart of a method for planning a parking route provided in one embodiment of the present application;
FIG. 3 schematically illustrates a marked surrounding view;
FIG. 4 is a flow chart of a method for planning a parking route according to another embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a structure of a parking space recognition model;
FIG. 6 illustrates a schematic diagram of a detection grid for determining recognition targets;
fig. 7 is a block diagram of a parking route planning apparatus according to an embodiment of the present application;
fig. 8 is a block diagram of a parking route planning apparatus according to another embodiment of the present application;
fig. 9 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of a vehicle route planning system according to an embodiment of the present application is shown. The vehicle route planning system may include: a computer device 10 and a parking management device 20.
The computer device 10 is used for processing an environment image of the surroundings of the vehicle, which is an image acquired while the vehicle is stationary or in motion. Optionally, the computer device 10 acquires the environment image through the camera 30. In a possible embodiment, the camera 30 is a vehicle-mounted camera that acquires an environment image around the vehicle in real time and sends it to the computer device 10 corresponding to the vehicle; in another possible embodiment, the camera 30 is an environment camera, that is, a camera disposed in the environment, which acquires an environment image of its surroundings in real time, determines the vehicle in the environment image, and sends the environment image to the computer device 10 corresponding to the vehicle. It should be noted that the computer device 10 may be disposed in the corresponding vehicle, such as a vehicle-mounted terminal, or may communicate with the vehicle through a network, such as a server corresponding to the vehicle-mounted terminal.
In the embodiment of the present application, after acquiring the environment image of the periphery of the vehicle, the computer device 10 may process the environment image, determine the vacant parking spaces in the environment image, and send the location information of the vacant parking spaces to the parking management device 20. Optionally, a parking space recognition model is provided in the computer device 10 to recognize the vacant parking spaces; the parking space recognition model is obtained by training on a large number of labeled data sets, which include images of vacant parking spaces in various scenes. It should be noted that the parking space recognition model may be trained by the computer device 10 or by another computer device, which is not limited in the embodiments of the application.
The parking management apparatus 20 is used to plan a parking route of the vehicle. Optionally, after acquiring the location information of the vacant parking spaces, the parking management device 20 may plan a suitable parking route through the parking management system according to the location information of the vacant parking spaces. In one possible embodiment, the parking management device 20 may plan a parking route according to the vacant parking space selected by the user; in another possible implementation, the parking management device 20 may automatically select a vacant parking space meeting a condition to plan a parking route according to the location information of the vacant parking spaces, where the condition may be that the space around the parking space is large and/or that the parking space is close to the vehicle, which is not limited in the embodiments of the present application.
Alternatively, the computer device 10 and the parking management device 20 may be provided in the same or different computer devices, which is not limited in the embodiment of the present application.
Referring to fig. 2, a flowchart of a method for planning a parking route according to an embodiment of the present application is shown. The method may be applied to the computer device 10 of the vehicle route planning system shown in fig. 1, for example, the executing body of each step may be the computer device 10. The method comprises the following steps (201-203):
step 201, a peripheral view of a vehicle is acquired.
The surrounding view is an image of the surroundings of the vehicle; in the present embodiment, it is an image of the surroundings centered on the vehicle. Optionally, the surrounding view is an image of the surrounding environment of the vehicle during driving. In one possible implementation, the computer device acquires the surrounding view of the vehicle in real time during driving, processes it, and detects and identifies the vacant parking spaces in it. In another possible embodiment, the computer device acquires the surrounding view when the running speed of the vehicle satisfies a condition, the condition being a restriction used to determine whether the vehicle is about to stop running. Optionally, the computer device detects the running speed of the vehicle in real time and, when the running speed meets the condition, acquires the current surrounding view of the vehicle, processes it, and detects and identifies the vacant parking spaces in it.
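As a non-limiting illustration of the speed-triggered acquisition described above, the following Python sketch polls the running speed and acquires the surrounding view once a speed condition is met; the vehicle, camera and recognizer interfaces, the threshold value and the polling period are assumptions introduced for illustration and are not defined by this embodiment.

```python
import time

SPEED_THRESHOLD_KMH = 10.0   # hypothetical limit used to decide the vehicle may be about to stop
POLL_INTERVAL_S = 0.5        # hypothetical polling period

def monitor_and_acquire(vehicle, camera, recognizer):
    """Poll the running speed and acquire a surrounding view when the speed condition is met.

    vehicle.speed_kmh(), camera.capture_surround_view() and
    recognizer.detect_vacant_spaces() are assumed interfaces, not part of the patent text.
    """
    while True:
        speed = vehicle.speed_kmh()
        if speed <= SPEED_THRESHOLD_KMH:
            surround_view = camera.capture_surround_view()
            vacant_spaces = recognizer.detect_vacant_spaces(surround_view)
            return surround_view, vacant_spaces
        time.sleep(POLL_INTERVAL_S)
```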
In this embodiment of the application, the computer device may obtain the above-mentioned surrounding view through a camera. Optionally, step 201 includes the following steps:
1. a plurality of environment images around the vehicle are collected through the camera.
Optionally, the computer device may collect a plurality of environmental images around the vehicle through a camera, where the camera may be a vehicle-mounted camera directly connected to the vehicle, or an environmental camera indirectly connected to the vehicle.
In a possible implementation manner, the camera may take pictures of the periphery of the vehicle at preset time intervals to obtain a plurality of environment images of the periphery of the vehicle, and send the plurality of environment images to the computer device, where the time intervals may be 1s, 3s, or 1min, and the like, which is not limited in this application.
In another possible implementation, to ensure the accuracy of the surrounding view, the camera may capture video of the surroundings of the vehicle to obtain an environment video, extract image frames from the environment video, use the image frames as environment images, and send the resulting plurality of environment images to the computer device.
2. And filtering the environment image to obtain a processed environment image.
In this embodiment, after acquiring the plurality of environment images, the computer device may perform filtering on them to obtain a plurality of processed environment images. Optionally, the computer device may select Gaussian filtering or median filtering for different environment images, for example according to the distribution characteristics of the image noise.
It should be noted that the above description of processing the environment image is only exemplary, and in practical applications, the computer device may select other processing modes according to practical situations, such as image segmentation on the environment image, and selecting a portion near the ground as the region of interest for processing.
3. And splicing the processed environment images to generate a peripheral ring view.
Optionally, after the computer device obtains the plurality of processed environment images, the plurality of environment images may be stitched to generate a surrounding view of the vehicle. For example, the computer device may perform feature extraction on the plurality of processed environment images, determine a coincidence point of the plurality of processed environment images, and determine a stitching relationship between the plurality of processed environment images to implement image stitching.
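A minimal Python sketch of steps 1 to 3 is given below, assuming OpenCV: the environment images are smoothed with Gaussian or median filtering and then stitched with OpenCV's generic Stitcher. A production surround-view pipeline would normally rely on calibrated per-camera projections rather than generic feature-based stitching, so this is only an approximation of the described procedure.

```python
import cv2

def build_surround_view(image_paths, use_median=False):
    """Filter several environment images and stitch them into one surrounding view (illustrative sketch)."""
    processed = []
    for path in image_paths:
        img = cv2.imread(path)
        if img is None:
            continue
        # Filtering step: choose median or Gaussian smoothing depending on the noise distribution.
        if use_median:
            img = cv2.medianBlur(img, 5)
        else:
            img = cv2.GaussianBlur(img, (5, 5), 0)
        processed.append(img)

    # Stitching step: feature extraction, overlap matching and blending are handled by OpenCV here.
    stitcher = cv2.Stitcher_create()
    status, surround_view = stitcher.stitch(processed)
    if status != 0:  # 0 means stitching succeeded
        raise RuntimeError(f"stitching failed with status {status}")
    return surround_view
```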
Step 202, calling a parking space recognition model to recognize vacant parking spaces in the surrounding view.
The parking space identification model is used for identifying vacant parking spaces in the surrounding view. Optionally, the parking space recognition model is a deep learning model; in a possible embodiment, it is obtained by training a darknet-based YOLOv3 network on data, where the data refers to a large number of labeled data sets that include images of vacant parking spaces in various scenes.
In this embodiment of the application, after the computer device obtains the surrounding view image, the parking space recognition model may be called to process the surrounding view image, and the vacant parking spaces in the surrounding view image are recognized, so that the computer device may determine the position information of the vacant parking spaces in the surrounding view image according to the recognition result of the parking space recognition model, and the parking management system plans the parking route of the vehicle according to the position information.
And step 203, planning a parking route of the vehicle according to the position information of the vacant parking spaces through the parking management system.
The parking management system is used for planning a parking route of the vehicle; optionally, the parking management system plans the parking route according to the position information of the vacant parking spaces. The position information is used to indicate the positions of the vacant parking spaces in the surrounding view, and optionally includes at least one of the following: the coordinate positions of the vacant parking spaces in the surrounding view, and the distances between the vacant parking spaces and the vehicle. In this embodiment of the application, after calling the parking space recognition model to recognize the vacant parking spaces in the surrounding view, the computer device may generate the position information of the vacant parking spaces and send it to the parking management system; the parking management system then plans the parking route of the vehicle according to the position information.
Optionally, after receiving the location information of the vacant parking spaces, the parking management system may plan a parking route for the vehicle in real time. Of course, in another possible embodiment, after receiving the location information of the vacant parking spaces, the parking management system may store it and, after receiving a parking instruction, plan a parking route according to the stored location information. The parking instruction is used to indicate that the vehicle is about to stop running; optionally, it may be triggered by the user, for example by inputting "stop" by voice, or it may be triggered when the running speed of the vehicle decreases.
In one possible embodiment, the parking management system automatically plans the parking route according to the position information of the vacant parking spaces. Optionally, the parking management system may determine the positions of the vacant parking spaces according to the position information, select a vacant parking space meeting a condition as the target vacant parking space for parking the vehicle, and plan the parking route of the vehicle. The condition is a restriction used to determine whether a vacant parking space can serve as the target vacant parking space, and the target vacant parking space is the vacant parking space in which the vehicle is to be parked. For example, after obtaining the position information of the vacant parking spaces, the parking management system calculates the distances between the vacant parking spaces according to the position information; if another vacant parking space at zero distance exists next to a certain vacant parking space, that vacant parking space is determined to have a large surrounding space, is selected as the target vacant parking space, and the parking route is planned accordingly. For another example, after obtaining the position information of the vacant parking spaces, the parking management system calculates the distance between each vacant parking space and the vehicle according to the position information, selects the vacant parking space with the smallest distance as the target vacant parking space, and plans the parking route accordingly. It should be noted that the above description of selecting the target vacant parking space is only exemplary; in practical applications, the parking management system may determine different restrictions according to the actual situation, for example selecting, among the vacant parking spaces with large surrounding space, the one closest to the vehicle as the target vacant parking space.
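A hedged sketch of the automatic selection logic described above is given below; the VacantSpace fields (distance to the vehicle and smallest gap to another vacant space) are assumed data, not quantities defined by the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VacantSpace:
    space_id: int
    distance_to_vehicle: float          # metres from the vehicle (assumed field)
    min_gap_to_other_vacant: float      # smallest distance to any other vacant space (assumed field)

def select_target_space(spaces: List[VacantSpace]) -> Optional[VacantSpace]:
    """Select a target vacant space: prefer spaces with an adjacent vacant neighbour,
    then fall back to the vacant space closest to the vehicle."""
    if not spaces:
        return None
    # Spaces whose nearest vacant neighbour is at zero distance are treated as "large" spaces.
    roomy = [s for s in spaces if s.min_gap_to_other_vacant == 0.0]
    candidates = roomy if roomy else spaces
    return min(candidates, key=lambda s: s.distance_to_vehicle)
```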
In another possible embodiment, the parking management system may plan the parking route using the free space selected by the user as the target free space. Optionally, the step 203 includes the following steps:
1. and displaying the position information of the vacant parking spaces to the user.
Optionally, after the computer device calls the parking space recognition model to detect the vacant parking spaces in the surrounding view, the computer device may generate the position information of the vacant parking spaces corresponding to the surrounding view, and display the position information of the vacant parking spaces to the user.
In one possible embodiment, the position information is displayed in the form of an image. Optionally, after the computer device calls the parking space recognition model, it may mark the positions of the vacant parking spaces in the surrounding view of the vehicle and display the marked surrounding view to the user on a screen, so that the user can intuitively observe the positions of the vacant parking spaces. The screen may be a screen provided on the vehicle, or the screen of a device connected to the vehicle via a network, such as the mobile phone screen of the vehicle driver or the screen of a vehicle event data recorder. It should be noted that the vacant parking spaces in the surrounding view may be marked in different ways, for example by marking frames of different colors and/or different shapes. Optionally, in this embodiment of the application, in order to keep the display of the vacant parking spaces clear, the computer device may blur the non-parking-space information in the marked surrounding view, for example representing the current position of the vehicle with a black dot.
In another possible embodiment, the position information is presented as text or speech. Optionally, after the computer device invokes the parking space recognition model, it may announce the position information of the vacant parking spaces in the surrounding view to the user by voice, for example outputting "there are two vacant parking spaces directly in front of the vehicle". The speaker may be one provided in the vehicle, or one belonging to a device connected to the vehicle via a network, such as the mobile phone of the vehicle driver. Of course, the computer device may also display the position information on the screen in text form, which is not limited in this application.
2. And receiving a selection instruction aiming at the target vacant parking space.
The selection instruction is used for indicating the free parking space selected by the user. Alternatively, the selection instruction may be generated by user triggering.
In a possible embodiment, the user triggers the selection instruction by an action, such as a gesture instruction. Optionally, after determining the target vacant parking space through the marked surrounding view, the user clicks the target vacant parking space to trigger generation of a corresponding selection instruction, and further, the computer device receives the selection instruction of the target vacant parking space.
In another possible embodiment, the user triggers the selection instruction by voice. Optionally, after the user determines the target vacant parking space through the marked surrounding view, the user inputs selection information for the target vacant parking space by voice, where the selection information is used to indicate the target vacant parking space. For example, the selection information may be "park the vehicle in the closest vacant parking space", or "park the vehicle in the vacant parking space corresponding to the first red rectangular marking frame". Further, after receiving the selection information, the computer device generates the selection instruction corresponding to the target vacant parking space according to the selection information.
3. And planning a parking route of the vehicle to the target vacant parking space according to the position information of the target vacant parking space through the parking management system.
In the embodiment of the application, after receiving the selection instruction of the target vacant parking space, the computer device may plan a parking route, in which the vehicle is parked to the target vacant parking space, according to the position information of the target vacant parking space through the parking management system.
In one possible embodiment, the parking management system may plan a parking route according to the location information of the target vacant parking space, and present the parking route to the user, so that the user parks the vehicle according to the parking route. Of course, the vehicle may also be automatically parked according to the parking route, which is not limited in the embodiment of the present application.
In another possible embodiment, the parking management system may plan a plurality of parking routes according to the location information of the target vacant parking space, for example different routes for driving into the space head-first and for reversing into the space tail-first; the plurality of parking routes are then displayed to the user, and the user selects the parking route to be used according to the actual situation. Of course, after the user selects the parking route, the vehicle may park automatically along that route. Optionally, before the parking system plans the parking route, the user may also input other condition instructions, such as a voice or text instruction "pass through point A while parking"; the parking management system then plans the parking route according to the position information of the target vacant parking space and the condition instruction.
For example, referring to fig. 3, after obtaining the position information of the vacant parking spaces, the computer device may display a marked surrounding view 30 on the screen, where the marked surrounding view 30 includes a vehicle position 31, a first vacant parking space 32, a second vacant parking space 33, and a third vacant parking space 34. The first vacant space 32 is marked with a red rectangular frame 35, the second vacant space 33 with a green rectangular frame 36, and the third vacant space 34 with a yellow rectangular frame 37. After receiving the user's voice instruction "select the red parking space", the computer device plans the parking route of the vehicle according to the position information of the first vacant parking space 32 through the parking management system.
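To make the display of the marked surrounding view concrete, the following sketch draws coloured rectangular marking frames for the detected vacant parking spaces, in the spirit of FIG. 3; the box coordinates, colours and labels are illustrative assumptions.

```python
import cv2

# BGR colours for the marking frames, mirroring the FIG. 3 example (assumed values): red, green, yellow.
FRAME_COLOURS = [(0, 0, 255), (0, 255, 0), (0, 255, 255)]

def mark_vacant_spaces(surround_view, boxes):
    """Draw a coloured rectangle for each vacant parking space.

    boxes is a list of (x1, y1, x2, y2) pixel coordinates in the surrounding view.
    """
    marked = surround_view.copy()
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        colour = FRAME_COLOURS[i % len(FRAME_COLOURS)]
        cv2.rectangle(marked, (x1, y1), (x2, y2), colour, thickness=2)
        cv2.putText(marked, f"space {i + 1}", (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, colour, 1)
    return marked
```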
In summary, in the technical solution provided by the embodiments of the application, the vacant parking spaces in the surrounding view of the vehicle are identified through the parking space identification model, which avoids the long time a user would otherwise spend identifying parking spaces from the surrounding view, keeps the operation simple and convenient, and improves the accuracy and timeliness of parking space identification; the parking management system plans the route according to the position information of the vacant parking spaces, so that the parking route is planned automatically, the probability of traffic accidents caused by planning errors due to the subjective reasons of users is reduced, and the safety of the vehicle during parking is ensured.
In addition, filtering the environment images before parking space recognition helps avoid the influence of environmental factors on the recognition and ensures the accuracy of parking space recognition.
In addition, the user selects the target vacant parking space for parking the vehicle, so that the freedom degree and flexibility of parking route planning are improved.
Next, a specific method for identifying a parking space by using the parking space identification model will be described.
Please refer to fig. 4, which shows a flowchart of a parking space recognition method according to an embodiment of the present application. The method may be applied to the computer device 10 of the vehicle route planning system shown in fig. 1, for example, the executing body of each step may be the computer device 10. The method can comprise the following steps (401-405):
step 401, processing the surrounding ring view by using a feature extraction layer of the parking space recognition model to obtain a feature map corresponding to the surrounding ring view.
The feature extraction layer is used for extracting features of the image to obtain a feature map. The feature map is used for emphasizing a feature part of the image, and optionally, the computer device may obtain the feature map corresponding to the image by performing edge extraction on the feature part in the image. In the embodiment of the present application, the feature extraction layer is a component of the parking space recognition model. Optionally, after obtaining the surrounding view of the vehicle, the computer device processes the surrounding view by using a feature extraction layer of the parking space recognition model to obtain a corresponding feature map.
In the embodiment of the application, in order to identify vacant parking spaces of various scales, the computer device may scale the surrounding view to obtain feature maps of different scales. Optionally, the step 401 includes the following steps:
1. zooming the peripheral ring view in multiple scales to obtain k peripheral ring views in different scales;
2. and respectively processing the k peripheral ring views with different scales by adopting a feature extraction layer of the parking space identification model to obtain feature maps respectively corresponding to the k peripheral ring views with different scales.
Optionally, after obtaining the above-mentioned surrounding ring view, the computer device may perform multi-scale scaling on the surrounding ring view to obtain k surrounding ring views with different scales, where k is a positive integer. Further, a feature extraction layer of the parking space recognition model is adopted to process the k peripheral ring views with different scales respectively, and feature maps corresponding to the k peripheral ring views with different scales are obtained.
Illustratively, taking the darknet-based YOLOv3 network as an example, the structure of the darknet-53 network is shown in FIG. 5. After the surrounding view passes through several convolution operations of the darknet-53 network, a first feature map 51, a second feature map 52 and a third feature map 53 are obtained. The scale of the first feature map 51 is 8 × 8, the scale of the second feature map 52 is 16 × 16, and the scale of the third feature map 53 is 32 × 32. After the three feature maps are obtained, the parking space identification model detects the first feature map 51 and identifies large-scale vacant parking spaces; the first feature map 51 is up-sampled and fused with the second feature map 52 to obtain a fused second feature map with a scale of 16 × 16, which is detected to identify intermediate-scale vacant parking spaces; the fused second feature map is then up-sampled and fused with the third feature map 53 to obtain a fused third feature map with a scale of 32 × 32, which is detected to identify small-scale vacant parking spaces.
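A minimal PyTorch-style sketch of the multi-scale fusion just described is given below: a coarse feature map is up-sampled to the resolution of the next finer map and concatenated with it before detection, as in a darknet-53/YOLOv3 backbone. The channel counts and the use of concatenation for fusion are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def fuse_feature_maps(coarse, fine):
    """Up-sample a coarse feature map (e.g. 8x8) to the resolution of a finer one
    (e.g. 16x16) and concatenate them along the channel dimension, YOLOv3-style."""
    upsampled = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
    return torch.cat([upsampled, fine], dim=1)

# Illustrative shapes only: batch of 1, assumed channel counts.
first_map = torch.randn(1, 512, 8, 8)     # coarse map, used to detect large spaces
second_map = torch.randn(1, 256, 16, 16)  # intermediate map
third_map = torch.randn(1, 128, 32, 32)   # fine map, used to detect small spaces

fused_second = fuse_feature_maps(first_map, second_map)   # 1 x 768 x 16 x 16
fused_third = fuse_feature_maps(fused_second, third_map)  # 1 x 896 x 32 x 32
```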
Step 402, the feature map is divided into n grids.
The grids are used for dividing the regions of the surrounding ring view, optionally, each grid represents a different detection region in the surrounding ring view, wherein the size of each grid may be the same or different. In a possible embodiment, after obtaining the feature map, the computer device may randomly divide the feature map into n grids, where n is a positive integer; in another possible implementation manner, after the computer device obtains the feature map, the feature map may be divided into n grids according to a preset rule, where the preset rule may be an area ratio between a grid and the feature map, and feature maps with different scales may correspond to different preset rules.
Step 403, detecting a bounding box corresponding to the recognition target in the grid.
The bounding box is used for detecting the recognition target in the feature map, and optionally, the bounding box may be a geometric box enclosing the recognition target, and the size and the shape of the bounding box may be determined according to actual conditions. In this embodiment of the present application, after dividing the feature map into n grids, the computer device detects a bounding box corresponding to the recognition target in a certain grid, where the number of the bounding boxes may be one or more, and this is not limited in this embodiment of the present application.
It should be noted that in the embodiment of the present application, different grids are used for detecting different recognition targets. Optionally, the step 403 includes the following steps:
1. acquiring the detection range of the grid;
2. acquiring a central position coordinate of an identification target in the characteristic diagram;
3. determining that the recognition target is recognized by the grid in response to the center position coordinate falling within the detection range of the grid;
4. and detecting a bounding box corresponding to the recognition target.
The detection range is used for indicating a range detected by the grid in the feature map, and a recognition target in the detection range is detected and recognized by the grid, optionally, the detection range is represented in a form of a coordinate value set. The central position coordinates of the recognition target are used for indicating the position of the recognition target in the feature map, and in one possible embodiment, the central position coordinates of the recognition target refer to the position coordinates of the midpoint of the recognition target in the feature map; in another possible embodiment, the center position coordinates of the recognition target are position coordinates obtained by performing weighted average calculation on the position coordinates of the feature point on the recognition target in the feature map.
In this embodiment, after the feature map is divided into grids, the computer device may acquire the detection range of a certain grid and the center position coordinates of a certain recognition target in the feature map; if the center position coordinates fall within the detection range of the grid, the recognition target is determined to be recognized by that grid, and the bounding box corresponding to the recognition target is detected. Illustratively, referring to FIG. 6, the feature map 60 is divided uniformly into 9 grids, and X-Y coordinate axes are established in the feature map 60. The detection range of the first grid 61 is "X in (0, 1) and Y in (1, 2)", and the center position of the first recognition target 62 is (0.8, 1.9); since the center position of the first recognition target 62 falls within the detection range of the first grid 61, the first recognition target 62 is recognized by the first grid 61.
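The grid-assignment rule of this step can be sketched as follows: the feature map is divided into uniform cells and a recognition target is assigned to the cell whose detection range contains its center coordinates, matching the FIG. 6 example. The cell size and grid dimensions are illustrative assumptions.

```python
def grid_for_center(center_x, center_y, cell_size=1.0, grid_cols=3, grid_rows=3):
    """Return the (column, row) indices of the grid cell whose detection range
    contains the target's center, for a feature map divided into uniform cells."""
    col = int(center_x // cell_size)
    row = int(center_y // cell_size)
    if 0 <= col < grid_cols and 0 <= row < grid_rows:
        return col, row
    return None  # center lies outside the feature map

# FIG. 6 example: a 3x3 grid of unit cells; the target centered at (0.8, 1.9)
# falls in the cell covering X in (0, 1) and Y in (1, 2).
print(grid_for_center(0.8, 1.9))  # -> (0, 1)
```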
At step 404, a first IOU for each bounding box is generated.
The first IOU (Intersection-over-Union) is used to indicate the Intersection area between a bounding box and other bounding boxes. In the embodiment of the present application, the first IOU refers to the sum of intersection ratios between the bounding box and other bounding boxes in the feature diagram, and optionally, after the computer device generates the bounding boxes, the computer device may calculate the first IOU of each bounding box.
And step 405, selecting the recognition target in the target boundary box according to the first IOU of each boundary box by adopting a non-maximum suppression algorithm, and determining the recognition target as an empty parking space.
In this embodiment of the application, the computer device may select an identification target in the target boundary box according to the first IOU of each boundary box by using a non-maximum suppression algorithm, and determine the identification target as an empty parking space.
It should be noted that, in the embodiment of the present application, the computer device needs to use a non-maximum suppression algorithm to traverse each bounding box according to the first IOU to select the target bounding box. Optionally, the step 405 includes the following steps:
1. selecting the boundary box with the largest first IOU as an initial target boundary box according to the first IOU of each boundary box;
2. acquiring a second IOU of other bounding boxes except the target bounding box, wherein the second IOU refers to the intersection ratio between the other bounding boxes and the target bounding box;
3. deleting other bounding boxes of which the second IOU is larger than the threshold, wherein the threshold can be any value set by a user according to actual conditions, for example, a larger threshold is set for a feature map with dense distribution of the recognition targets, and a smaller threshold is set for a feature map with scattered distribution of the recognition targets;
4. and selecting the boundary frame with the largest first IOU from the rest boundary frames as an updated target boundary frame, and starting to execute the step of obtaining the second IOU of the other boundary frames except the target boundary frame again until the rest boundary frame is zero, and determining that the target corresponding to the target boundary frame is an empty parking space.
To sum up, in the technical solution provided by the embodiments of the application, the feature map is divided into grids so that each grid detects its corresponding recognition target, which improves the accuracy of recognizing the vacant parking spaces.
In addition, obtaining feature maps scaled to different scales ensures that the parking space recognition model can recognize vacant parking spaces of different scales in the feature maps, which improves the accuracy of recognizing the vacant parking spaces.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 7, a block diagram of a parking route planning apparatus according to an embodiment of the present application is shown. The device has the function of realizing the parking route planning method, and the function can be realized by hardware or hardware executing corresponding software. The device can be a computer device and can also be arranged in the computer device. The apparatus 700 may include: an image acquisition module 710, a model invocation module 720, and a route planning module 730.
An image acquisition module 710, configured to acquire a surrounding view of a vehicle, where the surrounding view is a surrounding image of the vehicle;
the model calling module 720 is used for calling a parking space identification model to identify the vacant parking spaces in the surrounding view;
and the route planning module 730 is configured to plan a parking route of the vehicle according to the location information of the vacant parking spaces through the parking management system.
In an exemplary embodiment, as shown in fig. 8, the model invoking module 720 includes: a feature acquisition unit 721, a feature dividing unit 722, a boundary detection unit 723, a first generation unit 724, and a parking space determination unit 725.
The feature obtaining unit 721 is configured to process the surrounding environment view by using a feature extraction layer of the parking space identification model to obtain a feature map corresponding to the surrounding environment view;
a feature dividing unit 722, configured to divide the feature map into n grids, where n is a positive integer;
a boundary detection unit 723, configured to detect a boundary box corresponding to an identification target in the grid;
a first generating unit 724, configured to generate a first intersection ratio IOU of each bounding box, where the first IOU is a sum of intersection ratios between the bounding box and other bounding boxes in the feature map;
and the parking space determining unit 725 is configured to select an identified target in the target bounding box according to the first IOU of each bounding box by using a non-maximum suppression algorithm, and determine the identified target as the vacant parking space.
In an exemplary embodiment, the parking space determining unit 725 is configured to select, according to the first IOU of each bounding box, the bounding box with the largest first IOU as an initial target bounding box; acquiring a second IOU of other bounding boxes except the target bounding box, wherein the second IOU refers to the intersection ratio between the other bounding boxes and the target bounding box; deleting other bounding boxes for which the second IOU is greater than a threshold; and selecting the boundary frame with the largest first IOU from the rest boundary frames as an updated target boundary frame, and starting to execute the step of acquiring the second IOU of the other boundary frames except the target boundary frame again until the rest boundary frame is zero, and determining that the target corresponding to the target boundary frame is the vacant parking space.
In an exemplary embodiment, the feature obtaining unit 721 is configured to perform scaling on the surrounding annular view at multiple scales to obtain k surrounding annular views at different scales; and respectively processing the k peripheral ring views with different scales by adopting a feature extraction layer of the parking space identification model to obtain feature maps respectively corresponding to the k peripheral ring views with different scales.
In an exemplary embodiment, the boundary detecting unit 723 is configured to obtain a detection range of the grid; acquiring the central position coordinates of the recognition target in the feature map; determining that the recognition target is recognized by the mesh in response to the center position coordinates falling within a detection range of the mesh; and detecting the boundary box corresponding to the recognition target.
In an exemplary embodiment, the image acquiring module 710 is configured to acquire a plurality of environment images around the vehicle through a camera; filtering the environment image to obtain a processed environment image; and splicing the processed environment images to generate the surrounding ring view.
In an exemplary embodiment, the route planning module 730 is configured to: displaying the position information of the vacant parking spaces to a user; receiving a selection instruction aiming at a target vacant parking space; and planning a parking route of the vehicle to the target vacant parking space according to the position information of the target vacant parking space through the parking management system.
In summary, in the technical scheme provided by the embodiment of the application, the vacant parking spaces in the surrounding view of the vehicle are identified through the parking space identification model, which avoids the long time a user would otherwise spend identifying parking spaces from the surrounding view, keeps the operation simple and convenient, and improves the accuracy and timeliness of parking space identification; the parking management system plans the route according to the position information of the vacant parking spaces, so that the parking route is planned automatically, the probability of traffic accidents caused by planning errors due to the user's subjective factors is reduced, and the safety of the vehicle during parking is ensured.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 9, a block diagram of a computer device 900 according to an embodiment of the present application is shown. The computer device may be a device in the vehicle route planning system shown in fig. 1, and the device may implement the parking route planning method described above. Specifically, the method comprises the following steps:
the computer apparatus 900 includes a Processing Unit (e.g., a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable gate array), etc.) 901, a system Memory 904 including a RAM (Random Access Memory) 902 and a ROM (Read Only Memory) 903, and a system bus 905 connecting the system Memory 904 and the Central Processing Unit 901. The server 900 also includes a basic I/O system (Input/Output) 906 that facilitates transfer of information between devices within the computing server, and a mass storage device 907 for storing an operating system 913, application programs 914, and other program modules 912.
The basic input/output system 906 includes a display 908 for displaying information and an input device 909 such as a mouse, keyboard, etc. for a user to input information. The display 908 and the input device 909 are connected to the central processing unit 901 through an input/output controller 910 connected to the system bus 905. The basic input/output system 906 may also include an input/output controller 910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 910 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 907 is connected to the processing unit 901 through a mass storage controller (not shown) connected to the system bus 905. The mass storage device 907 and its associated computer-readable media provide non-volatile storage for the computer device 900. That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 904 and mass storage device 907 described above may be collectively referred to as memory.
According to embodiments of the present application, the computer device 900 may also be run by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 900 may be connected to the network 912 through the network interface unit 911 coupled to the system bus 905, or the network interface unit 911 may be used to connect to other types of networks or remote computer systems (not shown).
The memory stores a computer program, which is loaded and executed by the processor to implement the above-mentioned method for planning a parking route.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which a computer program is stored which, when being executed by a processor, carries out the above-mentioned method of planning a parking route.
Optionally, the computer-readable storage medium may include: ROM (Read Only Memory), RAM (Random Access Memory), SSD (Solid State drive), or optical disc. The Random Access Memory may include a ReRAM (resistive Random Access Memory) and a DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product is also provided, which, when being executed by a processor, is adapted to carry out the above-mentioned method of planning a parking route.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the order shown in the figure, which is not limited by the embodiment of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for planning a parking route, the method comprising:
acquiring a peripheral view of a vehicle, wherein the peripheral view is a peripheral environment image of the vehicle;
calling a parking space identification model to identify vacant parking spaces in the surrounding view;
and planning a parking route of the vehicle according to the position information of the vacant parking spaces through a parking management system.
2. The method of claim 1, wherein said invoking the space recognition model to recognize the vacant spaces in the surrounding view comprises:
processing the surrounding annular view by adopting a feature extraction layer of the parking space identification model to obtain a feature map corresponding to the surrounding annular view;
dividing the feature map into n grids, wherein n is a positive integer;
detecting a bounding box corresponding to an identification target in the grid;
generating a first intersection ratio IOU of each bounding box, wherein the first IOU refers to the sum of the intersection ratios between the bounding box and other bounding boxes in the feature map;
and selecting the recognition target in the target boundary box according to the first IOU of each boundary box by adopting a non-maximum suppression algorithm, and determining the recognition target as the vacant parking space.
3. The method of claim 2, wherein the selecting, by using the non-maximum suppression algorithm, the recognition target in the target bounding box as the vacant parking space according to the first IOU of each bounding box comprises:
selecting, according to the first IOU of each bounding box, the bounding box with the largest first IOU as an initial target bounding box;
acquiring a second IOU of each bounding box other than the target bounding box, wherein the second IOU refers to the IOU between that bounding box and the target bounding box;
deleting the bounding boxes whose second IOU is greater than a threshold;
and selecting, from the remaining bounding boxes, the bounding box with the largest first IOU as an updated target bounding box, executing the step of acquiring the second IOU of the bounding boxes other than the target bounding box again until no bounding boxes remain, and determining the targets corresponding to the target bounding boxes as the vacant parking spaces.
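A minimal sketch of the claim-3 selection loop, reusing iou() and first_ious() from the previous sketch: the box with the largest first IOU becomes the target, boxes whose second IOU with that target exceeds a threshold are deleted, and the loop repeats until no boxes remain. The 0.5 threshold is an illustrative value, not one fixed by the claim.

```python
def select_vacant_spaces(boxes, threshold=0.5):
    """Keep target bounding boxes (vacant spaces) via the claim-3 loop of first/second IOUs."""
    boxes = list(boxes)
    kept = []
    while boxes:
        scores = first_ious(boxes)                 # first IOU of every remaining box
        target = boxes.pop(int(scores.argmax()))   # box with the largest first IOU
        kept.append(target)
        # second IOU: overlap with the current target; heavily overlapping boxes are deleted
        boxes = [b for b in boxes if iou(target, b) <= threshold]
    return kept

print(select_vacant_spaces([[10, 10, 60, 60], [12, 12, 62, 62], [200, 200, 260, 260]]))
```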
4. The method according to claim 2, wherein the processing the surround view by using the feature extraction layer of the parking space recognition model to obtain the feature map corresponding to the surround view comprises:
scaling the surround view at multiple scales to obtain k surround views of different scales;
and processing the k surround views of different scales respectively by using the feature extraction layer of the parking space recognition model to obtain the feature maps respectively corresponding to the k surround views of different scales.
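A possible reading of the multi-scale processing of claim 4, assuming OpenCV is available: the surround view is resized to k scales and the same feature extractor runs on each copy. The placeholder extractor below (a downsampled grayscale map) merely stands in for the feature extraction layer of the parking space recognition model.

```python
# Sketch of claim 4: resize the surround view to k scales, run one extractor per scale.
import cv2
import numpy as np

def multi_scale_features(surround_view, scales=(0.5, 1.0, 2.0), extract_features=None):
    if extract_features is None:
        # placeholder extractor: an 8x-downsampled grayscale map, for illustration only
        extract_features = lambda img: cv2.resize(
            cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), None, fx=0.125, fy=0.125)
    feature_maps = []
    for s in scales:                                     # k scales -> k surround views
        scaled = cv2.resize(surround_view, None, fx=s, fy=s)
        feature_maps.append(extract_features(scaled))    # one feature map per scale
    return feature_maps

dummy_view = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in surround view
print([fm.shape for fm in multi_scale_features(dummy_view)])
```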
5. The method of claim 2, wherein the detecting the bounding box corresponding to the recognition target in the grid cells comprises:
acquiring the detection range of a grid cell;
acquiring the center position coordinates of the recognition target in the feature map;
determining that the recognition target is detected by the grid cell in response to the center position coordinates falling within the detection range of the grid cell;
and detecting the bounding box corresponding to the recognition target.
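The grid assignment of claim 5 can be sketched as follows, assuming the feature map is split into a uniform n-by-n grid (the claim itself does not fix the grid geometry): a recognition target belongs to the grid cell whose detection range contains the target's center coordinates.

```python
# Sketch of claim 5: find the grid cell whose detection range contains the target center.
def owning_cell(center_xy, feature_size, n):
    """Return (row, col) of the grid cell responsible for a target centered at center_xy."""
    cx, cy = center_xy
    w, h = feature_size
    col = min(int(cx / (w / n)), n - 1)   # detection range along x
    row = min(int(cy / (h / n)), n - 1)   # detection range along y
    return row, col

# center (300, 120) on a 640x480 feature map split into a 4x4 grid -> cell (1, 1)
print(owning_cell((300, 120), (640, 480), 4))
```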
6. The method of claim 1, wherein the acquiring the surround view of the vehicle comprises:
acquiring a plurality of environment images around the vehicle through a camera;
filtering the environment images to obtain processed environment images;
and stitching the processed environment images to generate the surround view.
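An illustrative sketch of claim 6, assuming OpenCV: each environment image is filtered and the results are stitched into one surround view. A production system would typically warp each camera image to a common top-down view using calibrated homographies; here a Gaussian blur stands in for the filtering step and a horizontal concatenation stands in for the stitching step.

```python
# Sketch of claim 6: filter each camera image, then stitch the results into a surround view.
import cv2
import numpy as np

def build_surround_view(env_images):
    filtered = [cv2.GaussianBlur(img, (5, 5), 0) for img in env_images]  # noise filtering
    return cv2.hconcat(filtered)                                         # stitching step

cams = [np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8) for _ in range(4)]
print(build_surround_view(cams).shape)   # (240, 1280, 3)
```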
7. The method according to any one of claims 1 to 6, wherein the planning, through the parking management system, the parking route for the vehicle according to the position information of the vacant parking spaces comprises:
displaying the position information of the vacant parking spaces to a user;
receiving a selection instruction for a target vacant parking space;
and planning, through the parking management system, a parking route from the vehicle to the target vacant parking space according to the position information of the target vacant parking space.
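The interaction flow of claim 7, sketched with stand-ins: the vacant-space positions are displayed, a selection instruction is received, and the chosen position is handed to the planner. The straight-line waypoint list returned below is purely illustrative; the claim does not prescribe a planning algorithm.

```python
# Sketch of claim 7: display vacant spaces, take a selection, plan a route to the target.
def choose_and_plan(vacant_spaces, vehicle_pos, select=lambda spaces: 0, steps=5):
    for i, space in enumerate(vacant_spaces):            # display the position information
        print(f"vacant space {i}: position {space}")
    target = vacant_spaces[select(vacant_spaces)]        # selection instruction (stand-in)
    (x0, y0), (x1, y1) = vehicle_pos, target
    return [(x0 + (x1 - x0) * t / steps, y0 + (y1 - y0) * t / steps)
            for t in range(steps + 1)]                   # waypoints to the target space

print(choose_and_plan([(12.0, 3.5), (18.0, 3.5)], vehicle_pos=(0.0, 0.0)))
```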
8. A device for planning a parking route, the device comprising:
an image acquisition module, configured to acquire a surround view of a vehicle, wherein the surround view is an image of the environment surrounding the vehicle;
a model invoking module, configured to invoke a parking space recognition model to recognize vacant parking spaces in the surround view;
and a route planning module, configured to plan, through a parking management system, a parking route for the vehicle according to the position information of the vacant parking spaces.
9. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing a computer program which is loaded and executed by the processor to implement the method for planning a parking route according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which is loaded and executed by a processor to implement a method for planning a parking route according to any one of claims 1 to 7.
CN202010605510.7A 2020-06-29 2020-06-29 Parking route planning method, device, equipment and storage medium Active CN111746521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010605510.7A CN111746521B (en) 2020-06-29 2020-06-29 Parking route planning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010605510.7A CN111746521B (en) 2020-06-29 2020-06-29 Parking route planning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111746521A true CN111746521A (en) 2020-10-09
CN111746521B CN111746521B (en) 2022-09-20

Family

ID=72678016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010605510.7A Active CN111746521B (en) 2020-06-29 2020-06-29 Parking route planning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111746521B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150124093A1 (en) * 2013-11-04 2015-05-07 Xerox Corporation Method for object size calibration to aid vehicle detection for video-based on-street parking technology
WO2016128654A1 (en) * 2015-02-10 2016-08-18 Renault S.A.S Device and method for automatically parking a motor vehicle
US20170372615A1 (en) * 2016-06-27 2017-12-28 Hyundai Motor Company Apparatus and method for displaying parking zone
US20190102634A1 (en) * 2017-10-02 2019-04-04 Sharp Kabushiki Kaisha Parking position display processing apparatus, parking position display method, and program
CN108860141A (en) * 2018-06-26 2018-11-23 奇瑞汽车股份有限公司 Method, apparatus of parking and storage medium
CN109508710A (en) * 2018-10-23 2019-03-22 东华大学 Based on the unmanned vehicle night-environment cognitive method for improving YOLOv3 network
CN111191485A (en) * 2018-11-14 2020-05-22 广州汽车集团股份有限公司 Parking space detection method and system and automobile
CN109815886A (en) * 2019-01-21 2019-05-28 南京邮电大学 A kind of pedestrian and vehicle checking method and system based on improvement YOLOv3
CN109817018A (en) * 2019-02-20 2019-05-28 东软睿驰汽车技术(沈阳)有限公司 A kind of automatic parking method and relevant apparatus
CN110796168A (en) * 2019-09-26 2020-02-14 江苏大学 Improved YOLOv 3-based vehicle detection method
CN110991272A (en) * 2019-11-18 2020-04-10 东北大学 Multi-target vehicle track identification method based on video tracking
CN111160172A (en) * 2019-12-19 2020-05-15 深圳佑驾创新科技有限公司 Parking space detection method and device, computer equipment and storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
J. REDMON, A. FARHADI: "YOLOv3: An Incremental Improvement", arXiv preprint, 8 April 2018 (2018-04-08) *
J. REDMON, S. DIVVALA, R. GIRSHICK, ET AL.: "You Only Look Once: Unified, Real-Time Object Detection", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 31 December 2016 (2016-12-31) *
刘煜, 张政, 赖世明, 曾向荣, 周典乐: "Array Camera Imaging Technology and Applications", Changsha: National University of Defense Technology Press, 30 April 2018, pages 170-171 *
双锴: "Computer Vision", Beijing University of Posts and Telecommunications Press, 31 January 2020, pages 114-118 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882885A (en) * 2022-07-11 2022-08-09 广州小鹏汽车科技有限公司 Voice interaction method, server, vehicle and storage medium

Also Published As

Publication number Publication date
CN111746521B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN110287276A (en) High-precision map updating method, device and storage medium
CN112287860B (en) Training method and device of object recognition model, and object recognition method and system
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
US11748998B1 (en) Three-dimensional object estimation using two-dimensional annotations
CN112200129A (en) Three-dimensional target detection method and device based on deep learning and terminal equipment
Matzka et al. Efficient resource allocation for attentive automotive vision systems
CN110544268B (en) Multi-target tracking method based on structured light and SiamMask network
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN111931683A (en) Image recognition method, image recognition device and computer-readable storage medium
CN115273039B (en) Small obstacle detection method based on camera
CN111746521B (en) Parking route planning method, device, equipment and storage medium
CN116168384A (en) Point cloud target detection method and device, electronic equipment and storage medium
CN113903188B (en) Parking space detection method, electronic device and computer readable storage medium
CN114859938A (en) Robot, dynamic obstacle state estimation method and device and computer equipment
JP2021532449A (en) Lane attribute detection
CN113008249B (en) Avoidance point detection method and avoidance method of mobile robot and mobile robot
CN112639822A (en) Data processing method and device
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN115240150A (en) Lane departure warning method, system, device and medium based on monocular camera
CN116358528A (en) Map updating method, map updating device, self-mobile device and storage medium
CN114638947A (en) Data labeling method and device, electronic equipment and storage medium
CN113505725A (en) Multi-sensor semantic fusion method for two-dimensional voxel level in unmanned driving
CN114170267A (en) Target tracking method, device, equipment and computer readable storage medium
CN112507964A (en) Detection method and device for lane-level event, road side equipment and cloud control platform
CN117392348B (en) Multi-object jitter elimination method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220211
Address after: 241006 Anshan South Road, Wuhu Economic and Technological Development Zone, Anhui Province
Applicant after: Wuhu Sambalion auto technology Co.,Ltd.
Address before: 241000 Anshan South Road, Wuhu Economic and Technological Development Zone, Anhui
Applicant before: Wuhu Sambalion auto technology Co.,Ltd.
Applicant before: CHERY AUTOMOBILE Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240409
Address after: 241000 10th Floor, Block B1, Wanjiang Wealth Plaza, Guandou Street, Jiujiang District, Wuhu City, Anhui Province
Patentee after: Dazhuo Intelligent Technology Co.,Ltd.
Country or region after: China
Patentee after: Dazhuo Quxing Intelligent Technology (Shanghai) Co.,Ltd.
Address before: 241006 Anshan South Road, Wuhu Economic and Technological Development Zone, Anhui Province
Patentee before: Wuhu Sambalion auto technology Co.,Ltd.
Country or region before: China