CN115731688A - Method and device for generating parking fence and server - Google Patents

Method and device for generating parking fence and server

Info

Publication number
CN115731688A
Authority
CN
China
Prior art keywords
target
area
image
returning
driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211288946.3A
Other languages
Chinese (zh)
Inventor
印晶曦
周壹
黄浩
陈兴岳
罗钧峰
赵梦婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanhai Information Technology Shanghai Co Ltd
Original Assignee
Hanhai Information Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hanhai Information Technology Shanghai Co Ltd filed Critical Hanhai Information Technology Shanghai Co Ltd
Priority to CN202211288946.3A priority Critical patent/CN115731688A/en
Publication of CN115731688A publication Critical patent/CN115731688A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a method, an apparatus and a server for generating a parking fence. The method comprises: acquiring a driving image captured by a driving recorder; determining a target area for a newly added parking fence according to the driving image; obtaining user car-returning data of shared vehicles parked in the target area within a first statistical period; detecting, according to the user car-returning data, whether a target event of adding a parking fence for shared vehicles in the target area has occurred; and generating a parking fence at the target area in the shared vehicle map if the target event occurs.

Description

Method and device for generating parking fence and server
Technical Field
The embodiment of the disclosure relates to the technical field of shared vehicles, and more particularly, to a method and a device for generating a parking fence and a server.
Background
At present, travel by shared vehicle has become an emerging mode of urban transportation that can effectively meet the travel demands of city residents. Existing shared vehicles include ordinary bicycles powered by the user, as well as electric bicycles equipped with an assist motor, among others.
For shared vehicles, fixed-point parking is currently required; that is, a user needs to park the shared vehicle within a designated parking fence to complete the vehicle return.
In the prior art, a parking fence is usually newly added on a sidewalk by the government, marked with a white-line frame or set up as a vehicle cage. The white-line-frame parking fence is a new map element located on the sidewalk. A driving recorder mounted on a vehicle traveling in a motor lane is limited by its shooting angle, so the captured driving images are often unclear and the miss rate is extremely high; consequently, the accuracy of identifying a newly added parking fence directly from images captured by a driving recorder is extremely low. Unlike other automobile map elements, the white-line-frame parking fence therefore cannot be obtained directly from driving images. As a result, the operator of the shared vehicles has to determine the area of a newly added parking fence by manually sweeping the streets, and then generate the new parking fence in the shared vehicle map.
However, manual street sweeping requires personnel to survey newly added parking fences on site, places high demands on equipment and staffing, and has a long generation cycle.
Disclosure of Invention
It is an object of the embodiments of the present disclosure to provide a new technical solution for automatically generating a parking fence.
According to a first aspect of the present disclosure, there is provided a method of generating a parking fence, including:
acquiring a driving image acquired by a driving recorder;
determining a target area of a newly-added parking fence according to the driving image;
obtaining user returning data of shared vehicles parked to the target area in a first statistical period;
detecting whether a target event of a parking fence of a newly added shared vehicle in the target area occurs or not according to the user returning data;
generating a parking fence at the target area in the shared vehicle map if the target event occurs.
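The five steps of the first aspect can be sketched as a single pipeline function. This is a minimal illustrative sketch only; all function and parameter names (`detect_area`, `has_fence`, `target_event`, and so on) are hypothetical placeholders, not APIs from the disclosure.

```python
def generate_parking_fences(driving_images, detect_area, has_fence,
                            return_records, target_event):
    """Sketch of steps S2100-S2500: return the areas where a fence is generated."""
    generated = []
    for image in driving_images:
        area = detect_area(image)            # S2200: target area from the image
        if area is None or has_fence(area):  # skip areas already in the map
            continue
        # S2300: user car-returning data for shared vehicles parked in the area
        records = [r for r in return_records if r["area"] == area]
        if target_event(records):            # S2400: did the target event occur?
            generated.append(area)           # S2500: generate the fence in the map
    return generated
```

The decision of whether the target event occurred is deliberately injected as a callable, since the disclosure describes it separately via score items.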
Optionally, the determining a target area of a newly added parking fence according to the driving image includes:
identifying whether the roadside includes a preset target object or not according to the driving image;
under the condition that the roadside is identified to contain the target object, determining an area where the target object is located as an area to be identified;
detecting whether a parking fence is arranged in the area to be identified in the shared vehicle map;
and under the condition that no parking fence is arranged in the area to be identified in the shared vehicle map, taking the area to be identified as the target area of the newly-added parking fence.
Optionally, the determining a region where the target object is located, as a region to be identified, includes:
according to the driving image, identifying the direction of a road on which the vehicle with the driving recorder runs;
and obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the position of the target object in the driving image.
Optionally, the obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road, and the position of the target object in the driving image includes:
dividing the driving image into a set number of image areas according to a preset dividing proportion;
determining an image area to which the target object belongs as a target image area;
determining a distance corresponding to the target image area as a target distance;
and obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the target distance.
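One plausible way to realize "obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the target distance" is to offset the GPS collection position along the road heading by the target distance. The function below is an illustrative equirectangular approximation under stated assumptions, not the patent's specified computation:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius (assumption: spherical model)

def area_center(lat, lon, heading_deg, distance_m):
    """Offset the image-collection position by distance_m along the road
    heading (degrees clockwise from north), giving an approximate center
    of the area to be identified."""
    theta = math.radians(heading_deg)
    d_north = distance_m * math.cos(theta)
    d_east = distance_m * math.sin(theta)
    dlat = math.degrees(d_north / EARTH_RADIUS_M)
    dlon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```

In practice the roadside offset of the target object (left or right of the lane) would also be applied; that detail is omitted here.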
Optionally, the detecting, according to the user car-returning data, whether a target event of adding a parking fence for shared vehicles in the target area occurs includes:
detecting whether abnormal car-returning events occur in the target area according to the user car-returning data;
acquiring the occurrence frequency of each abnormal car-returning event according to the detection results;
determining a first number for each preset abnormal car-returning score item according to the occurrence frequency of each abnormal car-returning event;
and detecting whether the target event occurs according to the first number of each abnormal car-returning score item.
Optionally, the abnormal car-returning events include at least one of the following:
first, a shared vehicle fails to be returned on the first attempt;
second, a shared vehicle is returned in violation of the parking rules;
third, a complaint picture containing the target object, submitted by a user complaining about a parking violation, passes review.
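Determining the first numbers amounts to counting how often each abnormal event type occurred in the target area. A minimal sketch, assuming hypothetical string codes for the three events listed above (the disclosure does not fix an encoding):

```python
from collections import Counter

# Illustrative event codes for the three abnormal car-returning events.
ABNORMAL_EVENTS = ("return_failed_once", "violation", "complaint_passed")

def count_abnormal_events(return_records):
    """First numbers: the occurrence count of each abnormal event type
    found in the user car-returning records of the target area."""
    counts = Counter(r["event"] for r in return_records
                     if r["event"] in ABNORMAL_EVENTS)
    # report every score item, including those that never occurred
    return {event: counts.get(event, 0) for event in ABNORMAL_EVENTS}
```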
Optionally, the method further includes:
determining a second number of preset driving image scoring items, wherein the second number is the number of driving images which are acquired in a second counting time period and used for determining a newly-added parking fence in the target area;
and detecting whether the target event occurs or not according to the second number of the driving image scoring items.
Optionally, the detecting whether the target event occurs according to the second number of the driving-image score item includes:
acquiring a first score for each preset abnormal car-returning score item and a second score for the driving-image score item;
obtaining a comprehensive score of the target area according to the first number and first score of each abnormal car-returning score item, and the second number and second score of the driving-image score item;
and detecting whether the target event occurs according to the comprehensive score of the target area.
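Read this way, the comprehensive score is a weighted sum of the per-item counts, compared against a threshold. The sketch below assumes that interpretation; the per-item scores and the threshold are illustrative values, not figures taken from the disclosure:

```python
def composite_score(first_numbers, first_scores, second_number, second_score):
    """Comprehensive score of a target area: each abnormal car-returning
    score item contributes count * score, plus the driving-image item."""
    total = sum(first_numbers[item] * first_scores[item]
                for item in first_numbers)
    return total + second_number * second_score

def target_event_occurred(score, threshold):
    """Decide the target event by comparing the score with a preset threshold."""
    return score >= threshold
```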
According to a second aspect of the present disclosure, there is provided a generation apparatus of a parking fence, including:
the image acquisition module is used for acquiring driving images acquired by the driving recorder;
the area determining module is used for determining a target area of a newly-added parking fence according to the driving image;
the data acquisition module is used for acquiring user vehicle returning data of shared vehicles parked to the target area in a first statistical period;
the event detection module is used for detecting whether a target event of a parking fence of a newly added shared vehicle in the target area occurs or not according to the user vehicle returning data;
a fence generation module to generate a parking fence in the target area in the shared vehicle map if the target event occurs.
According to a third aspect of the present disclosure, there is provided a server comprising the apparatus of the second aspect of the present disclosure; alternatively,
the server comprises a memory and a processor, the memory storing a computer program for controlling the processor to perform the method of the first aspect of the present disclosure.
In the embodiments of the present disclosure, the target area of a newly added parking fence is determined from driving images captured by a driving recorder, and a parking fence is generated at that target area in the shared vehicle map when, according to the user car-returning data of shared vehicles parked in the target area within the first statistical period, it is determined that the target event of adding a parking fence in the target area has occurred. By fusing the driving images captured by the driving recorder with the user car-returning data, this embodiment can identify the target area of a newly added parking fence more accurately; generating the parking fence at that area in the shared vehicle map for users to return vehicles improves both the efficiency and the experience of returning vehicles. In addition, automatically generating parking fences in this way reduces labor costs and shortens the generation cycle of parking fences.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic structural diagram of a shared vehicle system capable of implementing the method for generating a parking fence according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of generating a parking fence according to one embodiment;
FIG. 3 is a schematic illustration of a segmentation of a driving image according to an embodiment;
FIG. 4 is a schematic illustration of a road in a driving image according to one embodiment;
fig. 5 is a block diagram of a generation apparatus of a parking fence according to an embodiment;
FIG. 6 is a block diagram of a server according to one embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
Fig. 1 is a schematic structural diagram of a shared vehicle system 100 that can be used to implement the method for generating a parking fence according to an embodiment of the present disclosure. The shared vehicle system 100 as a whole can be applied to the parking fence generation scenario.
As shown in fig. 1, the shared vehicle system 100 includes a server 1000, a user terminal 2000, and a shared vehicle 3000.
The server 1000 provides service points for processes, databases, and communication facilities. The server 1000 may be a monolithic server, a distributed server spanning multiple computers, a computer data center, a cloud server, or a cloud-deployed server cluster. Servers may be of various types, such as, but not limited to, a web server, news server, mail server, message server, advertisement server, file server, application server, interaction server, database server, or proxy server. In some embodiments, each server may include hardware, software, or embedded logic components, or a combination of two or more such components, for performing the appropriate functions supported or implemented by the server, for example a blade server or a cloud server; it may also be a server group consisting of multiple servers of one or more of the above types.
In one embodiment, the server 1000 may be as shown in fig. 1, including a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600.
Processor 1100 is used to execute computer programs, which may be written in instruction sets of architectures such as x86, ARM, RISC, MIPS, SSE, and the like. The memory 1200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and nonvolatile memory such as a hard disk. The interface device 1300 includes, for example, various bus interfaces such as a serial bus interface (including a USB interface) and a parallel bus interface. The communication device 1400 is capable of wired or wireless communication. The display device 1500 is, for example, a liquid crystal display, an LED display, a touch panel, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and the like.
In this embodiment, the memory 1200 of the server 1000 is used to store a computer program for controlling the processor 1100 to operate to perform the method according to the embodiment of the present invention. The skilled person can design the computer program according to the disclosed solution. How the computer program controls the processor to operate is well known in the art and will not be described in detail here.
Although a plurality of devices of the server 1000 are illustrated in fig. 1, the present invention may relate to only some of the devices, for example, the server 1000 relates to only the memory 1200, the processor 1100 and the communication device 1400.
In this embodiment, the user terminal 2000 is, for example, a mobile phone, a portable computer, a tablet computer, a palm computer, a wearable device, or the like.
The user terminal 2000 is installed with a vehicle use application client to achieve the purpose of using a shared vehicle by operating the vehicle use application client.
As shown in fig. 1, the user terminal 2000 may include a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, and the like.
The processor 2100 is used to execute a computer program, which may be written in instruction sets of architectures such as x86, ARM, RISC, MIPS, SSE, and the like. The memory 2200 includes, for example, a ROM (read-only memory), a RAM (random access memory), and nonvolatile memory such as a hard disk. The interface device 2300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 2400 can perform wired or wireless communication; for example, it may include at least one short-range communication module, such as any module performing short-range wireless communication based on protocols like Hilink, WiFi (IEEE 802.11), Mesh, Bluetooth, ZigBee, Thread, Z-Wave, NFC, UWB, or LiFi, and it may also include a long-range communication module, such as any module performing long-range communication via WLAN, GPRS, or 2G/3G/4G/5G. The display device 2500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 2600 may include, for example, a touch screen, a keyboard, and the like. The user terminal 2000 may output audio signals through the speaker 2700 and collect audio signals through the microphone 2800.
In this embodiment, the memory 2200 of the user terminal 2000 is configured to store a computer program for controlling the processor 2100 to operate to perform a method of using a shared vehicle, for example, including: acquiring a unique identifier of a shared vehicle 3000, and forming an unlocking request aiming at a specific shared vehicle and sending the unlocking request to a server; and performing bill calculation and the like according to the charge settlement notification sent by the server. A skilled person can design a computer program according to the solution disclosed in the present invention. How computer programs control the operation of the processor is well known in the art and will not be described in detail herein.
As shown in fig. 1, the shared vehicle 3000 may include a processor 3100, a memory 3200, an interface device 3300, a communication device 3400, an output device 3500, and an input device 3600, among others. Processor 3100 is configured to execute computer programs, which may be written in an instruction set of architectures such as x86, arm, RISC, MIPS, SSE, and the like. The memory 3200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface 3300 includes, for example, a USB interface, a headphone interface, and the like. The communication device 3400 includes at least one communication module, for example, capable of wired or wireless communication, and for example, capable of short-range and long-range communication. The output device 3500 may be, for example, a device that outputs a signal, may be a display device such as a liquid crystal display screen or a touch panel, or may be a speaker or the like that outputs voice information or the like. The input device 3600 may include, for example, a touch device such as a touch panel, a sound sensing device such as a button or a microphone, a pressure sensing device such as a pressure sensor, and the like.
The shared vehicle 3000 may be any vehicle such as a bicycle, an electric motorcycle, a tricycle, and a quadricycle, and is not limited thereto.
In this embodiment, the shared vehicle 3000 may report its own position information to the server 1000.
In this embodiment, the memory 3200 of the shared vehicle 3000 is used to store a computer program for controlling the processor 3100 to operate to perform a method according to any embodiment of the invention. The skilled person can design the computer program according to the disclosed solution. How the computer program controls the operation of the processor is well known in the art and will not be described in detail here.
The network 4000 may be a wireless or wired communication network, and may be a local area network or a wide area network. In the shared vehicle system 100 shown in fig. 1, the shared vehicle 3000 and the server 1000, as well as the user terminal 2000 and the server 1000, can communicate with each other through the network 4000. The network through which the shared vehicle 3000 communicates with the server 1000 may be the same as, or different from, the network through which the user terminal 2000 communicates with the server 1000.
It should be understood that although fig. 1 shows only one server 1000, user terminal 2000, shared vehicle 3000, it is not meant to limit the respective numbers, and the shared vehicle system 100 may include a plurality of servers 1000, a plurality of user terminals 2000, a plurality of shared vehicles 3000, and the like.
The shared vehicle system 100 shown in FIG. 1 is illustrative only and is not intended to limit the invention, its application, or uses in any way.
< method examples >
Fig. 2 shows a flow diagram of a method of generating a parking fence according to an embodiment. The method steps of the present embodiment are performed by a server, such as server 1000 in fig. 1.
As shown in fig. 2, the method for generating a parking fence according to the present embodiment may include the following steps S2100 to S2500:
step S2100, a driving image acquired by a driving recorder is acquired.
The driving recorder is an instrument that records images and sound while the vehicle is driving. Once installed, it can record video, images, and sound for the entire driving process, along with the time, the vehicle's speed, and the vehicle's position.
The driving image in this embodiment may be an image acquired by a driving recorder during driving of the vehicle. The driving image can record the environment of the vehicle when the driving image is collected.
And step S2200, determining a target area of the newly-added parking fence according to the driving image.
In a real environment, parking fences are typically placed on sidewalks and marked by a wire frame or a vehicle cage. Therefore, an area where a newly added wire frame or vehicle cage is located may be determined from the driving image as the target area.
In one embodiment of the present disclosure, determining the target area of the newly added parking fence according to the driving image may include steps S2210 to S2230 as follows:
step S2210, identifying whether the roadside includes a preset target object according to the driving image.
In this embodiment, since the parking fence is usually identified by a wire frame or a vehicle cage, the target object may include the wire frame and the vehicle cage. The wire frame may be a solid line frame or a dashed line frame, and the color of the wire frame may be white or other colors.
Since a large number of shared vehicles may be parked within the parking fence while it is in use, parts of the wire frame can be occluded by the shared vehicles; therefore, the target objects may also include a large number of neatly parked shared vehicles. In one example, the target object "a large number of neatly parked shared vehicles" may itself contain at least part of a wire frame.
In one embodiment of the present disclosure, a target recognition model capable of recognizing an image region to which an object of interest belongs in an image may be set in advance. The object of interest may include a target object, and then, it may be determined whether the roadside includes the preset target object according to the driving image based on a preset target recognition model. Specifically, the driving image may be input into the target recognition model to obtain a target object recognition result.
In this embodiment, the image input to the target recognition model may be divided into a set number of image regions in advance according to a preset division ratio, so that the result output by the target recognition model may include the image region to which the attention object belongs in the image.
The division ratio and the set number may be configured in advance according to the application scenario or specific requirements. The division ratio may include a division ratio in the width direction of the image and a division ratio in the height direction of the image. The width direction of the image is generally perpendicular to the direction of the road on which the vehicle is traveling, and the height direction is generally parallel to it. For example, the division ratio in the width direction of the image may be 1:2:1 and the division ratio in the height direction may be 1:1:1, giving nine image areas in total.
In an actual scene, since the parking fences of shared vehicles are usually set on both sides of the road on which the vehicle travels, the middle region in the width direction of the image, that is, the portion corresponding to the "2" in the 1:2:1 division ratio, generally corresponds to the road surface and need not be identified. Therefore, the 3 image areas of this middle column may be removed, leaving 6 image areas, that is, the set number of image areas.
Specifically, the image may first be divided according to the 1:2:1 ratio in the width direction; the middle portion is then removed, leaving the two side portions corresponding to the two "1"s of the 1:2:1 ratio; each remaining portion is then divided again according to the 1:1:1 ratio in the height direction of the image.
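The grid division described above can be sketched as follows. The 1:2:1 width ratio and three equal height rows are inferred from the text (nine cells, the middle column of three removed, six kept); treat them as assumptions:

```python
def divide_image(width, height):
    """Split a frame into a 3x3 grid (width ratio 1:2:1, height 1:1:1),
    drop the 3 middle-column cells (road surface), and keep the 6
    roadside cells as (x0, y0, x1, y1) boxes."""
    xs = [0, width // 4, 3 * width // 4, width]   # 1:2:1 column boundaries
    ys = [i * height // 3 for i in range(4)]      # 3 equal rows
    regions = []
    for col in (0, 2):                            # skip middle column (index 1)
        for row in range(3):
            regions.append((xs[col], ys[row], xs[col + 1], ys[row + 1]))
    return regions
```

Each kept region can then be mapped to a roadside distance, as step S2222-3 of the detailed description suggests.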
In one example, to prevent scattered parked shared vehicles from interfering with the identification of the target object, the objects of interest recognizable by the target recognition model may also include a small number of parked shared vehicles.
On this basis, the target recognition model can also recognize the category of the object of interest in the image. The categories of the object of interest may include: a wire frame, a vehicle cage, a large number of neatly parked shared vehicles, and a small number of scattered parked shared vehicles. Then, according to the coordinate frame of the object of interest identified by the target recognition model in the driving image and the category of that object, it can be determined whether the roadside contains a preset target object.
The target recognition model of this embodiment detects scattered shared bicycles parked on the roadside as a separate category, which avoids confusing them with the category of a large number of neatly parked shared bicycles.
In an embodiment of the present disclosure, the method may further include a step of training a target recognition model, specifically including: and acquiring a training sample, and training the initial target recognition model according to the training sample to obtain a trained target recognition model.
The training sample may include a sample image and an image annotation. The sample image can be an image acquired by a vehicle event data recorder, and the image annotation comprises an image area where the attention object is located in the sample image and the type of the attention object.
Because the parking fence is generally located at the edge of the driving image, and the object of interest is small within the driving image, the image captured by the driving recorder may be scaled to a target size to obtain a sample image of that size. The target size may be set in advance according to the application scenario or specific requirements; for example, it may be 960 × 960 pixels.
In this embodiment, the neck part of the target recognition model adopts an FPN + PAN structure, so that targets of different sizes can be better learned. The loss supervision function of each level can be expressed as:
L_l = L_cls(O_cls,l, GT_cls,l) + λ·L_reg(O_reg,l, GT_reg,l)
where l denotes the current FPN level, L_cls denotes the loss of the classification part, L_reg denotes the loss of the coordinate regression part, O_cls,l and GT_cls,l denote the predicted value and ground truth of the l-th level classification part, O_reg,l and GT_reg,l denote the predicted value and ground truth of the l-th level coordinate regression part, and λ denotes a preset weight coefficient.
The classification part detects whether the image contains an object of interest, and the coordinate regression part detects whether the position of the object of interest lies on either side of the road.
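The per-level loss L_l = L_cls + λ·L_reg can be sketched numerically. The concrete loss choices below (binary cross-entropy for the classification part, smooth-L1 for the box regression part) are common detector defaults and only assumptions; the disclosure does not name them:

```python
import math

def level_loss(cls_pred, cls_gt, reg_pred, reg_gt, lam=1.0):
    """One FPN level's supervised loss: classification + lam * regression."""
    eps = 1e-7  # numerical guard for log
    # binary cross-entropy over the classification outputs
    l_cls = -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
                 for p, g in zip(cls_pred, cls_gt)) / len(cls_pred)

    def smooth_l1(d):
        d = abs(d)
        return 0.5 * d * d if d < 1 else d - 0.5

    # smooth-L1 over the coordinate regression outputs
    l_reg = sum(smooth_l1(p - g) for p, g in zip(reg_pred, reg_gt)) / len(reg_pred)
    return l_cls + lam * l_reg
```

Summing level_loss over l = 2..4 would give the total supervision used for training, per the hyperparameters mentioned below.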
Further, in order to better adapt to driving images captured by driving recorders of different models, data enhancement may be applied to the captured images during training of the target recognition model to obtain the sample images, improving the robustness of the trained model. The data enhancement methods may include: HSV transformation, random-angle rotation, scale change, brightness change, image flipping, mosaic augmentation, and the like.
In one example, the FPN level l may take values from 2 to 4, the number of training epochs may be 300, the batch size may be 4, and the initial learning rate may be 0.01.
With the target recognition model of this embodiment, it can be identified whether the roadside in the driving image contains a target object.
Step S2220: when it is identified that the roadside contains the target object, the area where the target object is located is determined as the area to be identified.
In an embodiment of the present disclosure, determining an area where a target object is located as an area to be identified may include steps S2221 to S2222 shown as follows:
step S2221, according to the driving images, the direction of the road on which the vehicle with the driving recorder runs is identified.
In an actual scene, at a non-intersection position in a road, the direction of the road where the vehicle runs is the direction of the road in the driving image.
At the intersection position in the road, a mark corresponding to the driving direction can be arranged in each lane, and the trend of the road on which the vehicle where the automobile data recorder is located runs can be determined by identifying the mark of the driving direction in the driving image. For example, in the road shown in fig. 4, in the case that the position of the vehicle is determined to be position 1 according to the driving image, the driving direction of the lane corresponding to position 1 is indicated as a right turn, and the direction of the road on which the vehicle is driven may be as indicated by the arrow direction of road direction 1; under the condition that the position of the vehicle is determined to be the position 2 according to the driving image, the driving direction of the lane corresponding to the position 2 is marked as straight, and the direction of the road where the vehicle drives can be shown as the arrow direction of the road direction 2; in the case where it is determined from the driving image that the position of the vehicle is position 3, the driving direction of the lane corresponding to position 3 is indicated as a left turn, and the direction of the road on which the vehicle is driving may be as indicated by the arrow direction of road direction 3.
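The intersection logic above amounts to adjusting the vehicle's current heading by the turn indicated in its lane marking. A minimal sketch with hypothetical label names (the disclosure does not fix the marking vocabulary or angle values):

```python
# Illustrative mapping from a recognized lane-direction marking to the
# change in road heading, in degrees clockwise.
TURN_OFFSETS = {"left": -90, "straight": 0, "right": 90}

def road_heading(current_heading_deg, lane_marking):
    """Heading (degrees clockwise from north) of the road the vehicle will
    travel, given the direction marking of the lane it occupies."""
    return (current_heading_deg + TURN_OFFSETS[lane_marking]) % 360
```

For positions 1, 2, and 3 in fig. 4 this reproduces road directions 1, 2, and 3 respectively, up to the simplifying assumption of 90-degree turns.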
Step S2222, obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the position of the target object in the driving image.
In an embodiment of the present disclosure, obtaining the region to be identified according to the acquisition position of the driving image, the trend of the road, and the position of the target object in the driving image may include steps S2222-1 to S2222-4 as shown below:
Step S2222-1, dividing the driving image into a set number of image areas according to a preset division ratio.
The division ratio and the set number may be set in advance according to an application scenario or a specific requirement. The division ratio may include a division ratio in the width direction of the image and a division ratio in the height direction of the image. The width direction of the image is generally perpendicular to the direction of the road on which the vehicle is traveling, and the height direction of the image is generally parallel to that direction. For example, the division ratio in the width direction of the image may be 1:2:1, and the division ratio in the height direction may be 1:1:1.
In an actual scene, since the parking fences of shared vehicles are usually disposed on both sides of the road on which the vehicle travels, the middle region in the width direction of the image (that is, the portion corresponding to the 2 in the 1:2:1 division ratio) usually does not contain a parking fence. Therefore, the 3 image areas corresponding to the middle region may be removed, leaving 6 image areas, that is, the set number of image areas.
Specifically, the image may first be divided in its width direction according to the 1:2:1 ratio; the middle portion is then removed, leaving the two side portions corresponding to 1 and 1 in that ratio; finally, each remaining portion is divided again in the height direction according to the 1:1:1 ratio.
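For illustration, the division described above may be sketched as follows, assuming a 1:2:1 split in the width direction and a 1:1:1 split in the height direction; the function interface and the returned box format are illustrative assumptions, not part of the disclosure:

```python
def divide_driving_image(width, height):
    """Divide a driving image into the 6 side image areas described above.

    The width is split 1:2:1; the middle band, where a roadside parking
    fence rarely appears, is dropped; each remaining side band is split
    1:1:1 in the height direction, giving 3 left and 3 right areas.
    Returns (x0, y0, x1, y1) boxes, left band first, top to bottom.
    """
    unit = width / 4.0  # one unit of the 1:2:1 width split
    side_bands = [(0.0, unit), (3.0 * unit, float(width))]
    row_h = height / 3.0
    areas = []
    for x0, x1 in side_bands:
        for row in range(3):
            areas.append((x0, row * row_h, x1, (row + 1) * row_h))
    return areas
```

With a 1920x1080 image this yields image areas 1-3 in the left band and 4-6 in the right band, matching the layout assumed for fig. 3.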
Step S2222-2, the image area to which the target object belongs is determined as the target image area.
Specifically, an image area including the target object may be determined as the target image area.
Step S2222-3, the distance corresponding to the target image area is determined as the target distance.
In the present embodiment, the distance corresponding to each image area may be set in advance. For example, in the driving image shown in fig. 3, a distance d1 may be set for image area 1 and image area 4, a distance d2 for image area 2 and image area 5, and a distance d3 for image area 3 and image area 6, where d1 ≥ d2 ≥ d3.
For example, in the case where the target image region to which the target object belongs is the image region 4, the target distance may be d1.
Step S2222-4, obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road on which the vehicle runs, and the target distance.
In this embodiment, the position whose driving distance from the collection position of the driving image, along the direction of the road on which the vehicle is driving, equals the target distance may be determined as the target position; an area of a set size on the sidewalk corresponding to the target position is then determined as the area to be identified. The driving distance between the target position and the collection position is the distance that must be driven along the road from the collection position to the target position, not the straight-line distance between the two positions.
The set size may be set in advance according to an application scenario or a specific requirement. The shape of the region to be identified may be, for example, a rectangle.
Further, according to the target image area, it can be determined whether the parking fence is located on the left side or the right side of the road on which the vehicle runs, so as to determine the sidewalk corresponding to the target position. In the example shown in fig. 3, if the target image area is one of image areas 1-3, the sidewalk corresponding to the target position may be determined to be the sidewalk on the left side of the target position; if the target image area is one of image areas 4-6, it may be determined to be the sidewalk on the right side of the target position.
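The computation of the target position and its sidewalk side can be sketched as follows. This sketch assumes a locally straight road (so the along-road driving distance coincides with a straight-line offset over short distances), a heading in degrees clockwise from north, and the fig. 3 convention that image areas 1-3 lie in the left band and 4-6 in the right band; all names and the flat-earth approximation are illustrative assumptions:

```python
import math

EARTH_RADIUS_M = 6371000.0

def target_position(collect_lat, collect_lon, heading_deg, target_distance_m):
    """Offset the collection position by the target distance along the
    road heading (flat-earth approximation for short distances)."""
    rad = math.radians(heading_deg)
    dlat = (target_distance_m * math.cos(rad)) / EARTH_RADIUS_M
    dlon = (target_distance_m * math.sin(rad)) / (
        EARTH_RADIUS_M * math.cos(math.radians(collect_lat)))
    return collect_lat + math.degrees(dlat), collect_lon + math.degrees(dlon)

def sidewalk_side(target_image_area):
    """Image areas 1-3 -> left sidewalk, 4-6 -> right sidewalk."""
    return "left" if target_image_area <= 3 else "right"
```

For a vehicle heading due north, a 100 m target distance moves the target position roughly 0.0009 degrees of latitude north of the collection position.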
According to this embodiment, the area to be identified where the target object is located can be recognized more accurately in the image, so that the position of the generated parking fence is more accurate.
In another embodiment of the present disclosure, determining a region where the target object is located as the region to be identified may further include: determining pixel coordinates of a target object in a driving image, determining a first distance between the target object and a collection position of the driving image, and determining a first direction of the target object relative to the collection position in the width direction of the driving image; determining a position, which is in a first direction relative to the acquisition position of the driving image and has a first distance from the acquisition position of the driving image in a sidewalk on which the vehicle runs, as a first position; and determining a region with a set size taking the first position as the center as a region to be identified.
And step S2230, detecting whether a parking fence is arranged in the area to be identified in the shared vehicle map.
Step S2240, under the condition that no parking fence is arranged in the area to be identified in the shared vehicle map, taking the area to be identified as a target area of a newly-added parking fence.
In the case that no parking fence is arranged in the area to be identified in the shared vehicle map, it indicates that the target object in the area to be identified may be a newly added parking fence, and therefore, the area to be identified may be the target area of the newly added parking fence.
According to this embodiment, the target area of the newly added parking fence can be determined from the driving image, and the decision to add a parking fence in the target area can then be verified against the user returning data, so that the generated parking fence is more accurate.
Step S2300, obtaining user returning data of the shared vehicle parked to the target area within the first statistical time period.
The first statistical period may be a period set in advance according to an application scenario or a specific demand, for example, the first statistical period may be a past week.
A shared vehicle parked to the target area in the first statistical time period is one whose returning position during that period was located in the target area; when step S2300 is executed, its actual position may be anywhere and is not limited to the target area. The returning position of a shared vehicle may be acquired by a positioning device provided on the shared vehicle when a returning event occurs, or by the user terminal using the shared vehicle.
In this embodiment, the car return event may include at least one of:
detecting that a touch device arranged on a shared vehicle senses an appointed touch action;
detecting that a sound sensing device arranged on a shared vehicle receives a specified voice sent by a user;
detecting that the foot supports of the shared vehicles sense the appointed returning action;
detecting that a short-range wireless communication device arranged in a shared vehicle can sense a user terminal;
detecting that the continuous time length of the shared vehicle not sensing the pressure reaches a first set time length;
detecting disconnection between a Bluetooth device of a sharing vehicle and a Bluetooth device of a user terminal;
detecting that a signal intensity value of a Bluetooth signal of a user terminal received by a Bluetooth device of a shared vehicle is smaller than a set value;
detecting that the time length of the shared vehicle in the static state reaches a second set time length;
and receiving a locking instruction sent by the server or the user terminal.
The user returning data is data generated when a user returns a vehicle, and may specifically include the returning position of the shared vehicle, the returning time, data indicating whether a one-time return succeeded or failed, data indicating whether the shared vehicle was illegally parked, and data on illegal-parking complaints against the shared vehicle.
In this embodiment, one piece of user returning data is generated each time a shared vehicle is returned.
Step S2400, whether a target event of generating a parking fence of a shared vehicle in a target area occurs is detected according to the user returning data.
In one embodiment of the present disclosure, detecting whether a target event of generating a parking fence of a shared vehicle in a target area occurs according to user returning data may include steps S2410 to S2440 as follows:
step S2410, detecting whether a car returning abnormal event occurs in the target area or not according to the car returning data of the user.
In this embodiment, it may be determined whether a car-returning abnormal event occurs in the target area according to the car-returning data of each user.
Wherein, the return abnormal event can comprise at least one of the following events:
first, the shared vehicle fails to return once;
second, the shared vehicle is illegally parked;
and third, the audit result of a complaint picture containing the target object, submitted by a user for an illegal-parking complaint, is passed.
Specifically, the first returning abnormal event may be determined to occur when it is determined from one piece of user returning data that the shared vehicle failed to return once; the second returning abnormal event may be determined to occur when it is determined from one piece of user returning data that the shared vehicle was illegally parked; and the third returning abnormal event may be determined to occur when it is determined from one piece of user returning data that the audit result of a complaint picture containing the target object, submitted by a user for an illegal-parking complaint, is passed.
In this embodiment, a one-time returning failure means that, within a single use of a shared vehicle, the first returning operation fails; that is, the returning operation must be performed at least twice before the return succeeds.
Step S2420, obtaining the number of occurrences of each returning abnormal event according to the detection results of the returning abnormal events over all users' returning data.
In the embodiment, when the first car returning abnormal event is determined to occur according to the car returning data of one user, the first car returning abnormal event is determined to occur once; under the condition that a second car returning abnormal event is determined according to one user car returning data, judging that the second car returning abnormal event occurs once; and under the condition that the third car returning abnormal event is determined according to the car returning data of one user, judging that the third car returning abnormal event occurs once.
That is to say, the number of occurrences of each returning abnormal event equals the number of pieces of user returning data from which that event can be determined.
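Counting occurrences from the user returning data can be sketched as below; the record field names are illustrative assumptions, not fields defined by the disclosure:

```python
from collections import Counter

def count_returning_abnormal_events(returning_records):
    """Count occurrences of each returning abnormal event; each piece of
    user returning data contributes at most one occurrence per event."""
    counts = Counter()
    for rec in returning_records:
        if rec.get("first_return_failed"):        # one-time returning failure
            counts["one_time_return_failure"] += 1
        if rec.get("illegal_parking"):            # illegal parking
            counts["illegal_parking"] += 1
        if rec.get("complaint_picture_approved"):  # approved complaint picture
            counts["approved_complaint"] += 1
    return counts
```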
Step S2430, determining a first number for each preset returning abnormal scoring item according to the number of occurrences of each returning abnormal event.
In one embodiment, the return trip exception scoring items may include at least one of:
first, a sudden increase in one-time returning failures of shared vehicles;
second, a sudden increase in illegal parking of shared vehicles;
third, illegal parking of shared vehicles;
and fourth, the audit result of a complaint picture containing the target object, submitted by a user for an illegal-parking complaint, is passed.
In an embodiment where the returning abnormal scoring items include a sudden increase in one-time returning failures of shared vehicles, the first number of that scoring item may be the difference obtained by subtracting the second occurrence number from the first occurrence number. The first occurrence number is the number of occurrences, in the first statistical time period, of the returning abnormal event that a shared vehicle in the target area fails to return once; the second occurrence number is the corresponding number of occurrences in the third statistical time period.
The third statistical time period may be a statistical time period before the first statistical time period, which is set in advance according to an application scenario or specific requirements, and a duration of the third statistical time period may be a duration equal to or greater than the first statistical time period. For example, the third statistical period may be one month prior to the first statistical period.
In an embodiment where the returning abnormal scoring items include a sudden increase in illegal parking of shared vehicles, the first number of that scoring item may be the difference obtained by subtracting the fourth occurrence number from the third occurrence number. The third occurrence number is the number of occurrences, in the first statistical time period, of the returning abnormal event that a shared vehicle in the target area is illegally parked; the fourth occurrence number is the corresponding number of occurrences in the fourth statistical time period.
The fourth statistical time period may be a statistical time period before the first statistical time period, which is set in advance according to an application scenario or specific requirements, and a duration of the fourth statistical time period may be a duration equal to or greater than the first statistical time period. The fourth statistical period and the third statistical period may be the same or different. For example, the fourth statistical period may be two weeks before the first statistical period.
In an embodiment where the returning abnormal scoring items include illegal parking of shared vehicles, the first number of that scoring item may be the number of occurrences, in the first statistical time period, of the returning abnormal event that a shared vehicle in the target area is illegally parked.
In an embodiment where the returning abnormal scoring items include that the audit result of a complaint picture containing the target object, submitted by a user for an illegal-parking complaint, is passed, the first number of that scoring item may be the number of occurrences of that returning abnormal event in the target area within the first statistical time period.
Step S2440, detecting whether the target event occurs according to the first number of each returning abnormal scoring item.
In one embodiment of the disclosure, detecting whether the target event occurs according to the first number of each returning abnormal scoring item may be performed as follows: determine whether the first number of each scoring item is greater than or equal to the third number threshold of that scoring item; determine that the target event occurs when the first number of every scoring item is greater than or equal to its threshold; and determine that the target event does not occur when the first number of any scoring item is less than its threshold.
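This all-items threshold test can be sketched as follows; the item names and threshold values are illustrative:

```python
def target_event_occurs(first_numbers, third_number_thresholds):
    """The target event occurs only if every returning abnormal scoring
    item's first number meets or exceeds that item's threshold."""
    return all(first_numbers.get(item, 0) >= threshold
               for item, threshold in third_number_thresholds.items())
```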
In another embodiment of the present disclosure, detecting whether the target event occurs according to the first number of each returning abnormal scoring item may further include: acquiring the preset first score of each returning abnormal scoring item; obtaining a comprehensive score of the target area according to the first number and first score of each returning abnormal scoring item; and detecting whether the target event occurs according to the comprehensive score of the target area.
Specifically, for each returning abnormal scoring item, the product of its first number and its first score is calculated as the total score of that scoring item; the total scores of all scoring items are then summed to obtain the comprehensive score of the target area.
In this embodiment, the comprehensive score of the target area may reflect the probability of a new parking fence in the target area. The higher the composite score of the target area, the greater the probability of a new parking fence being added to the target area.
The first score of each returning abnormal scoring item may be set in advance according to an application scenario or a specific requirement. The first scores of different returning abnormal scoring items may be the same or different. For example, the first score of the first returning abnormal scoring item may be 1.5, that of the second may be 5, that of the third may be 1, and that of the fourth may be 50.
Further, a highest score may be set in advance for each returning abnormal scoring item according to an application scenario or a specific requirement. For example, the highest score of the first returning abnormal scoring item may be 100, that of the second may be 100, that of the third may be 100, and that of the fourth may be 200.
When the product of the first number and the first score of a returning abnormal scoring item is greater than its highest score, the highest score is taken as the total score of that scoring item. For example, when the product of the first number and the first score of the fourth returning abnormal scoring item exceeds its highest score, the highest score of the fourth scoring item may be used as its total score.
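The comprehensive score with per-item caps can be sketched as follows, using the example first scores and highest scores given above; the dictionary keys and function interface are illustrative:

```python
def comprehensive_score(first_numbers, first_scores, highest_scores):
    """Sum the per-item totals (first number x first score), capping each
    item's total at that item's preset highest score."""
    total = 0.0
    for item, n in first_numbers.items():
        total += min(n * first_scores[item], highest_scores[item])
    return total

# Example from the text: the fourth item scores 50 per occurrence and is
# capped at 200, so 5 occurrences contribute 200 rather than 250.
```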
In yet another embodiment of the present disclosure, the method may further include: determining a second number of a preset driving image scoring item, the second number being the number of driving images, acquired in the second statistical time period, from which the newly added parking fence in the target area can be determined; and detecting whether the target event occurs further according to the second number of the driving image scoring item.
In this embodiment, driving images acquired in the vicinity of the target area within the second statistical time period may be obtained, and the number of those driving images from which the newly added parking fence in the target area can be determined is taken as the second number of the driving image scoring item. The vicinity of the target area may be a region whose distance from the target area is less than or equal to a preset distance.
In one embodiment, for the plurality of driving images acquired by one vehicle's driving recorder within a set time length, filtering may be performed in advance so that only one driving image acquired by that recorder within the set time length is retained. The set time length may be set in advance according to an application scenario or a specific requirement.
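The pre-filtering step can be sketched as keeping one image per recorder per time window; the record layout is an illustrative assumption:

```python
def filter_driving_images(images, window_seconds):
    """Keep at most one driving image per driving recorder within each
    `window_seconds` interval, based on acquisition timestamps."""
    last_kept_ts = {}
    kept = []
    for img in sorted(images, key=lambda i: i["timestamp"]):
        rid = img["recorder_id"]
        if rid not in last_kept_ts or img["timestamp"] - last_kept_ts[rid] >= window_seconds:
            kept.append(img)
            last_kept_ts[rid] = img["timestamp"]
    return kept
```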
In this embodiment, detecting whether the target event occurs may further include: acquiring a preset first score for each returning abnormal scoring item and a preset second score for the driving image scoring item; obtaining a comprehensive score of the target area according to the first number and first score of each returning abnormal scoring item and the second number and second score of the driving image scoring item; and detecting whether the target event occurs according to the comprehensive score of the target area.
In this embodiment, for each returning abnormal scoring item, the product of its first number and its first score may be calculated as the total score of that scoring item; the product of the second number and the second score of the driving image scoring item is calculated as the total score of the driving image scoring item; and the total scores of all returning abnormal scoring items and of the driving image scoring item are summed to obtain the comprehensive score of the target area.
In one embodiment, detecting whether the target event occurs according to the comprehensive score of the target area may include: sorting all target areas within a preset range in descending order of comprehensive score to obtain a ranking value for each target area, and determining that a target event of adding a parking fence occurs in each target area whose ranking value is greater than or equal to the preset ranking value.
The preset sorting value may be preset according to an application scenario or a specific requirement. For example, the preset sorting value may be 300.
In another embodiment, detecting whether a target event occurs according to the composite score of the target area may include: and judging whether the comprehensive score of the target area is greater than or equal to a preset score threshold value or not, and judging that a target event of a newly added parking fence in the target area occurs under the condition that the comprehensive score of the target area is greater than or equal to the score threshold value.
The score threshold may be set in advance according to an application scenario or a specific requirement. For example, the score threshold may be 75.
In another embodiment of the present disclosure, detecting whether the target event occurs according to the user returning data may include: detecting, according to the user returning data, whether a returning abnormal event occurs in the target area; obtaining, from the detection result, the number of pieces of user returning data from which the occurrence of any returning abnormal event can be determined; and determining that the target event occurs when that number is greater than or equal to a preset first number threshold.
The first quantity threshold may be set in advance according to an application scenario or a specific requirement.
In still another embodiment of the present disclosure, detecting whether the target event occurs according to the user returning data may include: detecting, according to the user returning data, whether a returning abnormal event occurs in the target area; obtaining, from the detection result, the number of pieces of user returning data from which each returning abnormal event can be determined, as the number corresponding to that event; and determining that the target event occurs when the number corresponding to every returning abnormal event is greater than or equal to that event's preset second number threshold.
Wherein, the second number threshold value can be preset according to the application scene or the specific requirement.
Step S2500, in the case that the target event occurs, generating a parking fence in the target area in the shared vehicle map.
In the case where it is determined that the target event of newly adding a parking fence to the target area occurs through step S2400, a parking fence may be generated at the target area in the shared vehicle map.
In the embodiment of the disclosure, the target area of the newly added parking fence is determined according to the driving image acquired by the driving recorder, and the parking fence is generated in the target area in the shared vehicle map when it is determined, according to the user returning data of the shared vehicles parked to the target area within the first statistical time period, that the target event of adding a parking fence in the target area occurs. By fusing the driving images collected by driving recorders with the user returning data, this embodiment can identify the target area of a newly added parking fence accurately, and generates the parking fence in the target area of the shared vehicle map for users to return vehicles, which can improve users' returning efficiency and returning experience. In addition, automatically generating parking fences in this embodiment can reduce labor costs and improve the generation efficiency of parking fences.
< apparatus embodiment >
Corresponding to the above method, the present disclosure also provides a parking fence generating apparatus 5000, as shown in fig. 5, including an image obtaining module 5100, an area determining module 5200, a data obtaining module 5300, an event detecting module 5400, and a fence generating module 5500. The image obtaining module 5100 is configured to obtain driving images collected by a driving recorder; the area determination module 5200 is configured to determine a target area of a newly added parking fence according to the driving image; the data acquisition module 5300 is configured to acquire user returning data of the shared vehicle parked to the target area within a first statistical period; the event detection module 5400 is configured to detect whether a target event of a parking fence of a newly added shared vehicle in the target area occurs according to the user returning data; the fence generation module 5500 is configured to generate a parking fence in the target area of the shared vehicle map if the target event occurs.
In one embodiment of the present disclosure, the area determination module 5200 is further configured to:
identifying whether the roadside includes a preset target object or not according to the driving image;
under the condition that the roadside is identified to contain the target object, determining an area where the target object is located as an area to be identified;
detecting whether a parking fence is arranged in the area to be identified in the shared vehicle map;
and under the condition that no parking fence is arranged in the area to be identified in the shared vehicle map, taking the area to be identified as the target area of the newly-added parking fence.
In an embodiment of the present disclosure, the determining a region where the target object is located as a region to be identified includes:
according to the driving image, identifying the direction of a road on which the vehicle with the driving recorder runs;
and obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the position of the target object in the driving image.
In an embodiment of the present disclosure, the obtaining the region to be identified according to the acquisition position of the driving image, the trend of the road, and the position of the target object in the driving image includes:
dividing the driving image into a set number of image areas according to a preset division ratio;
determining an image area to which the target object belongs as a target image area;
determining a distance corresponding to the target image area as a target distance;
and obtaining the area to be identified according to the acquisition position of the driving image, the trend of the road and the target distance.
In an embodiment of the present disclosure, the event detection module 5400 is further configured to:
detecting whether a car returning abnormal event occurs in the target area or not according to the user car returning data;
acquiring the number of occurrences of each returning abnormal event according to the detection result of the returning abnormal events;
determining a first number for each preset returning abnormal scoring item according to the number of occurrences of each returning abnormal event;
and detecting whether the target event occurs according to the first number of each returning abnormal scoring item.
In one embodiment of the present disclosure, the carriage return exception event includes at least one of:
first, the shared vehicle fails to return once;
second, the shared vehicle is illegally parked;
and third, the audit result of a complaint picture containing the target object, submitted by a user for an illegal-parking complaint, is passed.
In one embodiment of the present disclosure, the generating device 5000 of the parking fence further includes:
a module for determining a second number of a preset driving image scoring item, the second number being the number of driving images, acquired in the second statistical time period, from which the newly added parking fence in the target area can be determined;
and detecting whether the target event occurs or not according to the second number of the driving image scoring items.
In an embodiment of the present disclosure, the detecting whether the target event occurs further according to the second number of driving image scoring items includes:
acquiring a preset first score for each returning abnormal scoring item and a preset second score for the driving image scoring item;
obtaining a comprehensive score of the target area according to the first number and the first score of each returning abnormal score item and the second number and the second score of the driving image score items;
and detecting whether the target event occurs or not according to the comprehensive score of the target area.
It will be appreciated by those skilled in the art that the generation device 5000 of the parking fence can be implemented in various ways. For example, it can be implemented by configuring a processor with instructions: the instructions may be stored in a ROM and, when the device starts, read from the ROM into a programmable device. The generation device 5000 may also be hard-wired into a dedicated device (e.g., an ASIC). It may be divided into mutually independent units, or those units may be combined and implemented together. The generation device 5000 may be implemented by any one of the above implementations, or by a combination of two or more of them.
In this embodiment, the generation device 5000 of the parking fence may take various forms. For example, it may be any functional module running in a software product or application that provides the parking-fence generation service; a peripheral add-on, plug-in, or patch of that software product or application; or the software product or application itself.
< Server embodiment >
The present embodiment provides a server 6000. In one example, the server may include the aforementioned generation device 5000 of the parking fence.
In another example, as shown in fig. 6, the server 6000 may comprise a memory 6200 and a processor 6100, the memory 6200 being used for storing program instructions, and the processor 6100 being used for controlling the server 6000 to perform the method provided in any one of the foregoing embodiments when the program instructions are executed.
In the embodiments of the disclosure, the target area for a newly added parking fence is determined from driving images acquired by a driving recorder, and a parking fence is generated in the target area in the shared-vehicle map when, according to the user returning data of shared vehicles parked in the target area during the first statistical period, it is determined that the target event of newly adding a parking fence in the target area has occurred. By fusing the driving images collected by the driving recorder with the user returning data, this embodiment can identify the target area for a newly added parking fence more accurately, and generate the parking fence in the shared-vehicle map for users to return vehicles there, which improves users' returning efficiency and returning experience. In addition, generating parking fences automatically reduces labor cost and improves the generation efficiency of parking fences.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), with state information of computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method of generating a parking fence, comprising:
acquiring a driving image acquired by a driving recorder;
determining a target area of a newly-added parking fence according to the driving image;
obtaining user returning data of shared vehicles parked to the target area in a first statistical period;
detecting, according to the user returning data, whether a target event of newly adding a parking fence for shared vehicles in the target area occurs;
generating a parking fence at the target area in the shared vehicle map if the target event occurs.
2. The method of claim 1, the determining a target area of a newly added parking fence from the driving image, comprising:
identifying whether the roadside includes a preset target object or not according to the driving image;
under the condition that the roadside is identified to contain the target object, determining an area where the target object is located as an area to be identified;
detecting whether a parking fence is arranged in the area to be identified in the shared vehicle map;
and under the condition that no parking fence is arranged in the area to be identified in the shared vehicle map, taking the area to be identified as the target area of the newly-added parking fence.
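The decision flow of claim 2 can be sketched as follows; the function and variable names are illustrative stand-ins, not identifiers from the disclosure:

```python
def determine_target_area(target_object_area, fenced_areas):
    """Return the area to be used as the target area for a new parking
    fence, or None when no fence should be added.

    target_object_area: area where a preset target object was recognized
                        from the driving image (None if none was found).
    fenced_areas:       areas that already have a parking fence in the
                        shared-vehicle map.
    """
    if target_object_area is None:
        return None            # no target object recognized at the roadside
    if target_object_area in fenced_areas:
        return None            # a fence already exists there in the map
    return target_object_area  # target area of the newly added fence
```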
3. The method according to claim 2, wherein the determining the area where the target object is located as the area to be identified comprises:
identifying, according to the driving image, the direction of the road on which the vehicle equipped with the driving recorder travels;
and obtaining the area to be identified according to the acquisition position of the driving image, the direction of the road and the position of the target object in the driving image.
4. The method according to claim 3, wherein the obtaining the area to be identified according to the acquisition position of the driving image, the direction of the road and the position of the target object in the driving image comprises:
dividing the driving image into a set number of image areas according to a preset division ratio;
determining the image area to which the target object belongs as a target image area;
determining the distance corresponding to the target image area as a target distance;
and obtaining the area to be identified according to the acquisition position of the driving image, the direction of the road and the target distance.
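The region-to-distance mapping of claim 4 can be sketched as below; the number of regions and the distance values are illustrative assumptions, since the disclosure only states that each image area corresponds to a preset distance:

```python
def target_image_region(object_x, image_width, num_regions=3):
    """Divide the driving image horizontally into num_regions equal bands
    and return the index of the band containing the target object."""
    band_width = image_width / num_regions
    return min(int(object_x // band_width), num_regions - 1)

# Hypothetical lookup from image region to roadside distance in metres.
REGION_TO_DISTANCE = {0: 2.0, 1: 5.0, 2: 10.0}

def target_distance(object_x, image_width):
    """Distance corresponding to the target image area (the 'target distance')."""
    return REGION_TO_DISTANCE[target_image_region(object_x, image_width)]
```

Combining this target distance with the image's acquisition position (e.g., a GPS fix) and the road direction yields the geographic area to be identified.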
5. The method of claim 1, wherein the detecting, according to the user returning data, whether a target event of newly adding a parking fence for shared vehicles in the target area occurs comprises:
detecting whether a returning-abnormality event occurs in the target area according to the user returning data;
acquiring the occurrence frequency of each returning-abnormality event according to the detection result;
determining a first number of each preset returning-abnormality score item according to the occurrence frequency of each returning-abnormality event;
and detecting whether the target event occurs according to the first number of each returning-abnormality score item.
6. The method of claim 5, the returning-abnormality event comprising at least one of:
a first item: the shared vehicle fails to be returned at the first attempt;
a second item: the shared vehicle is parked in violation of regulations;
a third item: the audit of a complaint picture containing the target object, submitted by a user in a parking-violation complaint, is passed.
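Tallying each returning-abnormality item into the "first numbers" of claim 5 can be sketched with a simple counter; the string labels for the three items of claim 6 are assumptions, not identifiers from the disclosure:

```python
from collections import Counter

# Illustrative labels for the three returning-abnormality score items.
SCORE_ITEMS = ("first_return_failed", "illegal_parking", "complaint_audit_passed")

def first_numbers(detected_events):
    """Count how many times each returning-abnormality item occurred in
    the statistical period, giving the per-item 'first number'."""
    counts = Counter(e for e in detected_events if e in SCORE_ITEMS)
    return [counts[item] for item in SCORE_ITEMS]
```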
7. The method of claim 5, further comprising:
determining a second number of preset driving-image score items, wherein the second number is the number of driving images that are acquired in a second statistical period and used for determining a newly added parking fence in the target area;
and detecting whether the target event occurs according to the second number of the driving-image score items.
8. The method of claim 7, wherein the detecting whether the target event occurs further according to the second number of the driving-image score items comprises:
acquiring a preset first score of each returning-abnormality score item and a preset second score of the driving-image score item;
obtaining a comprehensive score of the target area according to the first number and first score of each returning-abnormality score item and the second number and second score of the driving-image score items;
and detecting whether the target event occurs according to the comprehensive score of the target area.
9. A parking fence generating apparatus, comprising:
the image acquisition module is used for acquiring driving images acquired by the driving recorder;
the area determining module is used for determining a target area of a newly-added parking fence according to the driving image;
the data acquisition module is used for acquiring user vehicle returning data of shared vehicles parked to the target area in a first statistical period;
the event detection module is used for detecting, according to the user returning data, whether a target event of newly adding a parking fence for shared vehicles in the target area occurs;
and the fence generation module is used for generating a parking fence in the target area in the shared vehicle map if the target event occurs.
10. A server, comprising the apparatus of claim 9; or,
the server comprises a memory and a processor, the memory being used for storing a computer program, and the processor being used for controlling the server to perform the method according to any one of claims 1 to 8 when the computer program is executed.
CN202211288946.3A 2022-10-20 2022-10-20 Method and device for generating parking fence and server Pending CN115731688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211288946.3A CN115731688A (en) 2022-10-20 2022-10-20 Method and device for generating parking fence and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211288946.3A CN115731688A (en) 2022-10-20 2022-10-20 Method and device for generating parking fence and server

Publications (1)

Publication Number Publication Date
CN115731688A (en) 2023-03-03

Family

ID=85294268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211288946.3A Pending CN115731688A (en) 2022-10-20 2022-10-20 Method and device for generating parking fence and server

Country Status (1)

Country Link
CN (1) CN115731688A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611420A (en) * 2024-01-18 2024-02-27 福之润智能科技(福建)有限公司 Electric vehicle returning data processing method and system based on Internet of things
CN117611420B (en) * 2024-01-18 2024-04-26 福之润智能科技(福建)有限公司 Electric vehicle returning data processing method and system based on Internet of things

Similar Documents

Publication Publication Date Title
CN110390262B (en) Video analysis method, device, server and storage medium
CN109919347B (en) Road condition generation method, related device and equipment
US9275547B2 (en) Prediction of free parking spaces in a parking area
CN110648533A (en) Traffic control method, equipment, system and storage medium
CN112837542B (en) Method and device for counting traffic volume of highway section, storage medium and terminal
CN112509325B (en) Video deep learning-based off-site illegal automatic discrimination method
CN113048982B (en) Interaction method and interaction device
CN112349087B (en) Visual data input method based on holographic perception of intersection information
CN110111582B (en) Multi-lane free flow vehicle detection method and system based on TOF camera
JP2020518165A (en) Platform for managing and validating content such as video images, pictures, etc. generated by different devices
CN112287806A (en) Road information detection method, system, electronic equipment and storage medium
CN115731688A (en) Method and device for generating parking fence and server
CN112671834A (en) Parking data processing system
CN113496213A (en) Method, device and system for determining target perception data and storage medium
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN112907761B (en) Method, electronic device, and computer storage medium for parking management
CN117075350B (en) Driving interaction information display method and device, storage medium and electronic equipment
CN114694370A (en) Method, device, computing equipment and storage medium for displaying intersection traffic flow
WO2021086884A1 (en) System, apparatus and method of provisioning allotments utilizing machine visioning
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN110853364A (en) Data monitoring method and device
CN112752067A (en) Target tracking method and device, electronic equipment and storage medium
CN113838283B (en) Vehicle position state marking method and device, storage medium and terminal
CN114596704A (en) Traffic event processing method, device, equipment and storage medium
CN114511825A (en) Method, device and equipment for detecting area occupation and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination