CN113362592A - Method, system, and computer-readable storage medium for identifying an offending traffic participant

Info

Publication number: CN113362592A
Application number: CN202110587444.XA
Authority: CN (China)
Prior art keywords: information, perception, traffic participant, traffic, target
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 房颜明, 李智, 时兵兵, 孟令钊
Current Assignee: Beijing Wanji Technology Co Ltd
Original Assignee: Beijing Wanji Technology Co Ltd
Application filed by Beijing Wanji Technology Co Ltd
Priority application: CN202110587444.XA
Publication: CN113362592A
Related application: PCT/CN2022/095590 (WO2022247931A1)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G 1/0116 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method, a system and a computer-readable storage medium for identifying an offending traffic participant, wherein the method comprises the following steps: acquiring perception information of a first perception device at a scene entrance and perception information of a second perception device within the scene; capturing a target traffic participant based on the perception information of the first perception device to acquire identity information of the target traffic participant; acquiring object feature information of the target traffic participant according to the perception information of the first perception device, binding the identity information with the object feature information to obtain binding information, and recording the binding information in a traffic participant search set; identifying object feature information of an offending traffic participant; and acquiring target binding information from the recorded binding information according to that object feature information, thereby acquiring the identity information of the offending participant and broadcasting the identity information together with the offending behavior. With the scheme of the invention, traffic participants that violate regulations within the scene can be accurately identified.

Description

Method, system, and computer-readable storage medium for identifying an offending traffic participant
Technical Field
The present invention relates generally to the field of intelligent traffic management. More particularly, the present invention relates to methods, systems, and computer-readable storage media for identifying offending traffic participants.
Background
This section is intended to provide a background or context to the embodiments of the disclosure recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Thus, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
Current intelligent traffic management involves the automatic identification of traffic objects such as people, vehicles, and roads, and the automatic detection of illegal activities. Through automatic identification and detection, key traffic management targets can be analyzed intelligently, thereby improving traffic efficiency and optimizing travel order. However, there is currently no good management solution for locations such as service areas where vehicles are parked. In particular, there is no effective detection or notification scheme for the various violations committed by vehicles within a service area. Therefore, how to accurately determine and promptly notify vehicle violations in such places has become a problem to be solved in current vehicle management.
Disclosure of Invention
In order to solve at least the technical problems described in the background section above, the present invention proposes a solution for identifying an offending traffic participant. With the scheme of the invention, traffic participants that violate regulations can be effectively identified in certain scenes, which is conducive to efficient intelligent traffic management. In view of this, the present invention provides solutions in the following aspects.
In a first aspect, the present invention provides a method of identifying an offending traffic-involved object, comprising: acquiring perception information of first perception equipment at a scene entrance; capturing a target traffic participant based on the perception information of the first perception device to acquire the identity information of the target traffic participant, wherein the target traffic participant is a traffic participant entering the scene; acquiring object characteristic information of the target traffic participant according to the perception information of the first perception device; binding the identity information of the target traffic participation object with the corresponding object characteristic information to obtain binding information of the target traffic participation object, and recording the binding information of the target traffic participation object in a traffic participation object search set; acquiring perception information of second perception equipment in the scene; identifying an illegal traffic participant according to the perception information of the second perception device so as to obtain object characteristic information of the illegal traffic participant; acquiring target binding information from the binding information of the traffic participant searching set according to the object characteristic information of the illegal traffic participant; acquiring identity information of an illegal traffic participant based on the target binding information; and outputting the identity information of the illegal traffic participant and the corresponding illegal action.
In one embodiment, the scene comprises a service area scene and the offending traffic participant is a traffic participant that violates a service area specification.
In one embodiment, the snapshot positions include a far-end snapshot position and a near-end snapshot position, and capturing the target traffic participant based on the perception information of the first perception device to acquire the identity information of the target traffic participant includes: selecting the far-end snapshot position or the near-end snapshot position at which to capture the target traffic participant, based on the perception information of the first perception device, so as to acquire the identity information of the target traffic participant.
In one embodiment, selecting the far-end snapshot position or the near-end snapshot position at which to capture the target traffic participant based on the perception information of the first perception device includes: acquiring, based on the perception information of the first perception device, a first distance and size information of the traffic participant located in front of the target traffic participant in the snapshot direction, wherein the first distance is the distance between the front of the target traffic participant and the tail of the traffic participant in front of it; and selecting the far-end snapshot position or the near-end snapshot position according to a comparison of the first distance and the size information with preset thresholds.
In one embodiment, selecting the far-end snapshot position or the near-end snapshot position according to the comparison of the first distance and the size information with preset thresholds includes: selecting the near-end snapshot position in response to the first distance being smaller than a first preset threshold and the height of the preceding traffic participant being larger than a second preset threshold; selecting the far-end snapshot position in response to the first distance being larger than the first preset threshold; or selecting the far-end snapshot position in response to the first distance being smaller than the first preset threshold and the height of the preceding traffic participant being smaller than the second preset threshold.
In one embodiment, capturing a target traffic participant object based on the perception information of the first perception device to obtain the identity information of the target traffic participant object further comprises: determining a snapshot identification frame for snapshot and identification of the target traffic participant according to the perception information and the snapshot position; and capturing and identifying the target traffic participant at the capturing position by using the capturing and identifying frame to acquire the identity information of the target traffic participant.
In one embodiment, determining the snapshot identification frame according to the perception information and the snapshot position comprises: acquiring a second distance of the traffic participant object positioned in front of the target traffic participant object in the snapshot direction according to the perception information, wherein the second distance comprises a distance between the front of the target traffic participant object and the front of the traffic participant object in front of the target traffic participant object; determining the snapshot recognition frame based on the snapshot position and the second distance.
In one embodiment, identifying the offending traffic-engaging object based on the perception information of the second perception device includes: acquiring object characteristic information, position information and speed information of a target traffic participant in the scene according to the perception information of the second perception device, wherein the object characteristic information at least comprises category information; and in response to the speed information indicating that the speed of the target traffic participation object is zero and the category information of the target traffic participation object does not match the category information of a preset area within the scene, identifying the target traffic participation object as an offending traffic participation object; or identifying the target traffic participation object as an illegal traffic participation object in response to the speed information indicating that the speed of the target traffic participation object is zero and the position information of the target traffic participation object does not match the position of a preset area within the scene.
In one embodiment, outputting the identity information of the offending traffic-engaging object and the corresponding offending behavior comprises: generating violation information associated with a category information mismatch or a location mismatch for the violation traffic engagement object; and broadcasting the violation information to the violation traffic-involved object directionally according to the position information of the violation traffic-involved object.
In a second aspect, the present invention provides a system for identifying an offending traffic-involved object, comprising: the first perception device is arranged at a scene entrance and used for acquiring perception information at the scene entrance; the snapshot device is arranged at the entrance of the scene and is used for snapshot of a target traffic participant object based on the perception information of the first perception device so as to acquire the identity information of the target traffic participant object, wherein the target traffic participant object is a traffic participant object entering the scene; the second perception device is arranged in the scene and used for acquiring perception information in the scene; and an information processing center which is in communication connection with the first perception device, the capturing device and the second perception device and is configured to: acquiring corresponding perception information from the first perception device and the second perception device respectively; acquiring object characteristic information of the target traffic participant according to the perception information of the first perception device; binding the identity information of the target traffic participation object with the corresponding object characteristic information to obtain binding information of the target traffic participation object, and recording the binding information of the target traffic participation object in a traffic participation object search set; identifying an illegal traffic participant according to the perception information of the second perception device so as to obtain object characteristic information of the illegal traffic participant; acquiring target binding information from the binding information of the traffic participant searching set according to the object characteristic information of the illegal traffic participant; acquiring identity information of an illegal traffic participant based on the target binding information; and outputting the identity information of the illegal traffic participant and the corresponding illegal action.
In one embodiment, the first sensing device comprises a lidar and/or a camera. In one embodiment, the second sensing device comprises a lidar and/or a camera. In one embodiment, the capturing device comprises a bayonet (checkpoint) camera.
In a third aspect, the present invention provides a computer readable storage medium having stored thereon computer readable instructions for identifying an offending traffic-engaging object, which, when executed by one or more processors, implement a method as described in the first aspect and its various embodiments.
By using the scheme of the invention, offending traffic participants within a scene can be effectively identified. Specifically, by combining perception information with the snapshot operation, the invention can accurately identify and store information about the traffic participants entering the scene. Further, by perceiving the traffic participants again within the scene, the scheme of the invention can accurately identify those traffic participants that violate regulations. The scheme of the invention therefore promotes effective management of traffic participants within the scene as well as the effective discovery and timely handling of violations.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. In the drawings, several embodiments of the disclosure are illustrated by way of example and not by way of limitation, and like or corresponding reference numerals indicate like or corresponding parts and in which:
FIG. 1 is an exemplary scene diagram schematically illustrating a scheme for identifying offending traffic-engaging objects in which the present invention is applied;
FIG. 2 is a flow diagram that schematically illustrates a method for identifying an offending traffic-engaging object, in accordance with an embodiment of the present invention;
FIG. 3 is a diagram schematically illustrating a process for performing a snap-shot operation according to an embodiment of the present invention; and
FIG. 4 is a schematic block diagram that schematically illustrates a system for identifying an offending traffic-engaging object, in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few, and not all, of the disclosed embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a diagram schematically illustrating an exemplary scenario 100 in which the inventive approach for identifying an offending traffic-engaging object is applied. In the context of the present invention, the aforementioned scenarios may include various environments for parking traffic-participating objects, such as various service areas on highways, various parking lots (e.g., commercial or civil), various traffic tunnels or bridge culverts, and the like. Based on this, it is understood that fig. 1 shows the scenario as a service area for exemplary purposes only. Further, in the context of the present invention, the aforementioned traffic-participating objects may be objects related to road/traffic activities, such as motor vehicles and/or non-motor vehicles, etc.
As shown in fig. 1, according to the solution of the present invention, a first sensing device configured to collect sensing information within its coverage area may be disposed at the entrance of the service area. In one implementation scenario, the first sensing device may include a lidar (shown as a dot in fig. 1) and/or a camera. In an application scenario in which the lidar and the camera are used together, point cloud data and video data can be acquired at the same time for the same scene. Target detection is performed on the point cloud data and the video data respectively, and the detection results are fused to obtain an initial fusion result. When the target is a traffic participant and the traffic participant is a vehicle, the aforementioned initial fusion result may, for example, include various types of information about the vehicle (such as vehicles 102 and 103 shown in fig. 1), including but not limited to one or more of the following: identity (ID) information of vehicles within the perception area, vehicle category information, vehicle position information, lane information of the lane in which the vehicle is located, vehicle perception time information, vehicle size information, vehicle speed information, the front head-to-head distance (shown at 301 in fig. 3), the front head-to-tail distance (shown at 302 in fig. 3), and/or vehicle image feature information.
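For illustration only, the fused per-vehicle record just listed can be represented as a simple data structure. The Python sketch below is an assumption about one possible layout and is not taken from the patent; the class name, field names and units are hypothetical and merely mirror the information items enumerated above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FusionResult:
    """One fused lidar/camera record for a vehicle at the scene entrance.

    Hypothetical layout: the field names simply mirror the information items
    listed in the description and are not taken from the patent.
    """
    vehicle_id: int                        # ID within the perception area
    category: str                          # e.g. "car" or "truck"
    position: Tuple[float, float]          # (x, y) in the perception frame
    lane: int                              # lane index at the entrance
    perceived_at: float                    # perception timestamp in seconds
    size: Tuple[float, float, float]       # (length, width, height) in metres
    speed: float                           # m/s
    front_head_distance: Optional[float]   # 301 in Fig. 3: front-to-front gap
    front_tail_distance: Optional[float]   # 302 in Fig. 3: front-to-tail gap
    image_features: Optional[List[float]] = None  # camera appearance features
```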
Further, in the direction in which the traffic participant enters the scene, that is, in this example, in the direction in which the vehicle enters the service area, a snapshot device 104 is disposed at a certain distance behind the first sensing device and is configured to capture the traffic participant entering the service area based on the perception information of the first sensing device 101, so as to obtain identity information of the target traffic participant (for example, the license plate number of the vehicle). In the context of the present invention, a traffic participant entering the service area is also a target traffic participant, and the solution of the present invention monitors such target traffic participants in order to determine any offending traffic participant that violates a service area specification. The snapshot device of the invention may be any of various types of cameras according to different embodiments; preferably, the camera may be a bayonet (checkpoint) camera.
After the first perception device and the snapshot device acquire information about the traffic participant or the target traffic participant, respectively, both may transmit the acquired information to the information processing center 105 of the present invention. The information processing center may then acquire the object feature information of the target traffic participant, for example feature information of a vehicle entering the service area, according to the perception information of the first perception device. Further, the information processing center may bind the identity information of the target traffic participant with the corresponding object feature information to obtain binding information of the target traffic participant, and record the binding information in the traffic participant search set. Taking the example in which the target traffic participant is a motor vehicle in the service area and its identity information is a license plate number, the binding operation performed by the information processing center may bind the license plate number with specific features of the motor vehicle (such as its model, color or category) and record such binding information (essentially a mapping) in the traffic participant search set, for example in a dedicated database. The scheme of the invention thereby establishes an effective way to look up the traffic participants (the vehicles in this example) that have entered the scene (the service area in this example).
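To make the binding and lookup concrete, the following Python sketch shows one minimal way such a traffic participant search set could be organized in memory. It is an illustrative assumption only: the class name, the dictionary-based record layout and the simple feature-matching rule are hypothetical, and a production system would more likely use the dedicated database mentioned above.

```python
from typing import Optional

class ParticipantLookupSet:
    """Binds identity information (e.g. a license plate) to object feature
    information and lets the record be found again from features alone."""

    def __init__(self) -> None:
        self._records = []  # list of binding records

    def bind(self, identity: str, features: dict) -> None:
        # Record the binding of identity information with object features.
        self._records.append({"identity": identity, "features": features})

    def find(self, observed: dict) -> Optional[str]:
        # Return the identity whose bound features agree with the most
        # observed features; a real system might use fuzzier matching.
        def score(record: dict) -> int:
            return sum(1 for k, v in observed.items()
                       if record["features"].get(k) == v)
        best = max(self._records, key=score, default=None)
        return best["identity"] if best is not None and score(best) > 0 else None

# Hypothetical usage:
#   lookup = ParticipantLookupSet()
#   lookup.bind("京A12345", {"category": "car", "color": "white"})
#   lookup.find({"category": "car", "color": "white"})  # -> "京A12345"
```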
In order to effectively determine an offending traffic participant that violates the service area parking rules after target traffic participants (e.g., the plurality of vehicles shown in the lower part of fig. 1) are parked in the service area, the solution of the present invention proposes to provide a second sensing device 106, located in the service area opposite the service area entrance, to collect sensing data within the service area, wherein the sensing data may include information about the offending traffic participant. In the context of the present invention, a violation may include any behavior that breaches a specification set for the scene. In the case of a service area, violations of service area specifications may involve improper parking behavior. In one implementation scenario, the second sensing device has the same processing capabilities as the first sensing device as regards sensing an offending traffic participant in the service area, and the technical description above regarding the first sensing device therefore also applies to the second sensing device.
Similarly, the second sensing device may also transmit its sensing information to the information processing center so as to obtain the object feature information of the offending traffic participant, and target binding information is then obtained from the binding information of the aforementioned traffic participant search set according to that object feature information. The information processing center may then acquire the identity information of the offending traffic participant based on the target binding information. After the identity information of the offending traffic participant has been determined, the information processing center can output the identity information together with the corresponding violation. According to various embodiments, the output means may be a voice announcement, a video display, or a combination of both. In one scenario, a plurality of large display screens may be arranged in the service area, so that the information processing center selects the large display screen closest to the offending traffic participant for display and broadcast.
While the invention has been described with reference to fig. 1, it is to be understood that the above description is intended to be illustrative and not restrictive, and that changes may be made to the scene shown in fig. 1 by those skilled in the art in light of the teachings of the present invention without departing from its spirit or essential scope. For example, although the information processing center is shown as separate from the first sensing device, the second sensing device and the snapshot device, in some scenarios it may be arranged close to one of the three, thereby shortening the data transmission path and improving the stability of data transmission. Alternatively, the information processing center may be disposed at a remote end (e.g., in the cloud). In this case, the identity information of the offending traffic participant and the violation information determined at the remote end can be sent to a broadcast device within the scene, so as to prompt the violator (such as the owner of the vehicle) to take notice and correct the violation in time.
Fig. 2 is a flow diagram that schematically illustrates a method 200 for identifying an offending traffic-engaging object, in accordance with an embodiment of the present invention. It will be appreciated that the method flow illustrated in fig. 2 may be implemented in the exemplary scenario illustrated in fig. 1, and thus what is described with respect to fig. 1 (e.g., with respect to the perceiving device) is equally applicable to fig. 2.
As shown in fig. 2, at step S201, perception information of a first perception device (e.g., the first perception device 101 in fig. 1) at a scene entrance is acquired. As previously described, in one embodiment, the scenario may be a service area scenario.
Next, at step S202, a snapshot of the target traffic participant is taken based on the perception information of the first perception device to obtain the identity information of the target traffic participant. As previously mentioned, this snapshot operation may be performed, for example, by the snapshot device shown in fig. 1. In order to take an accurate snapshot and thus acquire accurate identity information, the invention proposes setting a far-end snapshot position and a near-end snapshot position, so that the target traffic participant can be captured at either the far-end or the near-end snapshot position, selected on the basis of the perception information of the first perception device. The specific snapshot operation will be described in detail later in conjunction with fig. 3.
After acquiring the identity information of the target traffic participant, at step S203, the object feature information of the target traffic participant may be acquired according to the perception information of the first perception device. As described above, the object feature information may include, for example, the ID, category, position, lane, front head-to-head distance, front head-to-tail distance, and/or vehicle image feature information of traffic participants within the scene. The flow then advances to step S204, where the identity information of the target traffic participant is bound with the corresponding object feature information to obtain the binding information of the target traffic participant, and at step S205, the binding information of the target traffic participant is recorded in a traffic participant search set (e.g., a database).
Next, at step S206, the perception information of a second perception device within the scene may be acquired. The second perception device is, for example, the second perception device 106 shown in fig. 1. In operation, the second perception device perceives the target traffic participants within the scene so as to obtain perception information about them. Further, at step S207, the offending traffic participant may be identified according to the perception information of the second perception device, so as to obtain the object feature information of the offending traffic participant.
In order to identify the offending traffic participant, in one implementation scenario, the object feature information, position information and speed information of a target traffic participant within the scene may be acquired according to the perception information of the second perception device, wherein the object feature information includes at least category information. Then, in response to the speed information indicating that the speed of the target traffic participant is zero and the category information of the target traffic participant not matching the category information of a preset area within the scene, the target traffic participant is identified as an offending traffic participant. Taking the scene as a service area and the category of the target traffic participant as a car for example: when the car is parked in a parking area preset for trucks, the target traffic participant (i.e., the car in this example) is determined to violate the parking rules of the service area and thus becomes an offending traffic participant in the context of the present invention. Additionally or alternatively, the target traffic participant is identified as an offending traffic participant in response to the speed information indicating that its speed is zero and its position information not matching the position of a preset area within the scene. This is the case, for example, when the position information in the perception information indicates that the vehicle is parked at a position other than a preset parking position in the service area. In other words, the vehicle is parked in a non-parking area within the service area, such as a pedestrian passageway or a fire lane, and it will therefore also be determined to be an offending traffic participant. A minimal sketch of this check is given below.
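The two violation conditions just described (zero speed plus a category mismatch, or zero speed plus a position mismatch) can be expressed compactly. The helper below is a minimal sketch under assumed interfaces: `preset_areas` is imagined as objects exposing a `contains(position)` test and an `allowed_category` attribute, neither of which is defined in the patent.

```python
from typing import Iterable, Optional, Tuple

def is_offending(speed: float, category: str,
                 position: Tuple[float, float],
                 preset_areas: Iterable) -> Optional[str]:
    """Return a violation label for a stationary participant, or None."""
    if speed != 0:
        return None  # only stationary participants are checked here

    area = next((a for a in preset_areas if a.contains(position)), None)
    if area is None:
        # Parked outside every preset parking area, e.g. on a pedestrian
        # passageway or fire lane: position mismatch.
        return "position mismatch"
    if area.allowed_category != category:
        # Parked in an area reserved for another vehicle category,
        # e.g. a car in a truck parking zone: category mismatch.
        return "category mismatch"
    return None
```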
After the offending traffic participant has been identified in the various manners described above and its object feature information thereby obtained, the flow advances to step S208. At step S208, target binding information may be obtained from the binding information of the traffic participant search set updated at step S205, according to the object feature information of the offending traffic participant. Next, at step S209, the identity information of the offending traffic participant may be acquired based on the target binding information, so that at step S210, the identity information of the offending traffic participant and the corresponding violation may be output. In one embodiment, violation information associated with a category mismatch or a position mismatch may be generated for the offending traffic participant. The violation information can then be broadcast directionally to the offending traffic participant according to its position information, so that the offending traffic participant is effectively reminded and warned.
FIG. 3 is a diagram that schematically illustrates the performance of a snapshot operation within a scene, in accordance with an embodiment of the present invention. For convenience of description only, the snapshot operation of the present invention is described below taking a vehicle as an example of the traffic participant.
As shown in fig. 3, when a vehicle enters the scene, for example when the motor vehicles 102 and 103 in the figure enter the service area, the present invention proposes to capture it with the snapshot device 104 in order to acquire the identity information of the vehicle. To identify that identity without error, the present invention provides a far-end snapshot position and a near-end snapshot position, and selects whether to capture the target traffic participant at the far-end snapshot position (as indicated by arrow 303 in fig. 3) or at the near-end snapshot position (as indicated by arrow 304 in fig. 3), based on the perception information of the first perception device (e.g., the first perception device 101 in fig. 1), so as to obtain the identity information of the target traffic participant. In one embodiment, the far-end and/or near-end snapshot positions of the present invention are also associated with the lane information of the lane in which the vehicle is traveling.
As regards selecting the far-end or near-end snapshot position, the present invention proposes to acquire, based on the perception information of the first perception device, the first distance and the size information of the traffic participant (vehicle 102 in this example) located ahead of the target traffic participant (vehicle 103 in this example) in the snapshot direction, where the first distance is the distance between the front of the target traffic participant and the tail of the traffic participant ahead of it (i.e., the distance shown at 302 in the figure). The target traffic participant can then be captured at the far-end or near-end snapshot position, selected by comparing the first distance and the size information with preset thresholds. In other words, for a vehicle for which the far-end snapshot position has been selected, the snapshot device of the invention is triggered to take the snapshot when the vehicle reaches the far-end snapshot position; similarly, for a vehicle for which the near-end snapshot position has been selected, the snapshot device is triggered when the vehicle reaches the near-end snapshot position.
In one implementation, selecting the far-end or near-end snapshot position according to the comparison of the first distance and the size information with the preset thresholds includes: selecting the near-end snapshot position in response to the first distance being smaller than a first preset threshold and the height of the preceding traffic participant being larger than a second preset threshold; selecting the far-end snapshot position, contrary to the aforementioned condition, in response to the first distance being larger than the first preset threshold; or selecting the far-end snapshot position in response to the first distance being smaller than the first preset threshold and the height of the preceding traffic participant being smaller than the second preset threshold. The rule is sketched in code below.
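Under the assumption of concrete threshold values, the trigger-position rule in the preceding paragraph reduces to a small function. The sketch below is illustrative only; the function name, parameter names and any threshold values are not taken from the patent.

```python
FAR_END, NEAR_END = "far-end", "near-end"

def select_snapshot_position(first_distance: float, front_height: float,
                             dist_threshold: float, height_threshold: float) -> str:
    """Choose the trigger position for the snapshot device.

    first_distance: front of the target vehicle to the tail of the vehicle
    ahead (302 in Fig. 3); front_height: height of the vehicle ahead.
    Threshold values are deployment-specific and assumed here.
    """
    if first_distance < dist_threshold and front_height > height_threshold:
        # A tall vehicle close ahead: capture at the near-end position.
        return NEAR_END
    # Large gap ahead, or only a low vehicle close ahead: use the far end.
    return FAR_END
```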
The snapshot operation of the present invention is described below taking the truck 103 in fig. 3 as a specific capture target. Before the truck 103 and the car 102 enter the service area, both are sensed by the first sensing device at the entrance of the service area, so that sensing information covering the truck and the car is acquired. Then, after acquiring this perception information, the information processing center in fig. 1, for example, may determine from it the distance from the front of the truck to the tail of the car (i.e., the distance shown at 302) and the height of the car (i.e., the size information). When that distance is smaller than the first preset threshold and the height of the car is larger than the second preset threshold, the information processing center may select the near-end snapshot position as the trigger position (as indicated by arrow 304 in fig. 3).
The snapshot operation of the invention has been described in detail above in connection with fig. 3. After the snapshot position has been determined, in order to accurately identify the identity information of the target traffic participant in the service area, one embodiment of the invention further provides that a snapshot recognition frame, used to capture and recognize the target traffic participant, is determined according to the perception information and the snapshot position. The target traffic participant can then be captured and recognized within that frame at the snapshot position so as to acquire its identity information. In one embodiment, in order to determine the snapshot recognition frame, the invention proposes to obtain from the perception information (i.e., the perception information acquired by the first perception device) a second distance of the traffic participant located ahead of the target traffic participant in the snapshot direction, where the second distance is the distance between the front of the target traffic participant and the front of the traffic participant ahead of it (i.e., the distance shown at 301 in fig. 3), and then to determine the snapshot recognition frame based on the snapshot position and the second distance, in order to capture and recognize the vehicle. For example, the snapshot device may recognize the license plate of the vehicle according to the selected snapshot position (which includes lane information) and the determined snapshot recognition frame, so as to obtain the identity information of the vehicle. The snapshot operation therefore provides effective identification of the target traffic participants within the scene and lays a good foundation for the subsequent binding and determination of offending traffic participants.
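How the snapshot recognition frame is derived from the snapshot position and the second distance is not spelled out numerically in the patent, so the following Python sketch should be read as a loose illustration under stated assumptions: a preset base frame per (snapshot position, lane) pair and a fixed pixels-per-metre factor are invented here purely to show how the head-to-head distance could bound the usable frame.

```python
from typing import Dict, Tuple

PIXELS_PER_METRE = 12.0  # assumed image-scale factor, purely illustrative

def recognition_frame(snapshot_position: str, lane: int, second_distance: float,
                      preset_frames: Dict[Tuple[str, int], Tuple[int, int, int, int]]
                      ) -> Tuple[int, int, int, int]:
    """Pick a preset base frame for (snapshot position, lane) and shrink it so
    that it stops short of the preceding vehicle's front.

    The mapping from the head-to-head distance (301 in Fig. 3) to pixels is an
    assumption; the patent does not specify it.
    """
    x, y, w, h = preset_frames[(snapshot_position, lane)]
    usable_height = min(h, int(second_distance * PIXELS_PER_METRE))
    return (x, y, w, usable_height)
```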
Fig. 4 is a schematic block diagram that schematically illustrates a system 400 for identifying an offending traffic-engaging object, in accordance with an embodiment of the present invention. It is understood that the system shown in fig. 4 is only one implementation of the solution of the present invention, and those skilled in the art can also adapt the system 400 to different application scenarios according to the teachings of the present invention. Further, since fig. 4 is a simplified illustration of the system shown in fig. 1, the description with respect to fig. 1 applies equally to the description with respect to fig. 4.
As shown in fig. 4, the system 400 includes a first perception device 101, a snapshot device 104, a second perception device 106, an information processing center 105, and a violation broadcast device 401. As described above, the first perception device 101, the snapshot device 104, the second perception device 106 and the information processing center 105 operate cooperatively to determine the offending traffic participants within the scene, while the violation broadcast device 401 is used to broadcast information about the offending traffic participant, for example in a directional manner, so as to prompt the violator to correct the violation in time.
Although the system architecture of the present invention is described above in terms of multiple devices and an information processing center, in some implementation scenarios the present invention may also be implemented with a modular design.
For example, the first sensing device and the second sensing device of the present invention may respectively constitute a scene-entrance multi-source sensing module and an in-scene multi-source sensing module, which sense the traffic participants (e.g., vehicles) in their detection areas in real time, so as to obtain sensing information about each vehicle, including but not limited to its ID, category, position, lane, sensing time, size, speed information, front head-to-head distance, front head-to-tail distance, and/or vehicle image feature information.
For another example, the snapshot and recognition operations of the present invention can also be implemented by a trigger position selection module, a trigger snapshot module and a license plate recognition module. Specifically, the trigger position selection module may make its determination according to the front head-to-tail distance and the front vehicle height in the entrance vehicle perception information. When the front head-to-tail distance is smaller than a preset threshold and the front vehicle height is larger than a preset threshold, the near-end snapshot position is selected as the trigger position; otherwise, the far-end snapshot position is selected. Additionally, the trigger position selection module may also update the perception information of the vehicle at the scene entrance to include the selected snapshot position.
Further, the trigger snapshot module can make its judgment according to the vehicle position in the entrance vehicle perception information. When the vehicle has just entered the trigger area, the trigger snapshot module can select a preset recognition frame position according to the front head-to-head distance and the snapshot position in the entrance vehicle perception information. The trigger snapshot module may then send trigger snapshot information to the license plate recognition module, where the trigger snapshot information may include, but is not limited to, the vehicle ID, the recognition frame position, the lane, and the snapshot time. In response to receiving the trigger snapshot information, the license plate recognition module recognizes the license plate at the snapshot position in the corresponding lane, based on the recognition frame position and the lane information in the trigger snapshot information, and outputs the result to the information processing center so as to obtain the identity information of the vehicle, which may include, but is not limited to, the license plate number of the vehicle and the trigger snapshot information.
For the binding operation, the invention may also provide an identity information binding module, which uniquely determines the entrance vehicle perception information according to the vehicle ID carried in the trigger snapshot information of the vehicle identity information, thereby completing the binding of the vehicle identity information with the entrance vehicle perception information. Vehicle fusion information may thus be formed via fusion, which may include, but is not limited to, the ID, category, position, perception time, size, speed information, license plate number, and/or vehicle image feature information of the vehicle.
For the identification of violations, the invention may also provide a vehicle violation judgment module, which makes its judgment according to the speed information in the vehicle fusion information. For example, when the speed information indicates that the vehicle speed is 0, it may be determined whether the vehicle is within a parking area by comparing a preset parking area with the position information in the fusion information. When the vehicle position is not within any parking area, the vehicle is determined to be illegally parked. Further, it can be judged whether the vehicle is in violation by combining the category information in the fusion information with the preset parking area category. For example, if the vehicle position is within a parking area but the vehicle category does not match the category of that parking area, it may be determined that the vehicle is not parked in accordance with its category and therefore constitutes a violation. On this basis, violation information for the offending vehicle may be formed, which may include, but is not limited to, the vehicle ID, category, position, violation time, violation category, vehicle size, speed information, license plate number, or vehicle image feature information.
In order to look up and broadcast the offending vehicle, the invention may also provide an identity information search module and a broadcast module. In one embodiment, the identity information search module may match the violation information, according to the vehicle category, size, vehicle image feature information and the like contained in it, against the vehicle fusion information to find the identity information belonging to that vehicle, thereby forming violation broadcast information, which may include, but is not limited to, the ID, category, position, violation time, violation category, size, speed information, and/or license plate number of the vehicle. The broadcast module can then automatically select the broadcast device or large display screen closest to the offending vehicle according to the position information in the violation broadcast information, and broadcast the violation category together with the license plate number. A sketch of this nearest-device selection follows.
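As a small illustration of the nearest-device selection just described, the sketch below picks the broadcast device or display screen closest to the offending vehicle by straight-line distance. The function name and the device-list format are assumptions; any real deployment would use its own device registry.

```python
import math
from typing import List, Tuple

def nearest_broadcast_device(violation_position: Tuple[float, float],
                             devices: List[Tuple[str, Tuple[float, float]]]) -> str:
    """Return the id of the broadcast device or display screen closest to the
    offending vehicle, by straight-line distance."""
    px, py = violation_position

    def distance(device: Tuple[str, Tuple[float, float]]) -> float:
        _, (dx, dy) = device
        return math.hypot(dx - px, dy - py)

    device_id, _ = min(devices, key=distance)
    return device_id
```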
From the above description of the modular design of the present invention, it can be seen that the system of the present invention can be arranged flexibly according to the application scenario or requirements, without being limited to the architecture shown in the accompanying drawings. Further, it should also be understood that any module, unit, component, server, computer, or device performing operations of examples of the invention may include or otherwise access a computer-readable medium, such as a storage medium, computer storage medium, or data storage device (removable and/or non-removable), such as a magnetic disk, optical disk, or magnetic tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer-readable instructions, data structures, program modules or other data. On this basis, the present invention also discloses a computer-readable storage medium having stored thereon computer-readable instructions for identifying an offending traffic participant, which, when executed by one or more processors, implement the methods and operations previously described in connection with the figures.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Although the embodiments of the present invention are described above, the descriptions are only examples for facilitating understanding of the present invention, and are not intended to limit the scope and application scenarios of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A method of identifying an offending traffic participant, comprising:
acquiring perception information of first perception equipment at a scene entrance;
capturing a target traffic participant based on the perception information of the first perception device to acquire the identity information of the target traffic participant, wherein the target traffic participant is a traffic participant entering the scene;
acquiring object characteristic information of the target traffic participant according to the perception information of the first perception device;
binding the identity information of the target traffic participation object with the corresponding object characteristic information to obtain binding information of the target traffic participation object, and recording the binding information of the target traffic participation object in a traffic participation object search set;
acquiring perception information of second perception equipment in the scene;
identifying an illegal traffic participant according to the perception information of the second perception device so as to obtain object characteristic information of the illegal traffic participant;
acquiring target binding information from the binding information of the traffic participant searching set according to the object characteristic information of the illegal traffic participant;
acquiring identity information of an illegal traffic participant based on the target binding information; and
outputting the identity information of the illegal traffic participant and the corresponding illegal action.
2. The method of claim 1, wherein the scene comprises a service area scene and the offending traffic participant is a traffic participant that violates a service area specification.
3. The method of claim 1, wherein the capturing positions comprise a far-end capturing position and a near-end capturing position, and capturing the target traffic participant object based on the perception information of the first perception device to obtain the identity information of the target traffic participant object comprises:
and selecting the far-end snapshot position or the near-end snapshot position to snapshot the target traffic participant based on the perception information of the first perception device so as to acquire the identity information of the target traffic participant.
4. The method of claim 3, wherein selecting to snap the target traffic participant at the far-end snap position or the near-end snap position based on the perception information of the first perception device comprises:
acquiring first distance and size information of traffic participation objects positioned in front of the target traffic participation object in the snapshot direction based on the perception information of the first perception device, wherein the first distance comprises the distance between the front part of the target traffic participation object and the tail part of the traffic participation object in front of the target traffic participation object; and
selecting the far-end snapshot position or the near-end snapshot position to snapshot the target traffic participation object according to the comparison between the first distance and the size information and a preset threshold value.
5. The method of claim 4, wherein selecting the target traffic participant for snapping at the far-end snapping position or the near-end snapping position according to the comparison of the first distance and size information with a preset threshold comprises:
in response to the first distance being smaller than a first preset threshold and the height of the front traffic participant being larger than a second preset threshold, selecting the near-end snapshot position to snapshot the target traffic participant;
selecting the far-end snapshot position to snapshot the target traffic participant in response to the fact that the first distance is larger than a first preset threshold value; or
selecting the far-end capturing position to capture the target traffic participation object in response to the first distance being smaller than a first preset threshold and the height of the front traffic participation object being smaller than a second preset threshold.
6. The method of claim 3, wherein capturing a target traffic participant object based on the perception information of the first perception device to obtain the identity information of the target traffic participant object further comprises:
determining a snapshot identification frame for snapshot and identification of the target traffic participant according to the perception information and the snapshot position; and
capturing and identifying the target traffic participant at the capturing position by using the capturing and identifying frame to acquire the identity information of the target traffic participant.
7. The method of claim 6, wherein determining the snapshot recognition box according to the perception information and the snapshot position comprises:
acquiring a second distance of the traffic participant object positioned in front of the target traffic participant object in the snapshot direction according to the perception information, wherein the second distance comprises a distance between the front of the target traffic participant object and the front of the traffic participant object in front of the target traffic participant object; and
determining the snapshot recognition frame based on the snapshot position and the second distance.
8. The method of claim 1, wherein identifying an offending traffic-engaging object based on the perception information of the second perception device comprises:
acquiring object characteristic information, position information and speed information of a target traffic participant in the scene according to the perception information of the second perception device, wherein the object characteristic information at least comprises category information;
in response to the speed information indicating that the speed of the target traffic participant object is zero and the category information of the target traffic participant object not matching the category information of a preset area within the scene, identifying the target traffic participant object as an offending traffic participant object; or
in response to the speed information indicating that the speed of the target traffic participant object is zero and the position information of the target traffic participant object not matching the position of a preset area within the scene, identifying the target traffic participant object as an offending traffic participant object.
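A minimal sketch of the two stopping rules in claim 8 follows. The perception record, the single rectangular preset area, and the exact-zero speed test are illustrative assumptions; a deployed system would more likely use a speed tolerance and check every preset area in the scene.

```python
# Hypothetical sketch of the two violation rules of claim 8.
# The data model and the single rectangular preset area are assumptions.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class PresetArea:
    """A region of the scene reserved for one category of traffic participant,
    for example a bus bay or a motorcycle parking zone."""
    category: str
    x_range: Tuple[float, float]
    y_range: Tuple[float, float]

    def contains(self, x: float, y: float) -> bool:
        return (self.x_range[0] <= x <= self.x_range[1]
                and self.y_range[0] <= y <= self.y_range[1])


@dataclass
class ScenePerception:
    """Object characteristic, position, and speed information from the
    second perception device."""
    category: str
    x: float
    y: float
    speed_mps: float


def is_offending(p: ScenePerception, area: PresetArea) -> bool:
    """A stopped participant offends if its category does not match the
    preset area it occupies (category mismatch), or if it has stopped
    outside the preset area (position mismatch)."""
    if p.speed_mps != 0.0:
        return False
    inside = area.contains(p.x, p.y)
    return (inside and p.category != area.category) or not inside
```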
9. The method of claim 8, wherein outputting the identity information of the offending traffic participant object and the corresponding offending behavior comprises:
generating violation information associated with the category information mismatch or the position mismatch of the offending traffic participant object; and
broadcasting the violation information directionally to the offending traffic participant object according to the position information of the offending traffic participant object.
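The fragment below sketches claim 9 under the assumption that the scene is equipped with several addressable roadside broadcast units whose positions are known; the message wording and the nearest-unit rule are illustrative only.

```python
# Hypothetical sketch of claim 9: build a violation message for the mismatch
# type and pick the broadcast unit nearest to the offender's position.
# The broadcast-unit abstraction is an assumption, not part of the patent text.
import math
from typing import Dict, Tuple


def build_violation_message(identity: str, mismatch: str) -> str:
    """Compose violation information for a category or position mismatch."""
    reasons = {
        "category": "stopped in an area reserved for another vehicle category",
        "position": "stopped outside any designated area",
    }
    return f"{identity}: {reasons[mismatch]}, please move your vehicle."


def nearest_broadcast_unit(offender_xy: Tuple[float, float],
                           units: Dict[str, Tuple[float, float]]) -> str:
    """Choose the unit closest to the offender so the announcement is
    directed at that participant rather than at the whole scene."""
    return min(units, key=lambda name: math.dist(units[name], offender_xy))
```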
10. A system for identifying an offending traffic participant, comprising:
the first perception device is arranged at a scene entrance and used for acquiring perception information at the scene entrance;
the snapshot device is arranged at the scene entrance and used for snapshotting a target traffic participant object based on the perception information of the first perception device so as to acquire the identity information of the target traffic participant object, wherein the target traffic participant object is a traffic participant object entering the scene;
the second perception device is arranged in the scene and used for acquiring perception information in the scene; and
an information processing center communicatively connected to the first perception device, the snapshot device, and the second perception device, and configured to:
acquiring corresponding perception information from the first perception device and the second perception device respectively;
acquiring object characteristic information of the target traffic participant according to the perception information of the first perception device;
binding the identity information of the target traffic participant object with the corresponding object characteristic information to obtain binding information of the target traffic participant object, and recording the binding information of the target traffic participant object in a traffic participant object search set;
identifying an offending traffic participant object according to the perception information of the second perception device so as to obtain object characteristic information of the offending traffic participant object;
acquiring target binding information from the binding information in the traffic participant object search set according to the object characteristic information of the offending traffic participant object;
acquiring the identity information of the offending traffic participant object based on the target binding information; and
outputting the identity information of the offending traffic participant object and the corresponding offending behavior.
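To make the binding and look-up flow performed by the information processing center concrete, here is a hypothetical sketch of the traffic participant object search set. The chosen feature fields (category, colour, length) and the matching tolerance are assumptions; the claim only requires that identity information be bound to object characteristic information at the entrance and later retrieved by matching the characteristics observed in the scene.

```python
# Hypothetical sketch of the binding / look-up flow of claim 10.
# Feature fields and the matching rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class ObjectFeatures:
    """Object characteristic information observed by a perception device."""
    category: str   # e.g. "car", "truck"
    colour: str
    length_m: float


@dataclass
class Binding:
    identity: str            # e.g. licence plate read at the scene entrance
    features: ObjectFeatures


class TrafficParticipantSearchSet:
    """Records entrance bindings and resolves in-scene detections to them."""

    def __init__(self) -> None:
        self._bindings: List[Binding] = []

    def record(self, identity: str, features: ObjectFeatures) -> None:
        """Bind the entrance identity to the entrance features and store it."""
        self._bindings.append(Binding(identity, features))

    def find_identity(self, observed: ObjectFeatures,
                      length_tol_m: float = 0.5) -> Optional[str]:
        """Return the identity whose bound features match an in-scene
        observation (same category and colour, length within a tolerance)."""
        for b in self._bindings:
            if (b.features.category == observed.category
                    and b.features.colour == observed.colour
                    and abs(b.features.length_m - observed.length_m) <= length_tol_m):
                return b.identity
        return None
```

A linear scan suffices here because the search set only needs to hold the participants currently inside the scene.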
11. The system of claim 10, wherein the first perception device comprises a lidar and/or a camera.
12. The system of claim 10, wherein the snapshot device comprises a checkpoint camera.
13. The system of claim 10, wherein the second perception device comprises a lidar and/or a camera.
14. A computer-readable storage medium having stored thereon computer-readable instructions for identifying an offending traffic participant object, which, when executed by one or more processors, implement the method of any one of claims 1-9.
CN202110587444.XA 2021-05-27 2021-05-27 Method, system, and computer-readable storage medium for identifying an offending traffic participant Pending CN113362592A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110587444.XA CN113362592A (en) 2021-05-27 2021-05-27 Method, system, and computer-readable storage medium for identifying an offending traffic participant
PCT/CN2022/095590 WO2022247931A1 (en) 2021-05-27 2022-05-27 Method and system for identifying illegal traffic participant, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110587444.XA CN113362592A (en) 2021-05-27 2021-05-27 Method, system, and computer-readable storage medium for identifying an offending traffic participant

Publications (1)

Publication Number Publication Date
CN113362592A true CN113362592A (en) 2021-09-07

Family

ID=77528094

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110587444.XA Pending CN113362592A (en) 2021-05-27 2021-05-27 Method, system, and computer-readable storage medium for identifying an offending traffic participant

Country Status (1)

Country Link
CN (1) CN113362592A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590454A (en) * 2016-01-27 2016-05-18 福建工程学院 Vehicle violation behavior proof-providing method and system thereof
WO2017128874A1 (en) * 2016-01-27 2017-08-03 福建工程学院 Traffic violation evidence producing method and system thereof
CN105788292A (en) * 2016-04-07 2016-07-20 四川巡天揽胜信息技术有限公司 Method and apparatus for obtaining driving vehicle information
CN108986473A (en) * 2017-05-31 2018-12-11 蔚来汽车有限公司 Vehicle mounted traffic unlawful practice identification and processing system and method
CN108091142A (en) * 2017-12-12 2018-05-29 公安部交通管理科学研究所 For vehicle illegal activities Tracking Recognition under highway large scene and the method captured automatically
CN110459057A (en) * 2019-07-29 2019-11-15 湖南湖芯信息科技有限公司 Vehicle-mounted candid photograph traffic violations event handling system based on masses' supervision
CN110782677A (en) * 2019-11-25 2020-02-11 湖南车路协同智能科技有限公司 Illegal vehicle snapshot warning method and device
CN111932693A (en) * 2020-08-11 2020-11-13 杭州立方控股股份有限公司 Management system for urban roadside parking lot

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022247932A1 (en) * 2021-05-27 2022-12-01 北京万集科技股份有限公司 Method and system for recognizing traffic violation participant, and computer-readable storage medium
WO2022247931A1 (en) * 2021-05-27 2022-12-01 北京万集科技股份有限公司 Method and system for identifying illegal traffic participant, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN113470371B (en) Method, system, and computer-readable storage medium for identifying an offending vehicle
CN113470372B (en) Method, system, and computer-readable storage medium for identifying an offending vehicle
CN111047870A (en) Traffic violation vehicle recognition system, server, and non-volatile storage medium storing vehicle control program
CN102254429B (en) Video identification-based detection method of detection device of violation vehicles
CN108806272B (en) Method and device for reminding multiple motor vehicle owners of illegal parking behaviors
CN108765975B (en) Roadside vertical parking lot management system and method
CN111028529A (en) Vehicle-mounted device installed in vehicle, and related device and method
CN111985356A (en) Evidence generation method and device for traffic violation, electronic equipment and storage medium
CN111340003B (en) Evidence obtaining method and system for illegal lane change behavior in electronic police blind area
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
CN105448087A (en) Integrated system and method for rapid vehicle clearance, non-stop fee payment, safe early warning, fog monitoring, and command management of vehicles on highway
KR102067006B1 (en) System and Method for Managing Vehicle Running Information
CN113362592A (en) Method, system, and computer-readable storage medium for identifying an offending traffic participant
US20110032120A1 (en) Method and system for infraction detection based on vehicle traffic flow data
CN113362610B (en) Method, system, and computer-readable storage medium for identifying an offending traffic participant
CN112509325B (en) Video deep learning-based off-site illegal automatic discrimination method
KR20160141226A (en) System for inspecting vehicle in violation by intervention and the method thereof
CN111081031B (en) Vehicle snapshot method and system
CN111492416A (en) Violation monitoring system and violation monitoring method
CN103577412A (en) High-definition video based traffic incident frame tagging method and system
CN113781827A (en) Video data management method of cloud platform and cloud platform
CN109003457B (en) Method and device for recording behaviors of multiple motor vehicles illegally occupying emergency lane
WO2022247932A1 (en) Method and system for recognizing traffic violation participant, and computer-readable storage medium
CN111526475A (en) Target vehicle tracking method and device, electronic equipment and storage medium
CN115798249A (en) Parking management method and system based on fusion of geomagnetic data and video data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210907