CN114373152A - Method and device for identifying road violation, electronic equipment and storage medium

Method and device for identifying road violation, electronic equipment and storage medium

Info

Publication number
CN114373152A
CN114373152A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
road
image
pod
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210015976.0A
Other languages
Chinese (zh)
Inventor
林凡雨
崔书刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuandu Internet Technology Co ltd
Original Assignee
Beijing Yuandu Internet Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuandu Internet Technology Co ltd filed Critical Beijing Yuandu Internet Technology Co ltd
Priority to CN202210015976.0A
Publication of CN114373152A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a method and a device for identifying road violations, electronic equipment and a storage medium. The method comprises the following steps: acquiring a first image captured while the unmanned aerial vehicle cruises a road; processing the first image to form road data and information of an object located on the road; matching violation constraint information against the information of the object and the road data, and obtaining a matching result; when the object is determined to be in violation according to the matching result, controlling the unmanned aerial vehicle and its pod to track the object and capture a second image; and acquiring the identification of the object according to the second image. In this way, violations in different areas of the road within the range of the unmanned aerial vehicle's cruise route can be identified, enlarging the area over which road violations are recognized.

Description

Method and device for identifying road violation, electronic equipment and storage medium
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to a method and a device for identifying road violation, electronic equipment and a storage medium.
Background
In order to standardize driving behavior and reduce road safety accidents, real-time traffic conditions on roads need to be monitored, and when a violation is detected, evidence must be collected against the object violating the traffic rules. At present, evidence of road violations is generally obtained either through cameras fixedly mounted on the road or through manual policing. With fixed cameras, violating objects are identified from the images the cameras capture. With manual policing, a traffic police officer photographs the violation after observing it on site. However, on-site policing has a limited range of application and wastes manpower, so road violation evidence is mainly obtained through cameras fixedly mounted on the road.
In the course of implementing the prior art, the inventors found that:
when road violation evidence is collected through cameras fixedly mounted on the road, the shooting range of each camera is limited, so evidence can only be collected within fixed, limited road areas. Road violations in areas not covered by cameras therefore frequently cause traffic jams and safety accidents. If evidence were to be collected in all areas of a road, numerous cameras would have to be densely deployed along it, which would greatly increase road construction costs and the difficulty of road planning.
Therefore, it is necessary to provide a technical solution that can collect road violation evidence for all areas of a road.
Disclosure of Invention
The embodiment of the application provides a technical scheme for tracking and monitoring road violations by using an unmanned aerial vehicle, which solves the technical problem that the road violation identification area is fixed.
Specifically, the method for identifying road violations comprises the following steps: acquiring a first image captured while the unmanned aerial vehicle cruises a road; processing the first image to form road data and information of an object located on the road; matching violation constraint information against the information of the object and the road data, and obtaining a matching result; when the object is determined to be in violation according to the matching result, controlling the unmanned aerial vehicle and its pod to track the object and capture a second image; and acquiring the identification of the object according to the second image.
Further, acquiring the first image captured while the unmanned aerial vehicle cruises the road specifically includes: controlling the pitch angle of the pod of the unmanned aerial vehicle relative to the ground to be a first pitch angle, and controlling the zoom factor of the pod to be a first zoom factor, according to the flight height of the unmanned aerial vehicle; keeping the first pitch angle and the first zoom factor, and acquiring the first image from the video shot while the unmanned aerial vehicle cruises the road; wherein a camera oriented along, or away from, the heading of the unmanned aerial vehicle is provided in the pod.
Further, controlling the unmanned aerial vehicle and the pod to track the object and capture a second image specifically comprises: controlling the pod to lock onto the object so that the object is centered in the video; adjusting the attitude of the unmanned aerial vehicle so that the pod keeps locking the object at a second pitch angle; controlling the zoom factor of the pod to change from the first zoom factor to a second zoom factor; and controlling the pod to capture the second image at the second pitch angle and the second zoom factor; wherein a camera oriented along, or away from, the heading of the unmanned aerial vehicle is provided in the pod.
Further, controlling the pod to lock onto the object so that the object is centered in the video specifically includes: acquiring the pixel coordinates of the object in real time and sending them to the pod, so that the pod adjusts its attitude angle according to the pixel coordinates until the object is positioned at the center of the video.
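To make the centering step concrete, the following is a minimal illustrative sketch (in Python) of how the pod's attitude correction could be derived from the object's pixel offset; the proportional gains, image size and function name are assumptions not specified by the application:

    # Hypothetical sketch: steer the pod so the tracked object drifts to the image center.
    def pod_correction(obj_px, obj_py, img_w=1920, img_h=1080, k_yaw=0.02, k_pitch=0.02):
        # Offset of the object from the image center, in pixels.
        dx = obj_px - img_w / 2
        dy = obj_py - img_h / 2
        # Proportional control: pixel offset -> attitude-angle increments (degrees).
        d_yaw = k_yaw * dx        # positive: rotate the pod to the right
        d_pitch = -k_pitch * dy   # positive: raise the pod camera
        return d_yaw, d_pitch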
Further, adjusting the attitude of the unmanned aerial vehicle so that the pod keeps locking the object at the second pitch angle specifically comprises the following steps: acquiring the attitude angle of the pod in real time; when the pitch angle within the attitude angle is not equal to the second pitch angle, determining the distance to be flown by the unmanned aerial vehicle according to the pitch angle; and determining the attitude data of the unmanned aerial vehicle according to the distance to be flown.
Further, determining the distance to be flown by the unmanned aerial vehicle according to the pitch angle includes: determining the horizontal distance between the unmanned aerial vehicle and the object according to the flight height of the unmanned aerial vehicle and the pitch angle; determining the horizontal distance between the unmanned aerial vehicle and the object when the pod locks the object at the second pitch angle, according to the flight height of the unmanned aerial vehicle and the second pitch angle; and determining the distance to be flown by the unmanned aerial vehicle from the difference between these two horizontal distances.
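Written out, and assuming a flat road with the pitch angle measured downward from the horizontal, the horizontal distance to the object is the flight height divided by the tangent of the depression angle, and the distance to be flown is the difference between the two horizontal distances. The following Python lines are an illustrative sketch of this reading, not text from the application:

    import math

    def distance_to_fly(height_m, pitch_deg, second_pitch_deg):
        # Horizontal ground distance implied by a downward pitch angle.
        d_now = height_m / math.tan(math.radians(abs(pitch_deg)))
        d_target = height_m / math.tan(math.radians(abs(second_pitch_deg)))
        # Positive result: the drone must fly this far toward the object.
        return d_now - d_target

    # Example: at 100 m altitude, pod at -20 deg, second pitch angle -30 deg:
    # distance_to_fly(100, -20, -30) is about 274.7 - 173.2 = 101.5 m.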
Further, determining the attitude data of the unmanned aerial vehicle according to the distance to be flown includes: determining the horizontal acceleration with which the unmanned aerial vehicle moves until the pod holds the second pitch angle, according to the distance to be flown; determining the angular velocity of the unmanned aerial vehicle during that motion according to the horizontal acceleration; and determining the heading angle of the unmanned aerial vehicle during that motion according to the angular velocity, so as to determine the attitude data of the unmanned aerial vehicle.
The application also provides a device for identifying the road violation.
Specifically, a device for road violation identification comprises: an acquisition module for acquiring a first image captured while the unmanned aerial vehicle cruises a road; a processing module for processing the first image to form road data and information of an object located on the road; a matching module for matching violation constraint information against the information of the object and the road data and obtaining a matching result; a control module for controlling the unmanned aerial vehicle and the pod to track the object and capture a second image when the object is determined to be in violation according to the matching result; and an identification module for acquiring the identification of the object according to the second image.
The application also provides an electronic device.
Specifically, an electronic device includes: a memory storing computer readable instructions; and a processor reading the computer readable instructions stored in the memory to perform any one of the methods of road violation identification.
The present application also provides a storage medium.
In particular, a storage medium has stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform any one of the methods of road violation identification.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
by acquiring images captured while the unmanned aerial vehicle cruises the road and matching them against violation rules, violations in different areas of the road within the range of the unmanned aerial vehicle's cruise route can be identified, enlarging the area over which road violations are recognized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic view of a structure of an unmanned aerial vehicle for identifying a road violation, provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for identifying a road violation according to an embodiment of the present disclosure;
fig. 3 is a schematic view of an image captured by an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 4 is a schematic view of road data corresponding to the schematic view shown in FIG. 3;
fig. 5 is a flowchart illustrating a method for capturing a second image according to an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart of a method for keeping the pod locked on a violating object at a second pitch angle according to an embodiment of the present application;
fig. 7 is a schematic diagram of an unmanned aerial vehicle tracking a vehicle in a road violation according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for identifying a road violation according to an embodiment of the present disclosure;
fig. 9 is a schematic view of a structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the invention provides a method for identifying road violations which can be executed by any device with computing capability, such as a terminal or a server. In the embodiment of the invention, an unmanned aerial vehicle is taken as the executing entity for the explanation. Fig. 1 is the structural diagram of the unmanned aerial vehicle provided in the embodiment of the application. A TX2 module and a flight control module are arranged inside the unmanned aerial vehicle, where the TX2 module is a processing module for image processing and computation, and the flight control module is used for controlling the flight attitude of the unmanned aerial vehicle. A pod is also provided on the unmanned aerial vehicle; the pod can adjust its own attitude angle, and a camera oriented along, or away from, the heading of the unmanned aerial vehicle is provided in the pod, which can shoot the road in the monitored area in real time. The TX2 module identifies road object violations from the images shot by the unmanned aerial vehicle.
In the embodiment of the invention, while the unmanned aerial vehicle cruises the road, the pod shoots a first image and sends it to the TX2 module, which obtains the first image from the video shot during the cruise. The TX2 module processes the first image to form road data and information of an object located on the road, matches violation constraint information against the information of the object and the road data, and obtains a matching result. When the object is determined to be in violation according to the matching result, the TX2 module obtains the pixel coordinates of the object and a second zoom factor, and sends them to the pod. The pod adjusts its attitude angle according to the pixel coordinates so that the object stays at the center of the video, and the flight control module of the unmanned aerial vehicle monitors the attitude angle of the pod through a sensor located on the pod and adjusts the attitude of the unmanned aerial vehicle accordingly, so that the pod keeps tracking the violating object at the second pitch angle.
The specific method by which the unmanned aerial vehicle identifies road violations while cruising the road in the embodiment of the invention is shown in fig. 2. Fig. 2 is a flowchart of a method for identifying a road violation according to an embodiment of the present application, comprising the following steps:
s100: the method comprises the steps of obtaining a first image captured when the unmanned aerial vehicle cruises on a road.
Specifically, when the unmanned aerial vehicle executes a cruise task along a cruise path, it shoots a video of the cruised road, from which the first image is obtained. The first image may be a single frame of the video shot while the unmanned aerial vehicle cruises the road, or several frames extracted periodically.
It can be understood that, to cruise a road, the unmanned aerial vehicle first needs to take off from its starting point and fly to an overhead area from which images of the road to be cruised can be acquired. The unmanned aerial vehicle can then carry out cruising and shooting work above that road, the road to be cruised being the target road of the cruise. When the cruise task is finished, the unmanned aerial vehicle returns and lands at a designated recovery point or stopping point. However, the endurance of the drone is limited, which directly bounds its total flight time or total mileage, and hence its road cruising mileage; and the cruising mileage in turn directly affects the size of the area over which road violations can be identified. Therefore, the cruising work of the unmanned aerial vehicle needs to be planned in advance to ensure that it works efficiently within its limited endurance. The cruising work content comprises all work related to flight and cruising during the period from takeoff to return and landing. The cruising work content planned in advance is the cruise strategy preset for the unmanned aerial vehicle. In this way, the unmanned aerial vehicle can execute a cruise task of longer mileage without compromising its normal return flight. Once the cruise strategy is set, the unmanned aerial vehicle cruises the road according to it.
In the embodiment of the invention, the cruise strategy is at least one of a preset cruise route, a cruise speed and a cruise direction.
In the embodiment of the invention, the cruise route is the flight path of the unmanned aerial vehicle during the cruise, and at least comprises a start point and an end point of the road cruise. By formulating the cruise route, repeated cruising of the same road by multiple unmanned aerial vehicles at the same time can be effectively avoided, and the efficiency of road cruising is improved.
It should be noted that the cruise route may be a total cruise path covering a plurality of roads, in which case it should at least include the start point and end point of each road. The cruise route may then further include the flight paths along which the unmanned aerial vehicle transfers between adjacent road segments. With such a cruise route, the unmanned aerial vehicle can transfer to the next road segment along the planned transfer path, avoiding detours, working along an optimal cruise route, reducing transfer time and improving cruising efficiency.
In the embodiment of the invention, the cruise speed is the flight speed of the unmanned aerial vehicle while cruising the road. If the flight speed is too fast, flight stability suffers and the fuselage shakes easily; the shooting angle then changes constantly, a stable video cannot be acquired, monitoring of all road areas is hindered, and the accuracy of the road violation identification result is affected. If the flight speed is too slow, then, because the endurance of the unmanned aerial vehicle is limited, its cruising mileage is reduced and cruising efficiency is low, which is unfavorable for cruising a long stretch of road. Therefore, to guarantee shooting stability while also considering cruising efficiency, the cruise speed of the unmanned aerial vehicle needs to be designed.
In the embodiment of the invention, the cruise direction of the unmanned aerial vehicle can also be planned. When the unmanned aerial vehicle cruises the road, its cruise direction directly affects how much of the road appears in the obtained images. If the cruise direction is kept aligned with the direction in which the road extends, more road data can be obtained from the captured images. If there is an angle between the cruise direction and the direction of the road, less road data can be obtained: owing to the shooting angle, road areas in the image are covered by obstructions, i.e., there are blind spots, which is unfavorable for comprehensive monitoring of the road. By planning the cruise direction of the unmanned aerial vehicle, blind spots in the captured images are avoided, comprehensive monitoring of the road is realized, and the efficiency of identifying road violations is improved. In the embodiment of the invention, the planned cruise direction of the unmanned aerial vehicle is consistent with one of the traffic directions of the road. For example, if the target road is a one-way road, the cruise direction is kept consistent with the traffic direction; if the target road is a two-way road, the set cruise direction is kept consistent with one of the traffic directions of the road.
The first image captured while the unmanned aerial vehicle cruises the road can be obtained in real time from every frame of the video, or one frame can be taken as the first image every n seconds. It can be understood that the specific time interval at which the first image is captured obviously does not limit the scope of protection of the present application.
In the embodiment of the invention, acquiring the first image captured while the unmanned aerial vehicle cruises the road specifically comprises: controlling the pitch angle of the pod of the unmanned aerial vehicle relative to the ground to be a first pitch angle, and controlling the zoom factor of the pod to be a first zoom factor, according to the flight height of the unmanned aerial vehicle; keeping the first pitch angle and the first zoom factor, and acquiring the first image from the video shot while the unmanned aerial vehicle cruises the road; wherein a camera oriented along, or away from, the heading of the unmanned aerial vehicle is provided in the pod.
It will be appreciated that the sharpness requirements for the first image may be preset in the drone. The first image can be set to different resolutions according to the user's requirements, so as to satisfy the acquisition of road data and road target information without occupying excessive storage on the unmanned aerial vehicle.
It can be appreciated that the drone captures images primarily through a camera fixedly mounted in the drone's pod. The camera initially faces, by default, a direction parallel to the tail or the head of the drone, i.e., along the drone's heading or away from it. By adjusting the orientation of the pod, the shooting angle of the camera changes accordingly. The pod should form a certain included angle with the ground, so that the drone can capture road details on the ground from the air.
Specifically, the pitch angle of the pod of the unmanned aerial vehicle relative to the ground is controlled to be a target pitch angle according to the flight height of the unmanned aerial vehicle. The pitch angle of the pod relative to the ground is the angle between the direction the camera in the pod faces and the horizontal. Because the flight height at which the unmanned aerial vehicle shoots video along the cruised road is not fixed, the pitch angle of the pod is controlled according to the flight height while the unmanned aerial vehicle patrols the road. In the specific embodiments provided herein, the target pitch angle ranges from -20° to -40°.
It will be appreciated that if the magnitude of the pod's depression angle is small, images of the road at greater distances can be acquired. However, due to dust particles in the air, atmospheric refraction, road surface reflection and the like, the road information in the first image then carries large errors: for example, lane lines in distant areas cannot be accurately identified, so the accuracy of identifying road violations from the acquired image is low. If the depression angle is large, only road images of nearby areas can be acquired; within the limited endurance mileage of the unmanned aerial vehicle, this reduces the total cruised road mileage and harms cruising efficiency. Therefore, the target pitch angle is set in the range of -20° to -40°, i.e., a depression of 20° to 40°. While the unmanned aerial vehicle cruises the road, the pitch angle of the pod is some value within the target pitch angle range. It will be appreciated that the specific value of the pod's pitch angle within the target range does not limit the scope of the present application.
It should be noted that the roll angle and the heading angle of the pod generally do not need to be adjusted, i.e., they are kept at their initial angles.
The zoom factor of the pod of the unmanned aerial vehicle is controlled to be a first zoom factor according to the flight height of the unmanned aerial vehicle. The zoom factor of the pod is the zoom factor of the camera in the pod, where the first zoom factor is between 2x and 10x. Because the flight height at which the unmanned aerial vehicle shoots video along the cruise route is not fixed, the zoom factor of the pod is controlled according to the flight height during the cruise. The larger the zoom factor, the farther the scene that can be captured. If a low zoom factor is always kept while the unmanned aerial vehicle patrols the road, images covering as large a range as possible can be obtained, but the proportion of road regions in the acquired image becomes small, which lowers the accuracy of road recognition in the image. If a high zoom factor is always kept, the viewing angle of the unmanned aerial vehicle narrows and the camera's field of view is restricted. Therefore, when setting the value of the first zoom factor, the influence of the flight height of the unmanned aerial vehicle and the size of the pod's pitch angle need to be fully considered. In this way, the proportion of road area in the obtained image is moderate, and the image contains at least a single road. After the target pitch angle and the first zoom factor of the pod are controlled according to the flight height, the unmanned aerial vehicle keeps the target pitch angle and the first zoom factor and acquires the first image from the video captured while cruising the road. In a preferred embodiment provided by the present application, the TX2 module of the drone determines the pitch angle and the first zoom factor of the pod based on the drone's flight height.
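The application does not give a concrete mapping from flight height to the pod settings; the following Python sketch illustrates one possible scheme, in which the thresholds and returned values are assumptions and only the ranges (pitch -40° to -20°, zoom 2x to 10x) come from the text above:

    # Hypothetical sketch: choose pod pitch (deg) and zoom factor from flight height.
    def pod_setup(height_m):
        if height_m < 80:
            return -40.0, 2.0    # low flight: steep depression, wide view
        elif height_m < 150:
            return -30.0, 5.0
        else:
            return -20.0, 10.0   # high flight: shallow depression, long zoom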
S200: processing the first image to form road data and information of an object located on the road.
In a preferred embodiment provided by the present application, the first image is processed to form road data by identifying the lane lines in the first image and the vertices of each lane line, and generating road data from the lane lines, the vertices of the lane lines, and the road distribution and usage data corresponding to the first image. The road data at least includes lane distribution information of the road and information of each lane line constituting the road.
It is understood that the road data refers to information related to the distribution of the road, and includes at least the lane distribution and the information of each lane line constituting the road. The lane distribution covers, for example, the emergency lanes, passing lanes and bus lanes on a highway, and the distribution of motor vehicle lanes, non-motor vehicle lanes and sidewalks on an urban road. The information of each lane constituting the road may include the coordinates of the lane lines bounding each lane and whether each lane line is dashed or solid.
Note that the road data may further include: the geographical location of the road in the first image, the extending direction of the road in the first image, and so on.
In the embodiment of the present invention, the lane lines and the vertices of the lane lines in the first image, i.e., the dashed/solid status of each lane line and the coordinates of its vertices, may be determined by recognition algorithms such as lane line detection based on color masks and line detection, or lane line detection based on a sliding window. The invention is not limited thereto; the lane lines may, for example, also be marked manually.
For example, the lane lines and the vertices of each lane line may be as follows:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
It should be noted that, in practice, part of a lane line may be solid and part dashed; the determined dashed/solid status of the lane line and its vertices may then include the coordinates of the junction between the dashed and solid parts, for example:
line1 vertex coordinates [ (x1, y1), (x2, y2), dashed/solid (xn, yn) ];
where (xn, yn) may be the intersection of the dashed line and the solid line, and the dashed/solid line indicates that the part near (x1, y1) is the dashed line and the part near (x2, y2) is the solid line.
It should also be noted that when determining the lane lines and the vertices of the lane lines in the first image, lanes may also be pre-assigned as follows:
a first lane:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
It is to be noted that the position coordinates of the vertices refer to the position coordinates, i.e. pixel coordinates, of the boundary points of the lane lines in the image, not the start or end points of the actual roads. Each lane is bounded by two lane lines, so the road data of each lane is formed from the boundary points of two adjacent lane lines. In this way, each lane in the first image has unambiguous coordinates. Thus, from the first image captured by the unmanned aerial vehicle, the lane lines, the vertices of each lane line and the dashed/solid status of each lane line can be obtained.
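The vertex lists above map naturally onto a small record per lane line; the following Python sketch shows one possible representation, in which the field names and coordinate values are illustrative assumptions:

    # Hypothetical sketch of the lane-line records described above.
    line1 = {"vertices": [(100, 1020), (480, 60)], "style": "solid"}
    line2 = {"vertices": [(350, 1020), (640, 60)], "style": "solid"}
    # A mixed line additionally stores the junction of its dashed and solid parts:
    line3 = {"vertices": [(610, 1020), (800, 60)],
             "style": "dashed/solid", "junction": (700, 540)}
    # Each lane is bounded by two adjacent lane lines:
    first_lane = {"lines": (line1, line2)}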
In the embodiment of the invention, after the first image captured while the unmanned aerial vehicle cruises the road is obtained, the road distribution and usage data corresponding to the first image can be obtained from existing map data, according to the cruise route in the preset cruise strategy and/or the actual road identified from the captured image. For example, if the first image captures a section of a certain expressway, the road distribution and usage data corresponding to that section can be acquired from the map data. The invention is not limited to this; road data may also be obtained by presetting road distribution and usage data and matching it with the acquired lane lines and lane line vertices.
In the embodiment of the present invention, the road distribution and usage data may include the distribution and usage rules of each lane. For example, as shown in fig. 3, on a certain expressway the innermost lane (generally the leftmost) is the highest-speed lane and the outermost lane (generally the rightmost) is the emergency lane. The road usage rules may include the speed limit intervals of the various lanes and the usage rules of bus-only lanes, for example that from 7:00-9:00 and 17:00-19:00 on weekdays the innermost lane is a bus-only lane.
In the embodiment of the invention, after the lane lines, the vertices of the lane lines and the road distribution and usage data are obtained, they are matched to generate the road data. During matching, the road data is determined by matching the position coordinates of the vertices of the respective lane lines with the distribution of the respective lanes in the road distribution and usage data. The road data here includes at least the lane distribution information of the road and the information of each lane constituting the road.
For example, the image shown in fig. 3 is processed to form the road data shown in fig. 4. In fig. 4, each lane line has boundary points, i.e. vertices, at its uppermost and lowermost positions in the image (fig. 4 only labels line1 and line2; the vertices of the other lane lines are not shown). The lane lines and their vertices are determined from the image, for example Line1: [(x1, y1), (x2, y2), solid line], where (x1, y1) is the position of the lowest boundary point of Line1 in the image and (x2, y2) is the position of its highest boundary point; similarly Line2: [(x3, y3), (x4, y4), solid line] is obtained. The road distribution and usage data shows that, from right to left, the road consists of an emergency lane, a lowest-speed lane, a passing lane and a highest-speed lane. Matching the vertex coordinates of Line1 and Line2 shown in fig. 4 against the road distribution and usage data corresponding to the image yields the road data schematic of fig. 4, in which the lane bounded by Line1 and Line2 is the emergency lane. Similarly, lane distribution information can be obtained for the other lane lines, their vertices and the corresponding road distribution and usage data of fig. 3. For example, the vertex position coordinates of the emergency lane are represented as Line1: (x1, y1), (x2, y2), solid line; Line2: (x3, y3), (x4, y4), solid line. Namely:
emergency lane:
line1: [ (x1, y1), (x2, y2), solid line ];
line2: [ (x3, y3), (x4, y4), solid line ].
Since two adjacent lanes share the lane line between them, the vertex position coordinates of the lowest-speed lane adjacent to the emergency lane also include the position information of lane line Line2.
It will be appreciated that if the pre-assigned lane, with its lane lines and the vertices of each lane line, is obtained as follows:
a first lane:
line1 vertex coordinates [ (x1, y1), (x2, y2), solid line ];
line2 vertex coordinates [ (x3, y3), (x4, y4), solid line ].
then matching these with the road distribution and usage data yields the following road data:
emergency lane:
line1: (x1, y1), (x2, y2), solid line;
line2: (x3, y3), (x4, y4), solid line.
Namely, the first lane is identified as the emergency lane.
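A minimal Python sketch of this matching step, under the illustrative assumption that the road distribution and usage data simply lists lane labels from right to left in the same order as the detected lanes (all names are hypothetical):

    # Hypothetical sketch: relabel pre-assigned lanes using road distribution data.
    usage_right_to_left = ["emergency lane", "lowest-speed lane",
                           "passing lane", "highest-speed lane"]

    detected_lanes = [
        {"lines": ("line1", "line2")},   # first lane (rightmost)
        {"lines": ("line2", "line3")},   # second lane, sharing line2
    ]

    road_data = {usage_right_to_left[i]: lane
                 for i, lane in enumerate(detected_lanes)}
    # road_data["emergency lane"]["lines"] yields ("line1", "line2").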
In an embodiment of the present invention, processing the first image to form information of an object located on the road specifically includes: processing the image to form the category attribute information and position coordinates of the object located on the road. That is, the information of the object may include the category attribute and the position coordinates of the object located on the road, for example that the object on the road is a bus, together with the position coordinates of that bus.
It should be noted that different types of motor vehicles are subject to different traffic regulations; for example, the regulations applicable to cars, trucks and buses differ. Road violation identification for motor vehicles is therefore more complex than for pedestrians and non-motor vehicles. Accordingly, the road data should include at least the lane distribution information of the road and the information of each lane constituting the road, and the information of the object needs to include the category attribute and position coordinates of the object located on the road.
In the embodiment of the present invention, the position coordinates of the object refer to the pixel coordinates of the object in the image, and the category attribute of the object covers types such as pedestrians, non-motor vehicles and motor vehicles. The traffic laws and regulations applicable to different types of motor vehicles differ; for example, those applicable to cars, trucks and buses are different, which makes road violation identification for motor vehicles more complex than for pedestrians and non-motor vehicles. Therefore, when identifying a road violation, the category attribute of an object located on the road in the image needs to be determined.
Specifically, the position coordinates of the object in the first image may be obtained by performing Bounding Box regression on the image. The position coordinates of the object in the first image correspond to the label (x, y, w, h), where x and y are the coordinates of the center point of the quadrangle enclosing the object, and w and h are its width and height respectively. Then, the candidate categories of the object in the first image and the confidence of each candidate category are determined through a trained classification detection model, where the categories include at least cars, trucks, heavy machinery vehicles, buses and the like. Finally, the category with the maximum confidence is selected as the category attribute of the object located on the road in the first image. In this way, the category attribute of the object in the first image is obtained.
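The following Python sketch illustrates this detection step; the model object and its output format are assumptions, and only the (x, y, w, h) box label and the maximum-confidence selection come from the description above:

    # Hypothetical sketch: classify detected road objects by maximum confidence.
    CLASSES = ["car", "truck", "heavy machinery vehicle", "bus"]

    def detect_objects(image, model):
        objects = []
        for box, scores in model.predict(image):    # box: (x, y, w, h)
            x, y, w, h = box                        # (x, y): center of the quadrangle
            best = max(range(len(CLASSES)), key=lambda i: scores[i])
            objects.append({"center": (x, y), "size": (w, h),
                            "category": CLASSES[best],
                            "confidence": scores[best]})
        return objects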
In the embodiment of the invention, the classification detection model can be obtained by training a neural network with negative feedback optimization. Negative feedback optimization of the classification detection model requires a public image set for training, which should at least comprise a number of car image elements, truck image elements, heavy machinery vehicle image elements, bus image elements and other image elements of different category attributes. The classification detection model obtained through negative feedback optimization can then identify objects of at least these category attributes, such as small and medium cars, trucks, heavy machinery vehicles and buses. In addition, the training set may also include pedestrian image elements, non-motor vehicle image elements, special-task vehicle elements and the like; correspondingly, the trained classification detection model can also identify objects of category attributes such as pedestrians, non-motor vehicles and special-task vehicles in the image. Special-task vehicles are fire trucks, ambulances, police cars and other vehicles executing tasks. It is to be understood that the specific category attributes the classification detection model can identify obviously do not limit the specific scope of protection of the present application.
S300: matching violation constraint information against the information of the object and the road data, and acquiring a matching result.
Specifically, after the first image is processed, the resulting objects, such as vehicles and pedestrians on the road, together with the road information, are matched against the violation constraint information. Violation constraint information describes a vehicle on the road violating a road driving rule or a pedestrian on the road violating a road travel rule. In specific practical application scenarios, violation constraint information may be expressed as: the rule against large vehicles occupying the passing lane, the rule against vehicles occupying the emergency lane, the rule against vehicles occupying bus-only lanes, the rule against vehicles travelling at ultra-high or ultra-low speed, and the rule against pedestrians travelling in motor vehicle lanes. In contrast, the compliance information corresponding to the violation constraint information describes a vehicle travelling on the road complying with the travel rules, or a pedestrian on the road complying with the travel rules. It is understood that the specific form in which an object is matched against the violation constraint information obviously does not limit the scope of protection of the present application.
It should be noted that when the object is determined to be a special vehicle executing a task, such as a fire truck, an ambulance or a police car, the step of matching violation constraint information according to the information of the object and the road data is not executed; alternatively, the step is still executed for such a vehicle, but it is set so that an unmatched result is obtained.
In the embodiment of the invention, the violation constraint information is information on road behaviors violating the road traffic regulations and can comprise at least one of: constraint information on large vehicles occupying the passing lane, constraint information on vehicles occupying the emergency lane, constraint information on vehicles occupying bus-only lanes, constraint information on pressing solid lines, and constraint information on vehicle ultra-high and ultra-low speed. Large vehicles may include trucks, heavy machinery vehicles and the like.
In the embodiment of the invention, when formulating the violation constraint information, road violations of motor vehicles need to be fully considered; matching the violation constraint information against the category attribute, the position coordinates and the road data of the object located on the road ensures the accuracy of identifying road violations of motor vehicles.
Different types of motor vehicles are subject to different road rules, and different lanes likewise carry different road rules. For example, the rules applicable to cars, trucks and buses differ, and the rules applicable to the emergency lane differ from those of the first lane adjacent to it. The lane where the object is located therefore needs to be determined in order to determine the violation constraint information of that lane, so that the road information recorded in the first image and the object data can be matched against the violation constraint information of the lane where the object is located, thereby obtaining a matching result.
In the embodiment of the invention, the lane where the object is located can be determined according to the information of the object and the road data; the violation constraint information is then matched according to the lane where the object is located, and the matching result is obtained.
From the road data and the object information obtained from the first image, the lane where each object is located can be determined, which yields the road behavior of every object on the road corresponding to the first image. Matching the road behavior in the lane where the object is located against the violation constraint information yields a matching result indicating whether the object's road behavior matches the violation constraint information. If a road behavior satisfying the violation constraint information exists, a result matching the corresponding violation constraint information is obtained. If no road behavior satisfying the violation constraint information exists, a non-matching result is obtained, i.e., no road behavior violating the road traffic regulations exists in the first image.
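As a purely illustrative Python sketch of such rule matching (the rule set is simplified, duration and time-of-day checks are omitted, and all names are assumptions):

    # Hypothetical sketch: match one object's behavior against lane-based rules.
    LANE_RULES = {
        # True means the object's behavior matches the violation constraint info.
        "emergency lane": lambda obj: obj["category"] != "special-task vehicle",
        "bus-only lane":  lambda obj: obj["category"] != "bus",
        "passing lane":   lambda obj: obj["category"] in ("truck",
                                                          "heavy machinery vehicle"),
    }

    def match_violation(obj, lane_name):
        rule = LANE_RULES.get(lane_name)
        return bool(rule and rule(obj))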
In the embodiment of the present invention, the lane in which the object is located is determined according to the information of the object and the road data, and this determination may be performed by an algorithm that decides whether a point lies within a polygon, such as the Crossing Number algorithm or the Winding Number algorithm.
In the embodiment of the invention, the area enclosed by the lane lines of each lane and the boundary of the first image is used as the polygon, and the center point of the object is used as the query point, so the lane where the object is located can be determined by judging whether the center point of the object lies within the polygon. For example, when the lane is determined by the Crossing Number algorithm, the number of intersections between a ray from the object's center point and the edges of each polygon defined by the road data is computed; when the number of intersections for a certain polygon is odd, the object lies within that polygon, i.e., within the lane the polygon represents. Similarly, when the lane is determined by the Winding Number algorithm, the winding number of each polygon defined by the road data around the object's center point is computed; when the winding number of a certain polygon around the point is non-zero, the object lies within that polygon, i.e., within the lane the polygon represents.
It should be noted that, instead of the center point of the object, each vertex of the quadrangle enclosing the object may be used as a query point in the point-in-polygon algorithm. In that case, the object is determined to be in a certain lane only if all vertices of its quadrangle lie within the same polygon. The quadrangle enclosing the object is the quadrangle obtained from the unmanned aerial vehicle image through Bounding Box regression to identify road objects, i.e., the quadrangle used to frame objects and represent them in the road.
In the embodiment of the present invention, the coordinates of the center point of the object are generally used as the query point, because this is more accurate than checking whether each vertex of the object's quadrangle lies in the polygon. In general, the area covered by the quadrangle framing the object is considerably larger than the actual extent of the object, so a vertex of the quadrangle may fall outside the polygon corresponding to a lane even though the framed object actually lies within it.
It should be noted that determining whether a point lies within a polygon typically has three possible results: the point is inside the polygon, outside the polygon, or on the polygon, where being on the polygon means the point lies on one of the polygon's edges.
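A self-contained Python sketch of the Crossing Number test described above (the lane lookup built on top of it is an illustrative assumption):

    # Crossing Number (ray casting) point-in-polygon test.
    def point_in_polygon(px, py, polygon):
        # polygon: list of (x, y) vertices in order; the last edge wraps around.
        crossings = 0
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Count edges crossed by a horizontal ray cast to the right of (px, py).
            if (y1 > py) != (y2 > py):
                x_at_py = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_at_py > px:
                    crossings += 1
        return crossings % 2 == 1    # odd number of crossings means inside

    def lane_of(obj_center, lanes):
        # lanes: {lane name: polygon}; returns the lane containing the object center.
        for name, poly in lanes.items():
            if point_in_polygon(obj_center[0], obj_center[1], poly):
                return name
        return None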
According to the embodiment of the invention, when the violation constraint information is the vehicle ultra-high speed and ultra-low speed constraint information, the moving speed of the object can be calculated from the object information and road data formed from several consecutive frames, the lane where the object is located is determined from the object information and the road data, and the moving speed of the object is matched against the speed limit interval of that lane to obtain the matching result.
In the embodiment of the present invention, ultra-high speed is understood as travelling above the maximum speed limit prescribed for the corresponding lane, and ultra-low speed as travelling below the minimum speed limit prescribed for that lane. Matching the information of the object in the first image and the road data against the ultra-high and ultra-low speed violation constraint information of the lane where the object is located yields the matching result for the vehicle ultra-high and ultra-low speed constraint information.
It will be appreciated that the vehicle moves within the lane, and the moving speed of an object generally cannot be determined from a single captured image. Therefore, when the violation constraint information is the vehicle ultra-high and ultra-low speed constraint information, several consecutive frames captured by the unmanned aerial vehicle need to be acquired before matching, so that the positions of objects in the image at different times can be determined. From the consecutive frames, the pixel displacement of each target vehicle is determined through multi-target tracking. Then, based on the speed of the unmanned aerial vehicle, the camera intrinsics, the camera attitude and the zoom factor, the pixel displacement can be converted into the actual speed of the target vehicle. After the lane where the object is located is determined, matching the object's moving speed against the speed limit interval of that lane yields the matching result for the ultra-high or ultra-low speed violation constraint information. When the moving speed of the object is above the highest speed limit of the lane, a result matching the vehicle ultra-high speed violation constraint information is obtained; when it is below the lowest speed limit of the lane, a result matching the vehicle ultra-low speed violation constraint information is obtained.
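As an illustration of the pixel-to-speed conversion, the following Python sketch uses a simple ground-sampling-distance approximation; the formula and every parameter name are assumptions, since the application only states that the drone's speed, the camera intrinsics, the camera attitude and the zoom factor enter the conversion:

    # Hypothetical sketch: convert a tracked pixel displacement into ground speed.
    import math

    def ground_speed_kmh(dx_px, dy_px, dt_s, height_m, pitch_deg,
                         focal_px, zoom, drone_speed_mps=0.0):
        # Approximate metres per pixel at the image center for a camera
        # depressed by pitch_deg at height_m.
        slant = height_m / math.sin(math.radians(abs(pitch_deg)))
        gsd = slant / (focal_px * zoom)
        # Pixel displacement -> metres per second relative to the drone; assume
        # the image y axis is aligned with the drone's flight direction.
        vx = dx_px * gsd / dt_s
        vy = dy_px * gsd / dt_s + drone_speed_mps   # add back the drone's own motion
        return math.hypot(vx, vy) * 3.6             # m/s -> km/h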
It should be noted that, in the embodiment of the present invention, the number of consecutive frames actually depends on several factors: the operating performance of the device that performs road violation identification, the frame rate of the video stream captured by the unmanned aerial vehicle, and the average number of frames for which a vehicle remains in the image.
For example, suppose the device for road violation identification can process one frame every 100 ms, the original video stream is 30 Hz, and a vehicle remains in the image for about 90 frames; the vehicle is then visible in the image for about 3 seconds, so at most 3000/100 = 30 frames of continuous determination are possible. However, two factors must be considered: the recognition is unstable while the vehicle is just entering or leaving the edge of the image, and after recognition the vehicle still needs to be captured and the evidence stored. The number of frames should therefore be halved, i.e., 15 frames is a reasonable parameter.
In the embodiment of the invention, after the lane where the object is located is determined, it can be matched against the large-vehicle passing lane constraint information, the vehicle emergency lane constraint information, the vehicle bus-only lane constraint information and the solid-line pressing constraint information within the violation constraint information, so as to obtain the matching result.
In the embodiment of the invention, a large vehicle occupying the passing lane means that the large vehicle has driven continuously in the passing lane beyond a preset time. The large-vehicle passing lane constraint information requires that the vehicle type is a large vehicle, the lane where the vehicle is located is the passing lane, and the vehicle's driving time in the passing lane reaches the preset time. When matching against this constraint information, if the object is a large vehicle, its lane is determined to be the passing lane, and its time occupying the passing lane reaches the preset time, it is determined to match the large-vehicle passing lane constraint information. It should be noted that the time a large vehicle occupies the passing lane may be determined from several consecutive frames, these being frames adjacent to the first image and possibly including the first image.
In the embodiment of the invention, the emergency lane occupied by the vehicle means that the vehicle does not occupy the emergency lane for driving by special vehicles (such as police cars, ambulances and the like). Occupying an emergency lane may be understood as a violation of a road object traveling in the emergency lane. In an emergency situation, the vehicle may be driving on an emergency lane or parked. However, when the road object has no special reason to drive in the emergency lane, the road object can be regarded as the road behavior occupying the emergency lane. The constraint information of the emergency lane occupied by the vehicle refers to constraint information of the non-special vehicle running in the emergency lane. When the emergency lane constraint information is matched with the vehicle occupation emergency lane constraint information, if the object is a car, the lane where the object is located is determined to be an emergency lane, and the object runs on the emergency lane, the emergency lane constraint information is determined to be matched with the vehicle occupation emergency lane constraint information. It should be noted that, in the embodiment of the present invention, whether the object is driving or stopping on the emergency lane may be determined according to several continuous frame images.
In the embodiment of the invention, the solid pressing line is the driving behavior of the solid line part in the lane pressing line of the tire during the running process of the vehicle. The solid line here can be understood as the solid line part of the lane line in the road. It should be noted that the solid lines in the embodiment of the present invention do not include the solid lines in the one-way lane change permission line. The one-way allowable lane change line is composed of a group of parallel broken lines and solid lines, and the vehicle runs on the one-way allowable lane change line, so that the behavior of pressing the one-way allowable lane change line in a short time is allowed. The solid line pressing constraint information refers to constraint information of a solid line pressing by a non-specific vehicle. When the object is matched with the compaction line constraint information, if the object is a car, the car is determined to be on a certain lane line, namely, a point is on a polygon formed by the lane line, and if the lane line is a solid line, the car is matched with the compaction line constraint information. If the lane line is a dashed line, the lane line does not match the compaction line constraint information.
In the embodiment of the invention, the constraint information of the bus lane occupied by the vehicle refers to the constraint information of the non-bus running in the bus lane, and when the constraint information is matched with the constraint information of the bus lane occupied by the vehicle, if the object is a car, the lane where the car is located is determined to be the bus lane and the service time of the bus lane is met, the constraint information is determined to be matched with the constraint information of the bus lane occupied by the vehicle.
According to the embodiment of the present invention, matching the violation constraint information according to the information of the object and the road data and obtaining a matching result may further include: determining, from the information of the object and the road data, the lane in which the object is located, the lane line closest to the object, and the distance between the object and that lane line; and matching the violation constraint information according to these three quantities to obtain the matching result.
It will be understood that when a vehicle is close to a lane line, its actual driving situation within the lane cannot be accurately determined. For example, when the vehicle is a short distance from the lane line, the vehicle's center point lies within the lane, yet in reality the vehicle may be pressing the lane line. Matching the violation constraint information by the object's lane alone would then leave some road behaviours undetected, reducing the accuracy and practicality of road violation detection. Determining the lane, the closest lane line and the distance to it before matching lets the matching follow the road object's actual behaviour, improves the accuracy of the matching result, and prevents missed matches.
In the embodiment of the present invention, before or after the lane in which the object is located is determined by a point-in-polygon algorithm, the lane line closest to the object and the distance between them may further be determined from the position information of the object and the position information of the lane lines of that lane. The position coordinate of the object may be its center point, i.e. the x and y of the position coordinates (x, y, w, h) obtained by Bounding Box regression on the first image. The position information of a lane line refers to the vertex coordinates of the lane lines making up the lane.
For example, suppose the center point of an object is (x, y) and the lane lines of the detected lane are line1, with vertex coordinates [(x1, y1), (x2, y2), solid], and line2, with vertex coordinates [(x3, y3), (x4, y4), solid]. The distances from the object to line1 and line2 are obtained by calculating the point-to-line distances; the line giving the minimum distance is the lane line closest to the object, and that minimum distance is the distance between the object and its closest lane line.
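A minimal Python sketch of this nearest-lane-line computation follows; the segment representation ((x1, y1), (x2, y2), kind) and the helper names are illustrative assumptions, not the patent's data structures.

```python
import math

def point_to_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (pixel coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment and clamp the projection to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_lane_line(center, lane_lines):
    """Return (line, distance) for the lane line closest to the object center.
    Each lane line is ((x1, y1), (x2, y2), kind), kind in {"solid", "dashed"}."""
    return min(((ln, point_to_segment_distance(center, ln[0], ln[1]))
                for ln in lane_lines), key=lambda pair: pair[1])

# Worked example in the spirit of the text: two solid lines, object at (110, 200).
line1 = ((100, 0), (100, 400), "solid")
line2 = ((160, 0), (160, 400), "solid")
closest, dist = nearest_lane_line((110, 200), [line1, line2])  # line1, 10 px
```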
According to the lane in which the object is located, its closest lane line and the distance between them, the vehicle-occupying-overtaking-lane, vehicle-occupying-emergency-lane, vehicle-occupying-bus-lane and ultra-high/ultra-low-speed constraint information among the violation constraint information can be matched to obtain the corresponding matching results.
According to the embodiment of the invention, when the violation constraint information is the solid-line-pressing constraint information, if the lane line closest to the object is a solid line and the distance between the object and that lane line is within a first preset range, the matching result obtained is a successful match.
It is to be understood that the first preset range can be seen as the allowable separation, preset in the solid-line-pressing constraint information, between the road object and its closest lane line. If the distance between the road object (its center point) and the closest lane line is within the first preset range, the object is in a line-pressing driving state, and a matching result of a successful match against the solid-line-pressing constraint information is obtained. If the distance lies outside the first preset range, the object is far from the lane line, i.e. in a non-line-pressing driving state, and the result obtained is no match or a failed match. Note, in addition, that a distance outside the first preset range only rules out line pressing; it does not mean the road object commits no road violation other than pressing the solid line.
It should be noted that, when matching the solid-line-pressing constraint information against the road behaviour of a road object, it must first be determined whether the relevant lane line is a solid line, using the information on each lane line in the road data formed from the processed image. A lane line includes solid portions and dashed portions; an object in the road may make a corresponding lane change in a dashed-line area without violating road traffic regulations.
It can be understood that the position coordinate of the object in the image is taken as its center point, obtained as the x and y of the (x, y, w, h) produced by Bounding Box regression on the image, where w and h are respectively the width and height of the quadrangle bounding the object. This quadrangle is the region, identified by the Bounding Box processing of the unmanned aerial vehicle image, in which the road object lies, i.e. the box framing the object.
In an embodiment of the present invention, the first preset range may be derived from the width in the object's coordinates, i.e. the w in (x, y, w, h): it may be set to at most half the lateral width of the bounding quadrangle, w/2, measured from the object's center point.
In the embodiment of the present invention, the first preset range may also be derived from the category attribute of the road object. Because vehicles of different types have different widths, a correspondence between each vehicle type and a first preset range can be preset; once the object's type is obtained, the corresponding first preset range is looked up. For example, if the object's type is a bus, the first preset range obtained from the correspondence is L01; if it is a car, the range obtained is L02, where L01 is greater than L02.
In the embodiment of the present invention, the first preset range may also be manually set.
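Putting the pieces above together, a hedged sketch of the solid-line-pressing check might look as follows; the per-category table (with the text's placeholder values L01 > L02) and the function names are illustrative assumptions.

```python
def first_preset_range(w, category=None, category_ranges=None):
    """Allowable distance to the nearest lane line: defaults to half the
    bounding-box width (w/2); a preset per-category table such as
    {"bus": L01, "car": L02} may override it."""
    if category and category_ranges and category in category_ranges:
        return category_ranges[category]
    return w / 2.0

def matches_solid_line_pressing(nearest_line_kind, distance, preset_range):
    """Solid-line-pressing constraint: the nearest lane line is solid and the
    object center lies within the first preset range of it."""
    return nearest_line_kind == "solid" and distance <= preset_range

# Example: bus bounding box 80 px wide, 12 px from a solid line.
rng = first_preset_range(80, "bus", {"bus": 45.0, "car": 30.0})  # L01 > L02
hit = matches_solid_line_pressing("solid", 12.0, rng)            # True
```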
According to the embodiment of the invention, when the violation constraint information is the vehicle-occupying-emergency-lane constraint information, if the object lies outside all lanes, the lane line nearest to it is the inner line of the emergency lane, and its distance to that line is within a second preset range, the matching result obtained is a successful match.
It can be understood that, after the lane in which the object is located, its closest lane line and the distance between them have been determined, the matching result against the vehicle-occupying-emergency-lane constraint information can be obtained by matching the object's lane against that constraint information: if the lane is determined to be the emergency lane, a matching result of a successful match is obtained.
In practical applications, the outer side of the emergency lane (the side away from the driving lanes) often has no obvious boundary line, or its boundary line is blocked by objects such as sound-proof walls or railings, so the outer boundary information of the emergency lane cannot be obtained from the images captured by the unmanned aerial vehicle; that is, it cannot be directly determined whether the vehicle is driving in the emergency lane. Therefore, when the emergency lane's boundary line is blocked in the captured image and the lane type cannot be determined, the inner lane line of the emergency lane can be used as the criterion when matching the vehicle-occupying-emergency-lane constraint information. If the object lies outside all lanes, the distance between the vehicle and the emergency lane's inner line is calculated and checked against the second preset range, which determines whether the vehicle's actual road behaviour is consistent with the constraint information.
The second preset range can be understood as a preset decision interval for a vehicle occupying the emergency lane; its value can be set according to the actual situation, for example to the width of the emergency lane. If the object lies outside all lanes and the distance between the vehicle in the image and the inner line of the emergency lane is within the second preset range, the vehicle is driving in the emergency lane; a matching result against the vehicle-occupying-emergency-lane constraint information is then obtained, and the object can be determined to be in violation, i.e. judged to be occupying the emergency lane.
Specifically, a point-in-polygon algorithm determines the specific lane in which the object lies in the first image, and hence whether the object lies outside all lanes. When it does, the distance from the vehicle in the image to the inner line of the emergency lane is calculated and compared with the second preset range to judge whether the object occupies the emergency lane.
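For reference, a standard ray-casting point-in-polygon test of the kind the text invokes is sketched below; representing each lane as a list of (x, y) vertices is an illustrative assumption.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: does the point lie inside the polygon formed by a
    lane's boundary lines? polygon is a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast rightward from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def lane_of(center, lane_polygons):
    """Index of the lane polygon containing the object center, or None when
    the object lies outside every lane (the emergency-lane case above)."""
    for i, poly in enumerate(lane_polygons):
        if point_in_polygon(center, poly):
            return i
    return None
```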
It should be noted that whether the outermost lane is an emergency lane can be determined from the lane distribution information and the information on each lane line in the road data. If the outermost lane is an emergency lane, the object lies outside all lanes (i.e. the vehicle is neither within any lane nor on a lane line), and the calculated distance between the vehicle and the emergency lane's inner line is within the second preset range, the vehicle-occupying-emergency-lane constraint information is matched.
S400: and when the object is determined to be in violation according to the matching result, controlling the unmanned aerial vehicle and the pod to track the object, and shooting a second image.
A determined object violation can be understood as a vehicle matched against the violation constraint information, i.e. the vehicle's driving speed on the road or the lane in which it travels matches the constraint information; it can also be understood as a pedestrian matched against the constraint information, i.e. a pedestrian travelling on a non-pedestrian road is deemed a match. When the matching shows that an object in the image is in violation, the unmanned aerial vehicle and the pod can be controlled to track the violating object, and the second image is shot through the pod. The second image here can be understood as an image taken in real time of the road violation object, recording the information to be recognized in the road violation identification.
It should be noted that controlling the unmanned aerial vehicle and the pod to track the object and capture the second image may involve adjusting only the attitude angle and zoom factor of the pod, or adjusting the flight attitude of the unmanned aerial vehicle together with the pod's attitude angle and zoom factor. In the specific practical application scenario of the present application, the latter is preferred: the combination of the unmanned aerial vehicle's flight attitude and the pod's attitude angle, together with the camera's zoom value, is changed to track the road violation object and capture the second image.
In the practical application scenario of the present application, while the unmanned aerial vehicle patrols the road according to its cruise strategy (its original path) and a violating vehicle is found in the acquired first image, the unmanned aerial vehicle adjusts the pod's attitude angle and the camera's zoom value so as to capture a second image of the vehicle. If the violating vehicle accelerates, decelerates, or changes its trajectory by turning or making a U-turn, the unmanned aerial vehicle adjusts its flight attitude accordingly, for example accelerating, decelerating or changing trajectory adaptively with the violating vehicle, so that the vehicle always remains within the field of view. It should be noted that, while the unmanned aerial vehicle adjusts its flight attitude to follow the violating vehicle's real-time road behaviour, the pod always holds a fixed attitude angle for image capture. It will be understood that the specific value of this fixed attitude angle obviously does not limit the scope of protection of the present application.
It should be noted that, since the second image is the image to be recognized in the road violation identification, the clarity of the second image should be substantially higher than that of the first image.
In an embodiment of the present invention, controlling the drone and the pod to track the object and capture a second image, referring to fig. 5, may include:
S410: controlling the pod to lock the object so that the object is centered in the video;
S420: adjusting the attitude of the unmanned aerial vehicle so that the pod holds a second pitch angle to lock the object;
S430: controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor;
S440: controlling the pod to capture the second image at the second pitch angle and the second zoom factor;
wherein a camera oriented along or away from the heading is arranged in the pod of the unmanned aerial vehicle.
It should be noted that, while the pod is controlled to lock the violating object at the center of the video, the pixel coordinates of the violating object can be acquired in real time. The coordinates acquired in real time are sent to the pod of the unmanned aerial vehicle, so that the pod can adjust its attitude angle according to them and keep the violating object at the center of the acquired video. The attitude angles the pod adjusts here are mainly its pitch angle and heading angle.
Specifically, when a road object violation is determined, the TX2 module first obtains the pixel coordinates of the violating object. A camera in the unmanned aerial vehicle's pod then films the road violation object in real time; the TX2 module calculates the violating object's pixel coordinates in the captured video frames in real time and sends the coordinates in each frame to the pod, which adjusts its attitude angle according to the coordinates received in real time so that the road violation object stays at the center of the video.
In the embodiment provided by the invention, when the TX2 module determines a road object violation, it can obtain the violating object's real-time position coordinates (x, y, w, h) through Bounding Box regression and send them to the pod; here it is mainly the (x, y) coordinates that are used. The pod can then steer the center of the lens arranged in it by adjusting its attitude, that is, move the lens center to the position coordinate (x, y).
It should be noted that the pod adjusts the lens center (picture center) of the images in its captured video by controlling its attitude angle. After receiving the object's pixel coordinates sent in real time by the TX2 module, the pod calculates the coordinate difference between the lens center in the current attitude and the received pixel coordinates, then adjusts its attitude angle according to that difference so as to move the lens center to the pixel coordinates.
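One minimal way this center-to-pixel correction could be expressed is sketched below; the degrees-per-pixel gains are assumed calibration constants for the current zoom, not values given in the text.

```python
def pod_attitude_correction(frame_center, target_px, deg_per_px_pitch, deg_per_px_yaw):
    """Convert the pixel offset between the picture (lens) center and the
    target's pixel coordinate into pitch/heading angle increments."""
    dx = target_px[0] - frame_center[0]  # horizontal offset -> heading angle
    dy = target_px[1] - frame_center[1]  # vertical offset   -> pitch angle
    return dy * deg_per_px_pitch, dx * deg_per_px_yaw

# Worked example from the text: picture center (100, 100), target at (101, 102).
d_pitch_deg, d_yaw_deg = pod_attitude_correction((100, 100), (101, 102), 0.05, 0.05)
```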
In the embodiment of the invention, when the TX2 module judges that a violating object exists in the first image, it determines the object's pixel coordinates in each frame in real time and sends each determined coordinate to the pod; the pod centers the object in the video by adjusting its attitude angle (i.e. moving the picture center to the object's pixel coordinates). The attitude angle to be adjusted here includes not only the pitch angle but also the heading angle. When the first image is acquired, the first zoom factor is small, so a comprehensive road image can be captured and the object need not be at the picture center; only the pitch angle, adjusted to the first pitch angle, is needed when shooting the first image. When the second image is captured, however, the violating object must be at the picture center, which generally cannot be achieved by adjusting the pitch angle alone; the attitude angle adjusted before taking the second image therefore includes both the pitch angle and the heading angle. For example, if the picture center of the pod's image is at (100, 100) and the TX2 module determines the violating object's pixel coordinates in the first image frame to be (101, 102) and sends them to the pod, the pod must adjust its attitude angle so that the picture center of the first image moves toward the violating object, placing the object at the picture center. The pitch angle of the pod relative to the road violation object when the object is at the picture center is taken as the second pitch angle, and the pod then holds this second pitch angle to lock the violating object.
It should be noted that, in practice, by the time the pod receives the pixel coordinates that the TX2 module computed from the first image it sent, the image currently being captured lags the first image by a certain delay. The pod nevertheless adjusts the picture center of the current image to the pixel coordinates determined in the first image and sends the current image to the TX2 module, so that the TX2 module controls the pod to lock the object and the violating object stays at the center of the video captured by the pod.
It should be noted that the roll angle of the pod remains constant throughout, both while the first image is acquired and while the attitude angle is adjusted to center the violating object; i.e. the initial roll angle is maintained.
It will be appreciated that, while the unmanned aerial vehicle controls the pod to lock the violating object at the center of the video, the unmanned aerial vehicle may still be flying its preset cruise path, with the object centered solely by adjusting the pod's attitude angle; the pod's pitch angle relative to the violating object at that moment is taken as the second pitch angle. If the violating object accelerates, decelerates, or changes its trajectory by turning or making a U-turn, the unmanned aerial vehicle adjusts its flight attitude accordingly, for example accelerating, decelerating or changing trajectory adaptively with the violating vehicle, so that the pod's pitch angle relative to the violating object is always held at the second pitch angle. Of course, the unmanned aerial vehicle may also control the pod to lock the object and keep it at the video center purely by continuously adjusting the pod's real-time attitude angle.
In the embodiment of the invention, once the pod has locked the violating object at the video center through the adjustment of the unmanned aerial vehicle's flight attitude, a second zoom factor can be obtained and sent to the pod, so that the pod zooms at a constant rate from the first zoom factor to the second zoom factor.
In the embodiment of the present invention, the second zoom factor ranges from 20 to 60 times; within this range a second image with a clear view and a moderate proportion occupied by the violating object can be obtained.
It should be noted that the second zoom factor obtained by the TX2 module may be set by jointly considering the flight height of the drone and its distance from the violating object; for example, the higher the flight height and the farther the distance, the larger the second zoom factor should be. When the second zoom factor is a preset fixed value, the TX2 module sends that value to the pod once it detects an object violation, so that the pod zooms at a constant rate from the first to the second zoom factor. Alternatively, a correspondence between the object size (the (w, h) of the position coordinates (x, y, w, h) obtained through Bounding Box regression when the violation is determined), the image size and the zoom factor can be preset, so that the corresponding zoom factor is determined from the measured object size.
In the embodiment of the present invention, the second zoom factor may be set as: second zoom factor = (image width / object width) × first zoom factor.
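A one-line helper expressing this rule, with the clamp to the 20-60× range mentioned above added as an assumption about how the bounds would be enforced:

```python
def second_zoom_factor(image_width_px, object_width_px, first_zoom, lo=20.0, hi=60.0):
    """zoom2 = image_width / object_width * zoom1, clamped to [lo, hi] so the
    tracked object roughly fills the frame at the second zoom factor."""
    return max(lo, min(hi, image_width_px / object_width_px * first_zoom))

zoom2 = second_zoom_factor(1920, 160, 2.0)  # -> 24.0
```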
It should be noted that the TX2 module sends the second zoom factor to the pod only once; unlike the determined object's pixel coordinates, it does not need to be sent in real time.
It should be noted that controlling the pod to zoom at a constant rate from the first to the second zoom factor avoids the tracking loss that a sudden zoom change would cause, and improves the stability and accuracy with which the unmanned aerial vehicle keeps the violating object centered while adjusting its flight attitude. Because the constant-rate zoom is executed while the flight attitude is being adjusted, both the unmanned aerial vehicle's real-time flight path and speed and the zoom process itself must be considered: when the unmanned aerial vehicle flies faster, the constant-rate zoom toward the preset second zoom factor is made faster; conversely, when it flies slower, the zoom is made slower. Making the constant-rate zoom slower or faster likewise addresses the risk of losing the violating road object from the camera's field of view during capture.
It should be noted that this zooming action is performed by the pod itself while it is being controlled to zoom at a constant rate from the first zoom factor to the second zoom factor.
It should also be noted that, during the pod's constant-rate zoom from the first zoom factor to the second zoom factor, the rate of the zoom is set according to the practical application scenario; for example, when the unmanned aerial vehicle flies at 50 meters above the ground, the zoom rate may be set to 1-3 times the first zoom factor. It will be understood that the specific rate value used while the camera zooms at a constant rate to the preset second zoom factor obviously does not limit the specific scope of protection of the present application.
In the embodiment of the invention, the pod preferably first adjusts its attitude angle according to the pixel coordinates so that the object is centered in the video, and holds the second pitch angle to film the violating object through adjustment of the unmanned aerial vehicle's flight attitude; only then is the zoom factor adjusted, to prevent tracking loss during zooming. That is, the target object is kept at the video center throughout the zoom, and the second image is captured once the second zoom factor is reached; the first zoom factor is smaller than the second zoom factor. It should be noted that after the second image is taken, the unmanned aerial vehicle can control the pod's attitude angle back to the first pitch angle and adjust its zoom factor from the second back to the first zoom factor, so that cruising continues along the preset flight path.
In an embodiment of the present invention, adjusting the attitude of the unmanned aerial vehicle so that the pod holds the second pitch angle to lock the object may include, referring to fig. 6:
S421: acquiring the attitude angle of the pod in real time;
S422: when the pitch angle in the attitude angle is not equal to the second pitch angle, determining the distance to be flown by the unmanned aerial vehicle according to the pitch angle;
S423: determining the attitude data of the unmanned aerial vehicle according to the distance to be flown.
The attitude angle of the pod can be understood here as its pitch angle or heading angle; since the pod is fixed relative to the drone, its roll angle needs no adjustment. In practical applications, the pod's attitude angle can be monitored by a sensor mounted on the pod.
It can be understood that once the pod's pitch angle has been adjusted, the road violation object lies in the central region of the image captured by the drone. During filming, however, the violating object's movement follows no fixed rule: if it suddenly accelerates or decelerates and the drone's flight attitude is not adjusted in time, the pod's pitch angle relative to the object changes and the object may disappear from the pod's field of view. The drone's flight attitude must therefore also be adjusted according to the pod's real-time pitch angle, to ensure that the road violation object remains within, and preferably at the center of, the pod's shooting range.
Specifically, when the pod's real-time pitch angle equals the second pitch angle, the road violation object is still in the central region of the pod's image. When it does not, the relative position between the drone and the road violation object has changed, i.e. the violating object has drifted from the image's central region. For example, when the pod's pitch angle relative to the violating object is smaller than the second pitch angle, the drone should promptly adjust its flight attitude to approach the road violation object until the pod's pitch angle relative to the object coincides with the second pitch angle; when the pitch angle is larger than the second pitch angle, the drone promptly adjusts its flight attitude to move away from the violating object until the two angles coincide.
In the embodiment of the present invention, determining the distance the unmanned aerial vehicle is to fly according to the pitch angle includes: determining the horizontal distance between the drone and the object from the drone's flying height and the current pitch angle; determining the horizontal distance between the drone and the object when the pod holds the second pitch angle to lock the object, from the flying height and the second pitch angle; and determining the distance to be flown from the difference between these two horizontal distances.
It should be noted that the drone maintains a fixed flight altitude while cruising the road. The horizontal distance here can be understood as the relative distance between the drone and the road violation object projected onto the horizontal plane at the drone's flying height. Equivalently, in the right triangle formed by the drone and the road violation object, the line joining them is the hypotenuse, the vertical drop from the drone's horizontal plane to the object is one leg, and the horizontal distance is the other leg. From the drone's real-time pitch angle relative to the road violation object, the real-time horizontal distance between them can be obtained through the relevant trigonometric function. In the standard state, the pod holds the second pitch angle while filming the road violation object; the standard state can be understood as the state in which the object lies in the central region of the pod's image. From the second pitch angle in the standard state and the drone's cruise height, the standard horizontal distance between the drone and the road violation object is obtained through the same trigonometric function.
When the real-time horizontal distance equals the standard horizontal distance, the drone's real-time pitch angle relative to the road violation object coincides with the second pitch angle of the standard state; the object is then in the central region of the pod's image and no flight attitude adjustment is needed. When the two distances are unequal, the real-time pitch angle does not match the second pitch angle and the object has drifted from the image's central region; the difference between the real-time and standard horizontal distances is the horizontal distance the drone must fly. By flying that horizontal distance, the pod can again hold the second pitch angle to film the road violation object and obtain a second image that is clear and contains the object.
In one embodiment of the present invention, please refer to fig. 7, a schematic view illustrating the adjustment of the flight attitude of the unmanned aerial vehicle when the real-time pitch angle of the pod is smaller than the second pitch angle. In the figure, the road violation vehicle is located at point C; the unmanned aerial vehicle, and hence its pod, is located at point P; the real-time pitch angle of the unmanned aerial vehicle relative to the violating vehicle is α0; the distance between the unmanned aerial vehicle and the horizontal plane on which the vehicle travels is the length of segment PG, denoted H; and the real-time horizontal distance between the unmanned aerial vehicle and the violating vehicle is the length of segment OP, denoted L1. In the standard state, however, the unmanned aerial vehicle should be located at point P2; the standard (second) pitch angle relative to the violating vehicle is α2; and the standard horizontal distance is the length of segment OP2, denoted L2. As the figure shows, the real-time pitch angle α0 is smaller than the standard (second) pitch angle α2, so the unmanned aerial vehicle should adjust its flight state and fly from P to P2. The distance to be flown is the length of segment PP2, denoted L0; from the figure, L0 = L1 − L2.
It should be noted that the vertical distance H between the unmanned aerial vehicle and the ground is constant during flight, i.e. the flying height is fixed; this value can be obtained from the drone's sensors. Moreover, the real-time pitch angle α0 of the pod and the standard (second) pitch angle α2 are both known. From the trigonometric relationships, the real-time horizontal distance L1 and the standard horizontal distance L2 between the drone and the road violation vehicle can be determined, and hence the distance L0 the drone must fly from P to P2. In addition, the straight-line distance PC between the drone at P and the violating road object at C can be obtained from the drone's sensors or by laser ranging.
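A sketch of this trigonometric step follows, assuming the pitch angles are measured downward from the horizontal so that the horizontal range satisfies L = H / tan(α); the function and parameter names are illustrative.

```python
import math

def distance_to_fly(height_m, pitch_now_deg, pitch_std_deg):
    """L0 = L1 - L2, where L = H / tan(alpha) under the downward-from-horizontal
    pitch convention; a positive result means the drone must fly toward the
    violating vehicle, a negative one away from it."""
    l1 = height_m / math.tan(math.radians(pitch_now_deg))  # real-time distance
    l2 = height_m / math.tan(math.radians(pitch_std_deg))  # standard distance
    return l1 - l2

# Example: H = 50 m, real-time pitch 30 deg, second pitch angle 45 deg.
l0 = distance_to_fly(50.0, 30.0, 45.0)  # about 36.6 m toward the vehicle
```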
In the embodiment of the present invention, determining the attitude data of the unmanned aerial vehicle according to the distance to be flown includes: determining, from the distance to be flown, the horizontal acceleration while the drone moves to where the pod holds the second pitch angle; determining, from that horizontal acceleration, the angular velocity during the move; and determining, from the angular velocity, the heading angle during the move, thereby determining the drone's attitude data.
It can be understood that, because the drone's cruise height is fixed, computing the drone's heading attitude angle suffices to determine its attitude. The attitude angles comprise the roll angle, pitch angle and heading angle of the drone in flight; in the embodiment of the invention, the roll and pitch angles are unchanged and only the heading angle is adjusted. The drone's horizontal acceleration can be understood as its acceleration while flying from the current position to where the pod holds the second pitch angle.
In practical applications, the drone's horizontal acceleration can be determined through an empirical formula, expressed as L = k × a, where k is an empirical coefficient obtained through repeated tests, L is the distance the drone is to fly, and a is the horizontal acceleration while the drone moves to where the pod holds the second pitch angle. The horizontal acceleration a can thus be determined.
From the horizontal acceleration a and the circular-motion formula, the angular velocity while the drone moves to where the pod holds the second pitch angle is determined. The circular-motion formula is expressed as a = ω × v, where v is the drone's horizontal velocity, a is its horizontal acceleration during the move, and ω is the angular velocity during the move. With a known, ω is obtained accordingly; from this heading angular velocity, the heading angle while the drone moves to where the pod holds the second pitch angle can be determined, thereby determining the drone's attitude data.
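Chaining the two relations gives the following sketch; the empirical coefficient k and the control interval are assumptions, and the formulas are exactly the L = k × a and a = ω × v relations stated above.

```python
def heading_adjustment(distance_to_fly_m, v_mps, dt_s, k=1.0):
    """Attitude-data chain from the text: L = k * a gives the horizontal
    acceleration, a = omega * v gives the angular velocity, and integrating
    omega over the control interval gives the heading-angle change (radians).
    k is an empirically fitted coefficient (value assumed here)."""
    a = distance_to_fly_m / k  # horizontal acceleration, from L = k * a
    omega = a / v_mps          # angular velocity, from a = omega * v
    return omega * dt_s        # heading-angle increment over dt_s
```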
S500: and acquiring the identification of the object according to the second image.
It is understood that the second image taken by the drone targets the violating object, so the object's relevant information can be determined from it: for example, the violating object's category attribute, its feature information, the road it is on, and its location. In the embodiment of the invention, what is mainly acquired is the object's identification information, i.e. the information that specifically distinguishes it, such as a vehicle's license plate number or a pedestrian's face image.
It is noted that although the identification of the violating object could also be determined from the first image taken by the drone, the first image is taken at a lower zoom factor than the second and the violating object's shooting angle is better in the second image, so the identification of the violating object in the second image is more accurate and easier to obtain; the object's identification is therefore acquired from the second image.
In an embodiment of the present invention, acquiring the identification of the object according to the second image includes: detecting the object image from the second image with a multi-class detection model; detecting the license plate image from the object image with a license plate detection model; and acquiring the object's identification from the license plate image with an Optical Character Recognition (OCR) algorithm.
Specifically, because the violating object keeps moving while the unmanned aerial vehicle tracks it, the track may be lost, so the captured second image may not contain the violating object, the violating object may occupy too small a proportion of the second image, or the image may contain several objects; recognizing the second image directly could therefore fail to identify the license plate or identify it inaccurately. Hence, to acquire the violating object's identification from the second image, the object image must first be detected from it with the multi-class detection model and used for recognition, which increases the chance of recognizing the license plate and improves recognition accuracy. The multi-class detection model is used to detect the object image from the second image and can be obtained through negative-feedback optimization of a neural network; this optimization requires a public training image set containing at least several image elements, not marked with classification results, that contain violating objects. The model so obtained can identify whether the second image contains a violating object, whose category attribute may be a motor vehicle, a non-motor vehicle, a pedestrian, and so on.
After the object image is obtained, some object images may not contain the license plate image, or the plate may occupy only a small proportion of the object image, owing to the drone's shooting angle; the license plate image therefore needs to be detected from the object image. In the embodiment of the invention this is done with the license plate detection model, which further increases the chance of recognizing the plate and improves recognition accuracy. Detecting the violating object's identification with this model mainly means detecting the plate's location region within the object image, so as to obtain the image information of that region in the second image. The license plate detection model can be obtained through negative-feedback optimization of a neural network; this requires a public training image set containing at least several image elements labelled with license plate positions. The model so obtained can detect the plate position in a second image containing a violating object; the labelled image elements may be motor-vehicle image elements with marked plate positions, non-motor-vehicle image elements with marked plate positions, and so on.
It should also be noted that the license plate detection model is realized as an algorithm model with two parts: a network structure, specifically the matrix computations in the convolutional and pooling layers, and a weight file, which can be understood as an array consisting of many floating-point numbers. In a preferred embodiment provided herein, the computation of the algorithm model is accelerated by a Graphics Processing Unit (GPU). Because a GPU can process many task computations simultaneously, it has an outstanding computational advantage over a conventional processor, and the time consumed by the algorithm model adopted for the license plate detection model can be reduced from the original 300 ms to 10 ms.
After the license plate region in the second image has been detected, the plate information is recognized, specifically by performing OCR on the image of the plate's location region to obtain the plate number it contains. The OCR algorithm can be obtained through negative-feedback optimization of a neural network; this requires a public training image set containing at least several license plate image elements labelled with plate information. The OCR algorithm so obtained can detect the license plate information in the second image.
It can also be understood that, in the embodiment of the present application, the license plate detection model mainly adopts the target detection technology of a convolutional neural network, while the OCR algorithm uses an image classification technology. After the license plate detection model detects a plate, it produces the plate's 4 corner coordinates (upper-left, lower-left, upper-right, lower-right). The OCR algorithm outputs a one-dimensional vector corresponding to a digit, letter or Chinese character in the list of license plate characters.
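The three-stage pipeline can be summarized in the following sketch; the image is assumed to be an array (e.g. as loaded by OpenCV), and vehicle_detector, plate_detector and ocr are assumed handles to the multi-class detection model, the license plate detection model and the OCR algorithm, whose exact interfaces the text does not specify.

```python
def identify_object(second_image, vehicle_detector, plate_detector, ocr):
    """Detect the vehicle in the second image, crop the license plate region,
    then OCR the plate characters; returns the plate string or None."""
    vehicles = vehicle_detector(second_image)        # multi-class detection
    if not vehicles:
        return None
    x, y, w, h = vehicles[0]                         # first/best candidate box
    vehicle_crop = second_image[y:y + h, x:x + w]
    plates = plate_detector(vehicle_crop)            # plate location region
    if not plates:
        return None
    px, py, pw, ph = plates[0]
    return ocr(vehicle_crop[py:py + ph, px:px + pw]) # plate characters
```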
In an embodiment of the present invention, acquiring the identification of the object according to the second image includes: periodically capturing a plurality of second images at the second zoom factor; detecting an object image from each second image with the multi-class detection model; detecting a license plate image from each object image with the license plate detection model; recognizing the plate identifier of each plate image with the Optical Character Recognition (OCR) algorithm; calculating the repetition rate of each plate identifier across the plurality of second images; comparing the repetition rates with a set threshold; and determining as the identification of the object the plate identifier whose repetition rate exceeds the set threshold and is the highest.
Specifically, a plurality of second images are acquired at the second zoom factor, i.e. a plurality of second images are captured periodically. Because the second image is used to detect the violating vehicle's identification information, as many second images as practical should be taken to confirm the vehicle's information.
It should be noted that the plurality of second images may be captured as several consecutive frames or at intervals. For consecutive capture, the settings depend on the drone's actual shooting performance. For example, if the road recognition algorithm in the drone needs 100 ms to process one frame, the original video stream is 30 Hz, and a vehicle remains in the image for 90 frames, the vehicle appears in the image for 3 seconds. In an actual scene, considering the sufficiency of violation evidence, the frame count is usually set to 15 frames, i.e. the vehicle presents 6 seconds of images.
It can be understood that the periodic capture of a plurality of second images may begin after a single image has been captured and the vehicle's specific information could not be obtained through the multi-class detection model and the license plate detection model. The images may be captured periodically, for example once every second; the multi-class and license plate detection models and the OCR algorithm are applied to each image, the plate identifier with the high repetition rate among the generated plate readings is selected, and the vehicle's specific information is then produced.
It is understood that determining as the object's identification the plate identifier whose repetition rate exceeds the set threshold and is the highest means selecting, after several pictures are taken, the identifier whose repetition rate exceeds a predetermined threshold. For example, if 6 images are captured periodically and a plate reading is repeated 3 or more times, that reading is selected. It will be understood that the capture period, the repetition-rate value, the predetermined threshold and the method of selecting the identifier obviously do not limit the scope of protection of the present application.
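A minimal sketch of this repetition-rate selection, assuming the rate is computed over the number of captures and that, per the example above, 3 identical readings out of 6 are sufficient:

```python
from collections import Counter

def select_plate_identifier(plate_readings, threshold=0.5):
    """Majority vote over the OCR results of the periodically captured second
    images: keep the most frequent plate reading only if its repetition rate
    (repeats / number of captures) reaches the set threshold."""
    valid = [p for p in plate_readings if p]  # drop failed recognitions
    if not valid:
        return None
    plate, count = Counter(valid).most_common(1)[0]
    return plate if count / len(plate_readings) >= threshold else None

# Example: 6 periodic captures, one failed read, four identical readings.
plate = select_plate_identifier(
    ["京A12345", "京A12345", None, "京A12345", "京B00001", "京A12345"])
```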
In the embodiment of the present invention, the first image may further be saved as evidence. It is understood that the road-related information in the image and the information on objects located in the road can be determined from the first image taken by the drone. Road-related information refers to information on the distribution of the road: for example, the detailed distribution of motor lanes, non-motor lanes and sidewalks, the distribution of lanes within the motor lanes, or the specific position coordinates of each road in the image. Object information covers the objects located in the road in the image: for example, pedestrians on sidewalks, non-motor vehicles on non-motor lanes, motor vehicles on motor lanes, or the category attributes of the different objects in each road. The violating object is identified by matching this road-related information and in-road object information against the violation constraint information; misjudgment remains possible, however, so the first image should be saved as evidence. Road behaviours that may have been misjudged can then undergo secondary recognition against the saved evidence, increasing the accuracy of road identification.
It should be noted that the second image may also be saved as evidence, further preserving the license plate information of the violating object so that the evidence is more complete and clear.
Embodiments of the apparatus of the present application are described below; the apparatus may be used to perform the method for road violation identification in the above-described embodiments. Fig. 8 is a schematic structural diagram of an apparatus 100 for identifying road violations according to an embodiment of the present invention. As shown in fig. 8, the apparatus 100 includes: an acquisition module 11, a processing module 12, a matching module 13, a control module 14 and an identification module 15.
The acquisition module 11 is used for acquiring a first image captured when the unmanned aerial vehicle is cruising on a road;
a processing module 12 for processing the first image forming road data and information of an object located on a road;
the matching module 13 is configured to match violation constraint information according to the information of the object and the road data, and obtain a matching result;
the control module 14 is used for controlling the unmanned aerial vehicle and the pod to track the object and shoot a second image when the object violation is determined according to the matching result;
and an identification module 15, configured to acquire the identification of the object according to the second image.
According to the embodiment of the invention, the acquisition module 11 is configured to control the ground pitch angle of the pod of the unmanned aerial vehicle to be a first pitch angle and the zoom multiple of the pod to be a first zoom multiple according to the flight height of the unmanned aerial vehicle; and to keep the first pitch angle and the first zoom multiple and acquire a first image from the video shot by the unmanned aerial vehicle while cruising on the road; wherein a camera oriented along or against the heading of the unmanned aerial vehicle is provided in the pod of the unmanned aerial vehicle.
According to an embodiment of the invention, the control module 14 is configured to control the pod to lock the object so that the object is centered in the video; adjust the attitude of the unmanned aerial vehicle so that the pod keeps locking the object at a second pitch angle; control the zoom factor of the pod to zoom from the first zoom factor to a second zoom factor; and control the pod to capture a second image at the second pitch angle and the second zoom factor; wherein a camera oriented along or against the heading of the unmanned aerial vehicle is provided in the pod of the unmanned aerial vehicle.
According to the embodiment of the invention, the control module 14 is configured to acquire the pixel coordinates of the object in real time and send the pixel coordinates to the pod, so that the pod adjusts its attitude angle according to the pixel coordinates to keep the object in the center of the video.
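The pixel-coordinate adjustment can be pictured with the following hedged Python sketch; the field of view, the proportional gain and the sign conventions are assumptions, since the present application does not specify the pod's control law.

```python
def center_object(pixel_xy, frame_wh, fov_deg=(60.0, 40.0), gain=0.5):
    """Convert the object's pixel offset from the frame centre into
    yaw/pitch corrections for the pod (simple proportional control;
    fov_deg and gain are illustrative assumptions)."""
    (px, py), (w, h) = pixel_xy, frame_wh
    dx = (px - w / 2) / (w / 2)   # normalised horizontal offset, -1..1
    dy = (py - h / 2) / (h / 2)   # normalised vertical offset, -1..1
    d_yaw = gain * dx * (fov_deg[0] / 2)
    d_pitch = -gain * dy * (fov_deg[1] / 2)
    return d_yaw, d_pitch

# Object detected right of centre and slightly below centre in a
# 1920x1080 frame.
print(center_object((1400, 620), (1920, 1080)))
```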
According to an embodiment of the invention, the control module 14 is configured to acquire the attitude angle of the pod in real time; when the pitch angle within the attitude angle is not equal to the second pitch angle, determine the distance to be flown by the unmanned aerial vehicle according to the pitch angle; and determine the attitude data of the unmanned aerial vehicle according to that distance.
According to an embodiment of the present invention, the control module 14 is configured to determine the current horizontal distance between the unmanned aerial vehicle and the object according to the flying height and the pitch angle of the unmanned aerial vehicle; determine the horizontal distance between the unmanned aerial vehicle and the object at which the pod can lock the object at the second pitch angle, according to the flying height of the unmanned aerial vehicle and the second pitch angle; and determine the distance to be flown by the unmanned aerial vehicle from the difference between these two horizontal distances. The trigonometry involved is sketched below.
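Under the assumptions of a flat road, a stationary object and pitch angles measured downward from the horizontal (conventions not fixed by the present application), the distance to be flown follows from simple trigonometry:

```python
import math

def distance_to_fly(altitude_m, pitch_deg, second_pitch_deg):
    """Horizontal distance the drone must still fly so that the pod can
    lock the object at the second pitch angle. Assumes a flat road, a
    stationary object, and pitch measured downward from horizontal."""
    d_now = altitude_m / math.tan(math.radians(pitch_deg))
    d_target = altitude_m / math.tan(math.radians(second_pitch_deg))
    return d_now - d_target

# Drone at 80 m sees the object at 30° down; it wants to lock it at 45°.
print(round(distance_to_fly(80, 30, 45), 1))  # ≈ 58.6 m remaining
```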
According to an embodiment of the invention, the control module 14 is configured to determine the horizontal acceleration of the unmanned aerial vehicle as it moves until the pod maintains the second pitch angle, according to the distance to be flown by the unmanned aerial vehicle; determine the angular speed of the unmanned aerial vehicle during this movement according to the horizontal acceleration; and determine the course angle of the unmanned aerial vehicle during this movement according to the angular speed, so as to determine the attitude data of the unmanned aerial vehicle. One plausible control step is sketched below.
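The chain from distance to acceleration, angular speed and course angle is not given in closed form here; the following sketch shows one plausible control step under assumed limits (a_max, yaw_rate_max and dt are illustrative parameters, not values from the present application).

```python
def attitude_update(dist_m, speed_mps, bearing_deg, heading_deg,
                    a_max=2.0, yaw_rate_max=30.0, dt=0.1):
    """One illustrative control step: choose a horizontal acceleration
    that can still stop within the remaining distance, and a clamped yaw
    rate that turns the heading toward the object's bearing. Returns the
    acceleration command and the updated heading."""
    # Decelerate once the braking distance v^2 / (2a) reaches what is left.
    accel = -a_max if speed_mps ** 2 / (2 * a_max) >= dist_m else a_max
    # Turn toward the bearing, wrapping the error into [-180, 180).
    err = (bearing_deg - heading_deg + 180) % 360 - 180
    yaw_rate = max(-yaw_rate_max, min(yaw_rate_max, err / dt))
    return accel, heading_deg + yaw_rate * dt

print(attitude_update(dist_m=58.6, speed_mps=12.0,
                      bearing_deg=95.0, heading_deg=90.0))
```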
The embodiment of the present application further provides an electronic device. An electronic device 30 according to an embodiment of the present application is described below with reference to fig. 9. The electronic device 30 shown in fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 9, the electronic device 30 is in the form of a general purpose computing device. The components of the electronic device 30 may include, but are not limited to: the at least one processing unit 310, the at least one memory unit 320, and a bus 330 that couples various system components including the memory unit 320 and the processing unit 310.
The storage unit stores program code executable by the processing unit 310, so that the processing unit 310 performs the steps according to various exemplary embodiments of the present invention described in the exemplary method section above. For example, the processing unit 310 may perform the steps shown in fig. 2.
The storage unit 320 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM) 3201 and/or a cache memory unit 3202, and may further include a read-only memory unit (ROM) 3203.
The storage unit 320 may also include a program/utility 3204 having a set (at least one) of program modules 3205, such program modules 3205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 330 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 30 may also communicate with one or more external devices 400 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 30, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 30 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 350. An input/output (I/O) interface 350 is connected to the display unit 340. Also, the electronic device 30 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 360. As shown, the network adapter 360 communicates with the other modules of the electronic device 30 via the bus 330. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 30, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, there is also provided a storage medium having stored thereon a program which, when executed, performs the method described in the method embodiment section above.
According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiment, which may take the form of a portable compact disc read-only memory (CD-ROM) including program code and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto. In this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the statement "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method of road violation identification, comprising the steps of:
acquiring a first image captured when the unmanned aerial vehicle cruises a road;
processing the first image to form road data and information of an object located on the road;
matching violation constraint information according to the information of the object and the road data, and obtaining a matching result;
when the object is determined to be in violation according to the matching result, controlling the unmanned aerial vehicle and the pod to track the object, and shooting a second image;
and acquiring the identification of the object according to the second image.
2. The method according to claim 1, wherein the acquiring of the first image captured when the unmanned aerial vehicle is cruising on the road specifically comprises:
controlling the ground pitch angle of a pod of the unmanned aerial vehicle to be a first pitch angle and controlling the zoom multiple of the pod of the unmanned aerial vehicle to be a first zoom multiple according to the flight height of the unmanned aerial vehicle;
keeping the first pitch angle and the first zoom multiple, and acquiring a first image in a video shot by the unmanned aerial vehicle when cruising on a road;
wherein a camera oriented along or against the heading of the unmanned aerial vehicle is provided in the pod of the unmanned aerial vehicle.
3. The method of claim 1, wherein controlling the unmanned aerial vehicle and the pod to track the object and take a second image comprises:
controlling the pod to lock the object so that the object is centered in the video;
adjusting the attitude of the unmanned aerial vehicle so that the pod keeps locking the object at a second pitch angle;
controlling the zoom factor of the pod to zoom from a first zoom factor to a second zoom factor;
controlling the pod to capture a second image at the second pitch angle and the second zoom factor;
wherein a camera oriented along or against the heading of the unmanned aerial vehicle is provided in the pod of the unmanned aerial vehicle.
4. The method of claim 3, wherein controlling the pod to lock the object so that the object is centered in the video comprises:
acquiring the pixel coordinates of the object in real time and sending the pixel coordinates to the pod, so that the pod adjusts its attitude angle according to the pixel coordinates to keep the object in the center of the video.
5. The method of claim 4, wherein adjusting the attitude of the unmanned aerial vehicle so that the pod keeps locking the object at the second pitch angle comprises:
acquiring the attitude angle of the pod in real time;
when the pitch angle in the attitude angle is not equal to the second pitch angle, determining the distance to be flown by the unmanned aerial vehicle according to the pitch angle;
and determining the attitude data of the unmanned aerial vehicle according to the distance to be flown by the unmanned aerial vehicle.
6. The method of claim 5, wherein determining the distance to be flown by the unmanned aerial vehicle according to the pitch angle comprises:
determining the horizontal distance between the unmanned aerial vehicle and the object according to the flying height and the pitch angle of the unmanned aerial vehicle;
determining the horizontal distance between the unmanned aerial vehicle and the object at which the pod can lock the object at the second pitch angle, according to the flying height of the unmanned aerial vehicle and the second pitch angle;
and determining the distance to be flown by the unmanned aerial vehicle from the difference between these two horizontal distances.
7. The method of claim 5, wherein determining the attitude data of the unmanned aerial vehicle according to the distance to be flown by the unmanned aerial vehicle comprises:
determining the horizontal acceleration of the unmanned aerial vehicle as it moves until the pod maintains the second pitch angle, according to the distance to be flown by the unmanned aerial vehicle;
determining the angular speed of the unmanned aerial vehicle during this movement according to the horizontal acceleration;
and determining the course angle of the unmanned aerial vehicle during this movement according to the angular speed, so as to determine the attitude data of the unmanned aerial vehicle.
8. An apparatus for road violation identification, comprising:
the acquisition module is used for acquiring a first image captured when the unmanned aerial vehicle cruises a road;
a processing module, configured to process the first image to form road data and information of an object located on the road;
the matching module is used for matching violation constraint information according to the information of the object and the road data and obtaining a matching result;
the control module is used for controlling the unmanned aerial vehicle and the pod to track the object and shoot a second image when the object is determined to be in violation according to the matching result;
and an identification module, configured to acquire the identification of the object according to the second image.
9. An electronic device, comprising:
a memory storing computer readable instructions;
a processor to read computer readable instructions stored by the memory to perform the method of any of claims 1-7.
10. A storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-7.
CN202210015976.0A 2022-01-07 2022-01-07 Method and device for identifying road violation, electronic equipment and storage medium Pending CN114373152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210015976.0A CN114373152A (en) 2022-01-07 2022-01-07 Method and device for identifying road violation, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114373152A (en) 2022-04-19

Family

ID=81143671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210015976.0A Pending CN114373152A (en) 2022-01-07 2022-01-07 Method and device for identifying road violation, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114373152A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443716A (en) * 2020-04-23 2020-07-24 西安智文琛软件有限公司 Unmanned aerial vehicle inspection control method, system, storage medium, program and terminal
CN112149595A (en) * 2020-09-29 2020-12-29 爱动超越人工智能科技(北京)有限责任公司 Method for detecting lane line and vehicle violation by using unmanned aerial vehicle
CN113763719A (en) * 2021-10-13 2021-12-07 深圳联和智慧科技有限公司 Unmanned aerial vehicle-based illegal emergency lane occupation detection method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115690767A (en) * 2022-10-26 2023-02-03 北京远度互联科技有限公司 License plate recognition method and device, unmanned aerial vehicle and storage medium
CN115690767B (en) * 2022-10-26 2023-08-22 北京远度互联科技有限公司 License plate recognition method, license plate recognition device, unmanned aerial vehicle and storage medium

Similar Documents

Publication Publication Date Title
US11745742B2 (en) Planning stopping locations for autonomous vehicles
KR102541561B1 (en) Method of providing information for driving vehicle and apparatus thereof
US11113961B2 (en) Driver behavior monitoring
KR102513185B1 (en) rules-based navigation
US11967230B2 (en) System and method for using V2X and sensor data
EP4086875A1 (en) Self-driving method and related device
CN114387533A (en) Method and device for identifying road violation, electronic equipment and storage medium
EP3933439A1 (en) Localization method and localization device
CN116745187A (en) Method and system for predicting the trajectory of an uncertain road user by semantic segmentation of the boundary of a travelable region
CN117280292A (en) Determining a path to a vehicle stop location in a cluttered environment
US20230192141A1 (en) Machine learning to detect and address door protruding from vehicle
CN114373152A (en) Method and device for identifying road violation, electronic equipment and storage medium
CN114333339B (en) Deep neural network functional module de-duplication method
US11884268B2 (en) Motion planning in curvilinear coordinates for autonomous vehicles
CN116724214A (en) Method and system for generating a lane-level map of a region of interest for navigation of an autonomous vehicle
KR102675640B1 (en) Drone for detecting traffic violation cars and method for the same
CN114729810A (en) Pedestrian crossing detection
CN116524311A (en) Road side perception data processing method and system, storage medium and electronic equipment thereof
Messenger et al. Real-time traffic end-of-queue detection and tracking in uav video
CN115775463A (en) Navigation method for automatically driving automobile
US20220343763A1 (en) Identifying parkable areas for autonomous vehicles
CN116745188A (en) Method and system for generating a longitudinal plan for an autonomous vehicle based on the behavior of an uncertain road user
CN114274978A (en) Obstacle avoidance method for unmanned logistics vehicle
KR20220071822A (en) Identification system and method of illegal parking and stopping vehicle numbers using drone images and artificial intelligence technology
CN114373139A (en) Method, device, electronic equipment and storage medium for identifying road violation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination