CN112417922A - Target identification method and device - Google Patents


Info

Publication number
CN112417922A
CN112417922A (application CN201910769052.8A; granted publication CN112417922B)
Authority
CN
China
Prior art keywords
motor vehicle
video frame
brand
current video
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910769052.8A
Other languages
Chinese (zh)
Other versions
CN112417922B (en)
Inventor
梁云 (Liang Yun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority claimed from CN201910769052.8A
Publication of CN112417922A
Application granted
Publication of CN112417922B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a target identification method and device. Candidate operation information corresponding to a non-motor vehicle in the current video frame is determined. Then, for each candidate operating brand in the candidate operation information, it is checked whether the degree of association between the non-motor vehicle and that candidate operating brand is greater than the preset association threshold corresponding to the brand; if so, the candidate operating brand is determined to be the target operating brand of the non-motor vehicle. The target operating brand of the non-motor vehicle in the video frame is thereby identified, which subsequently makes it possible to curb illegal behavior of non-motor vehicles through warnings issued to the target operating brand.

Description

Target identification method and device
Technical Field
The present application relates to intelligent traffic technologies, and in particular, to a target identification method and apparatus.
Background
With the rapid development of the take-away industry, the number of non-motor vehicles on the road has increased dramatically. A non-motor vehicle here mainly refers to one operated for profit, and includes two-wheeled vehicles, three-wheeled vehicles, and the like. Such for-profit non-motor vehicles, which may also be called operational non-motor vehicles, have corresponding operation information. The operation information indicates which merchant brand the non-motor vehicle is currently working for; that merchant brand is also referred to herein as the operating brand.
At present, many detection methods can detect non-motor vehicles in pictures captured by a camera, but they only detect the vehicles and cannot identify their operating brands. It is therefore impossible to confirm whether a detected non-motor vehicle is an operational one, which causes great inconvenience when subsequently regulating illegal behavior of non-motor vehicles through warnings issued to the operating brand.
Disclosure of Invention
The application provides a target identification method and a target identification device for identifying a target operation brand of a non-motor vehicle.
The technical scheme provided by the application comprises the following steps:
a method of object recognition, the method comprising:
obtaining candidate operation information of a non-motor vehicle from the current video frame, wherein the candidate operation information at least comprises: at least one candidate operating brand associated with the non-motor vehicle, and a degree of association between the non-motor vehicle and each candidate operating brand;
and for each candidate operating brand in the candidate operation information, checking whether the degree of association between the non-motor vehicle and the candidate operating brand is greater than a preset association threshold corresponding to that brand, and if so, determining the candidate operating brand as the target operating brand of the non-motor vehicle.
An object recognition apparatus, the apparatus comprising:
an obtaining unit, configured to obtain candidate operation information of a non-motor vehicle from a current video frame, where the candidate operation information at least includes: at least one candidate operating brand associated with the non-motor vehicle, and a degree of association between the non-motor vehicle and each candidate operating brand;
and a determining unit, configured to check, for each candidate operating brand in the candidate operation information, whether the degree of association between the non-motor vehicle and the candidate operating brand is greater than a preset association threshold corresponding to that brand, and if so, to determine the candidate operating brand as the target operating brand of the non-motor vehicle.
An electronic device, comprising:
a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the above-described method.
According to the above technical scheme, candidate operation information corresponding to the non-motor vehicle in the current video frame is obtained. Then, for each candidate operating brand in the candidate operation information, it is checked whether the degree of association between the non-motor vehicle and that brand is greater than the brand's preset association threshold; if so, the candidate operating brand is determined as the target operating brand of the non-motor vehicle. The target operating brand of the non-motor vehicle in the video frame is thereby identified, which makes it possible to regulate illegal behavior of non-motor vehicles through warnings tied to the target operating brand, for example by sending an illegal-behavior warning to the company, or the person in charge of the company, corresponding to the target operating brand. This restrains the illegal driving behavior of all non-motor vehicles under the target operating brand and safeguards road traffic safety.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a method provided by an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of step 101 provided in an embodiment of the present application;
FIG. 3 is a flowchart of another implementation of step 101 provided in an embodiment of the present application;
FIG. 4 is a flowchart of a process performed before step 101 according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating an implementation of step 401 provided in an embodiment of the present application;
FIG. 6 is a flowchart of issuing an illegal alarm according to an embodiment of the present application;
FIG. 7 is a block diagram of an apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic hardware structure diagram of the apparatus shown in fig. 7 according to an embodiment of the present disclosure.
Detailed Description
According to this method, the operating brand of a non-motor vehicle in a video frame is obtained by analyzing the video frame, so that illegal behavior of the non-motor vehicle can be curbed through warnings tied to the target operating brand, for example by sending an illegal-behavior warning to the company, or the person in charge of the company, corresponding to the target operating brand. This restrains the illegal driving behavior of non-motor vehicles under the target operating brand and safeguards road traffic safety.
In order to make the method provided by the present application clearer, the method provided by the present application is described below with reference to the accompanying drawings and examples:
referring to fig. 1, fig. 1 is a flowchart of a method provided in an embodiment of the present application. As an example, the flow shown in fig. 1 may be applied to a camera, where the camera may perform the flow shown in fig. 1 on captured video frames. As another example, the flow shown in fig. 1 may also be applied to a server. The server here can connect multiple cameras simultaneously, and is responsible for executing the flow shown in fig. 1 on the video frames captured by each camera.
The flow shown in fig. 1 is described below:
as shown in fig. 1, the process may include the following steps:
step 101, obtaining candidate operation information of the non-motor vehicle from a current video frame.
As an example, step 101 obtains possible operation information of the non-motor vehicle (such as which operating brand it may currently be working for) from the current video frame. The possible operation information acquired here is referred to as candidate operation information. There are multiple ways to implement step 101; two of them are described by way of example later and are not detailed here.
As an embodiment, the candidate operation information at least includes: at least one candidate operating brand associated with a non-motor vehicle, a degree of association between the non-motor vehicle and the at least one candidate operating brand. Here, the non-motor vehicle is associated with a candidate operating brand, meaning that the non-motor vehicle is currently likely to be engaged in work belonging to the operating brand. As an example, a higher degree of association between a non-motor vehicle and a candidate operating brand indicates a higher likelihood that work being performed by the non-motor vehicle belongs to the candidate operating brand, whereas a lower degree of association between a non-motor vehicle and a candidate operating brand indicates a lower likelihood that work being performed by the non-motor vehicle belongs to the candidate operating brand.
It should be noted that the above-mentioned current video frame represents a video frame to be currently processed, which is named for convenience of description and does not refer to a certain video frame.
Step 102, aiming at each candidate operation brand in the candidate operation information, checking whether the association degree between the non-motor vehicle and the candidate operation brand is larger than a preset association threshold corresponding to the candidate operation brand, and if so, determining the candidate operation brand as the target operation brand of the non-motor vehicle.
After the candidate operation information of the non-motor vehicle is acquired in step 101, each candidate operating brand in that information must be checked to determine whether it is in fact the operating brand to which the non-motor vehicle belongs (referred to as the target operating brand), as described in step 102.
Since step 102 involves a preset association threshold corresponding to each candidate operating brand, the preset association threshold is explained first in order to make step 102 clearer. In the embodiments of the application, a corresponding preset association threshold is set in advance for each of a number of operating brands currently on the market. How these thresholds may be preset is described in the following examples:
as an embodiment, the application can set corresponding preset associated threshold values for different operation brands according to application ranges of the different operation brands. In one example, the larger the application range of the operating brand, the larger the corresponding preset association threshold, and conversely, the smaller the application range of the operating brand, the smaller the corresponding preset association threshold. For example, the application range of the commercial brand "mei qu" in the market is relatively large, and the application range of another commercial brand "jie ye" is relatively small, so that the preset association threshold corresponding to the commercial brand "mei qu" may be larger than the preset association threshold corresponding to the commercial brand "jiye". For example, the preset association threshold corresponding to the "beauty team" of the operating brand is 80%, and the preset association threshold corresponding to the "lucky family" of the operating brand is 45%.
As another embodiment, the application may also set corresponding preset association thresholds for different operating brands according to actual requirements. For example, if the actual requirement is to detect non-motor vehicles of the operating brand "Jieye" near a specified intersection, and vehicles of that brand frequently appear there, the preset association threshold for "Jieye" may be set relatively high, for example 80%, while the thresholds for other operating brands such as "Meituan" are set to 40%.
The above describes, by way of example and not limitation, how a corresponding preset association threshold may be set for each operating brand.
Based on the preset association thresholds set for the different operating brands, step 102 checks, for each candidate operating brand, whether the degree of association between the non-motor vehicle and that brand is greater than the brand's preset association threshold; if so, the candidate operating brand is determined as the target operating brand of the non-motor vehicle. The target operating brand of the non-motor vehicle is thus determined via step 102. Once it is determined, subsequent management of the non-motor vehicle is no longer carried out through the driver but through the target operating brand, as described later.
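The threshold comparison of step 102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the brand names, threshold values, and function names are assumptions.

```python
# Sketch of step 102: select the target operating brand(s) from the
# candidate operation information. Brand names and thresholds are
# illustrative assumptions.

PRESET_THRESHOLDS = {
    "Meituan": 0.80,   # widely deployed brand -> higher threshold
    "Jieye": 0.45,     # narrowly deployed brand -> lower threshold
}
DEFAULT_THRESHOLD = 0.50  # assumed fallback for brands without a preset value


def select_target_brands(candidate_info):
    """candidate_info maps each candidate brand to its association degree (0..1)."""
    targets = []
    for brand, association in candidate_info.items():
        threshold = PRESET_THRESHOLDS.get(brand, DEFAULT_THRESHOLD)
        if association > threshold:  # strictly greater, as step 102 requires
            targets.append(brand)
    return targets
```

A brand is kept only when its association degree exceeds its own preset threshold, so a widely deployed brand needs stronger evidence than a narrowly deployed one.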
Thus, the flow shown in fig. 1 is completed.
It can be seen from the process shown in fig. 1 that, in the present application, candidate operation information corresponding to the non-motor vehicle in the current video frame is determined. Then, for each candidate operating brand in that information, it is checked whether the degree of association between the non-motor vehicle and the candidate operating brand is greater than the brand's preset association threshold; if so, the candidate operating brand is determined as the target operating brand of the non-motor vehicle. The target operating brand of the non-motor vehicle in the video frame is thereby identified, so that illegal driving behavior can subsequently be curbed through warnings tied to the target operating brand, for example by notifying the company, or the person in charge of the company, corresponding to the target operating brand. This restrains the illegal driving behavior of non-motor vehicles under the target operating brand and safeguards road traffic safety.
How to obtain the candidate operation information of the non-motor vehicle from the current video frame in step 101 is described by way of example as follows:
mode 1:
In mode 1, an operation information database is preset. The database stores each operating brand together with its feature information (referred to as operating-brand feature information). The operating-brand feature information includes but is not limited to: the model of the non-motor vehicle (for example, the vehicle model specified by the operating brand), on-board information (for example, the shape of the take-out box or express box, the operating-brand logo, and so on), and the wearing information of the driver (for example, the helmet and clothing specified by the operating brand).
Based on this, in the present embodiment 1, the step 101 of obtaining the candidate operation information of the non-motor vehicle from the current video frame may include the process shown in fig. 2.
Referring to fig. 2, fig. 2 is a flowchart of an implementation of step 101 provided in an embodiment of the present application. As shown in fig. 2, the process may include the following steps:
in step 201, the operation brand feature information (marked as reference operation brand feature information) is identified from the current video frame.
As an example, there are many ways to identify the reference operating-brand feature information from the current video frame in step 201: for example, using an existing image-recognition algorithm, or inputting the current video frame into a trained image-recognition model. The embodiments of the present application do not limit how this identification is performed.
Based on the description of the operation brand feature information in the operation information database, the reference operation brand feature information identified herein includes, but is not limited to: the model of the non-motor vehicle, the on-board information, and the wearing information of the driver of the non-motor vehicle.
Step 202, calculating the similarity between the reference operation brand feature information and each operation brand feature information in the operation information database.
As an embodiment, there are many ways to calculate the similarity between the reference operation brand feature information and each operation brand feature information in the operation information database in this step 202, for example, by using the existing text similarity calculation, and the embodiment of the present application is not limited in particular.
And step 203, selecting candidate operating-brand feature information from the operation information database according to the calculated similarities, and determining the operation information corresponding to the candidate operating-brand feature information as the candidate operation information.
In one example, selecting the candidate operating-brand feature information in step 203 includes: selecting, from the operation information database, the operating-brand feature information whose similarity to the reference operating-brand feature information is greater than a set threshold, and determining the selected feature information as candidate operating-brand feature information. The set threshold may be chosen according to actual conditions; the embodiments of the present application are not specifically limited.
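Steps 201 to 203 of mode 1 can be sketched as follows. The feature representation, the similarity measure, the database contents, and the set threshold are all illustrative assumptions; the patent leaves these choices open.

```python
# Sketch of mode 1: match the recognized reference feature information
# against a preset operation-information database and keep the brands
# whose similarity exceeds a set threshold. All names/values are
# illustrative assumptions.

OPERATION_DB = {
    "Meituan": {"vehicle_model": "two-wheel-A", "box": "yellow-cube", "helmet": "yellow"},
    "Jieye":   {"vehicle_model": "three-wheel-B", "box": "blue-cube", "helmet": "blue"},
}


def feature_similarity(ref, db_entry):
    """Toy similarity: fraction of database feature fields matched exactly."""
    keys = db_entry.keys()
    hits = sum(1 for k in keys if ref.get(k) == db_entry[k])
    return hits / len(keys)


def candidate_operation_info(ref_features, threshold=0.6):
    """Return {brand: similarity} for brands whose similarity > threshold."""
    candidates = {}
    for brand, entry in OPERATION_DB.items():
        sim = feature_similarity(ref_features, entry)
        if sim > threshold:
            candidates[brand] = sim
    return candidates
```

A real system would compute similarity over visual features rather than exact string matches; the structure (identify, compare against each database entry, keep entries above a threshold) is what the flow in fig. 2 describes.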
Thus, the flow shown in fig. 2 is completed. Through the process shown in fig. 2, it is finally realized how to obtain the candidate operation information of the non-motor vehicle from the current video frame.
Mode 1 is described above. Mode 2 is described below.
Mode 2:
in this mode 2, a deep learning model needs to be trained. The deep learning model is used to identify candidate operational information for the non-motor vehicle. How to train the deep learning model is described below:
First, an initial deep learning model is built. The initial model (for example, an initial YOLO model) may be built according to a deep learning framework, and its parameters may take default values.
Second, a large number of sample pictures are annotated, and the annotated sample pictures are used as the training data.
As an example, the annotated content mainly includes: the coordinate information of the rectangular frame in which the non-motor vehicle is located in the sample picture, the operating brand corresponding to the non-motor vehicle, the vehicle type of the non-motor vehicle, on-board information (such as the shape of the take-out box or express box, the operating-brand logo, and so on), and the wearing information of the driver (such as the helmet and clothing worn by the driver). In one example, the annotation of the large number of sample pictures may be performed by a designated server.
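One annotation record covering the fields listed above might look like the following. The field names and values are illustrative assumptions, not a format specified by the patent.

```python
# Sketch of one annotation (calibration) record for a sample picture.
# Field names and values are illustrative assumptions.

sample_annotation = {
    "bbox": [120, 80, 320, 400],          # x, y, width, height of the rectangle
    "operating_brand": "Meituan",          # brand the vehicle works for
    "vehicle_type": "two-wheel",           # vehicle type of the non-motor vehicle
    "onboard_info": {                      # on-board information
        "box_shape": "cube",
        "brand_logo": True,
    },
    "wearing_info": {                      # driver wearing information
        "helmet": "yellow",
        "clothing": "branded",
    },
}
```

Each training sample thus pairs an image with a bounding box plus the brand-relevant attributes the model must learn to recognize.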
Finally, the initial deep learning model is trained with the training data until all parameters in the model meet the requirements, yielding the trained deep learning model.
Based on the trained deep learning model, the step 101 of obtaining the candidate operation information of the non-motor vehicle from the current video frame may include the process shown in fig. 3:
referring to fig. 3, fig. 3 is a flowchart of another implementation of step 101 provided in this embodiment of the present application. As shown in fig. 3, the process may include the following steps:
step 301, inputting the current video frame into the trained deep learning model, and outputting candidate operation information of the non-motor vehicle when the deep learning model identifies that the non-motor vehicle exists in the current video frame.
As an example, the deep learning model first identifies a non-motor vehicle in the current video frame. When one is identified, the model further identifies the vehicle type, the on-board information, and the driver wearing information of that vehicle, and determines from these cues which operating brand (referred to as a candidate operating brand) the non-motor vehicle is likely to be working for; this constitutes the candidate operation information.
In one example, the candidate operating brand identified from the vehicle type may be inconsistent with the one identified from the on-board information and/or the driver wearing information: for instance, the vehicle type may indicate the candidate operating brand "Meituan" while the on-board information and/or wearing information indicates "Jieye". In that case, the deep learning model may determine the candidate operating brand in the following priority order:
the candidate operating brand determined from the vehicle type is considered first, then the one determined from the on-board information, and finally the one determined from the driver wearing information.
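The stated priority rule can be sketched as a simple fallback chain. The function and parameter names are illustrative assumptions.

```python
# Sketch of the priority rule: when the brands inferred from different
# cues disagree, prefer the vehicle-type cue, then on-board information,
# then driver wearing information. Names are illustrative assumptions.

def resolve_brand(by_vehicle_type=None, by_onboard=None, by_wearing=None):
    """Return the first available candidate brand in priority order."""
    for brand in (by_vehicle_type, by_onboard, by_wearing):
        if brand is not None:
            return brand
    return None
```

For example, if the vehicle type suggests one brand and the on-board box suggests another, the vehicle-type brand wins.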
The candidate operation information is as described in mode 1 and is not repeated here.
After the deep learning model is trained, it imposes certain requirements on its input video frames (denoted the video-frame requirement of the model). For example, the requirement may be that the video frame has been filtered, and/or that the size of the video frame does not exceed a specified value. In this case, inputting the current video frame into the trained deep learning model in step 301 may include: performing image processing on the current video frame to obtain a processed video frame that meets the model's video-frame requirement, and inputting the processed video frame into the trained model. The purpose of the image processing is to ensure that the processed frame meets the requirement. In one example, the image processing includes but is not limited to: filtering, and scaling or cropping when the frame size exceeds the specified value.
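The size part of that preprocessing can be sketched as follows. The maximum side length and the scale-down-only policy are illustrative assumptions; a real pipeline would also handle filtering and the model's exact input format.

```python
# Sketch of the size check before model input: scale the frame down
# (never up) so that its longer side does not exceed an assumed
# maximum. Only the target-size computation is shown.

MAX_SIDE = 640  # assumed video-frame requirement of the model


def target_size(width, height, max_side=MAX_SIDE):
    """Return (new_width, new_height) satisfying the size requirement."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # frame already satisfies the requirement
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

A 1920x1080 frame would be scaled to 640x360 before being fed to the model, while a frame already within the limit is passed through unchanged.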
And step 302, acquiring candidate operation information of the non-motor vehicle output by the deep learning model.
The flow shown in fig. 3 is completed. Through the flow shown in fig. 3, it is finally realized how to obtain the candidate operation information of the non-motor vehicle from the current video frame.
This completes the description of mode 2.
In the above mode 2, as an example, the deep learning model may be a YOLO network model. Its training may follow existing YOLO training practice together with the deep-learning training process described above, so that the trained model can identify the candidate operation information of the non-motor vehicle.
How to obtain the candidate operation information of the non-motor vehicle from the current video frame is described in the above manner 1 and manner 2.
As an example, the method may further comprise the process of fig. 4 before obtaining the candidate operating information of the non-motor vehicle from the current video frame.
Referring to fig. 4, fig. 4 is a flowchart executed before step 101 provided in an embodiment of the present application. As shown in fig. 4, the process may include the following steps:
step 401, determining whether there is an object satisfying a setting condition in the current video frame, where the setting condition is: the position of the object in the current video frame is different from the position of the object in the previous video frame of the current video frame, and if yes, step 402 is executed.
Here, if the position of the object in the current video frame is different from the position of the object in the previous video frame of the current video frame, it means that the object has moved, and the object at this time is regarded as a moving object. That is, step 401 is to determine the moving object by combining the current video frame and the previous video frame.
And step 402, continuing to execute the operation of acquiring the candidate operation information of the non-motor vehicle from the current video frame in the step 101.
Step 402 is performed when step 401 determines, by comparing the current video frame with its previous frame, that a moving object exists. A moving object indicates that a traveling non-motor vehicle may be present in the interval from the previous frame to the current frame, so the operation of acquiring the candidate operation information from the current video frame can proceed. When no moving object is found, no traveling non-motor vehicle exists in that interval (for example, there is no non-motor vehicle at all, or there is one but it is stationary), and the acquisition operation need not be executed. In this way, the acquisition of candidate operation information is performed selectively rather than blindly, improving execution flexibility and saving resources.
This completes the flow shown in FIG. 4.
As an example, in step 401, determining whether there is an object satisfying a set condition in the current video frame may include the flow shown in fig. 5:
referring to fig. 5, fig. 5 is a flowchart of a step 401 implemented by an embodiment of the present application. As shown in fig. 5, the process may include the following steps:
step 501, calculating the difference between pixel characteristic parameter values of pixels at the same position in the current video frame and the previous video frame of the current video frame to obtain a pixel characteristic parameter difference value.
Here, the pixel characteristic parameter value of a pixel may take many concrete forms, such as a gray value; the embodiment of the present application is not specifically limited in this respect.
Step 502, if the pixel characteristic parameter difference values with the set proportion in all the pixel characteristic parameter difference values are less than or equal to the set pixel characteristic threshold value, determining that no object meeting the set condition exists in the current video frame, otherwise, determining that an object meeting the set condition exists in the current video frame.
The set proportion can be set according to actual requirements, and the embodiment of the application is not particularly limited.
In one example, when most of the pixel characteristic parameter differences are less than or equal to the set pixel characteristic threshold (that is, a set proportion of all the pixel characteristic parameter differences are less than or equal to the threshold), the current video frame is considered to contain no movable object compared with the previous video frame; that is, no object satisfying the set condition exists in the current video frame. Otherwise, the current video frame is considered to contain a movable object compared with the previous video frame; that is, an object satisfying the set condition exists in the current video frame.
The flow shown in fig. 5 is thus completed. Through the flow shown in fig. 5, it is determined whether an object satisfying the set condition (i.e., a movable object) exists in the current video frame.
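As a rough illustration, the comparison in steps 501 and 502 can be sketched as a simple frame-differencing check. The function below assumes grayscale frames stored as NumPy arrays; the `still_ratio` (the set proportion) and `pixel_threshold` (the set pixel characteristic threshold) names and default values are illustrative, not values mandated by the application:

```python
import numpy as np

def has_movable_object(cur_frame, prev_frame, still_ratio=0.99, pixel_threshold=15):
    """Steps 501-502 sketch: difference the pixel characteristic parameter
    (here, gray values) at the same positions in the two frames, then test
    what fraction of the differences stays at or below the set threshold."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    still_fraction = np.mean(diff <= pixel_threshold)
    # If at least `still_ratio` of the differences are small, no movable object.
    return still_fraction < still_ratio
```

When this gate returns False, the step of acquiring candidate operation information can be skipped for the current frame, which is the resource saving described above.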
The following describes how to manage the non-motor vehicle based on the identified target operation brand after the target operation brand of the non-motor vehicle is identified:
In one example, the candidate operation information may further include: position information of the non-motor vehicle in the current video frame. As an example, the position information here may be the position of a rectangular frame in which the non-motor vehicle is located in the current video frame, such as the coordinates of the top-left vertex of the rectangular frame together with the frame's length and width. The rectangular frame here encloses at least the entire non-motor vehicle.
Based on this, as one embodiment, after determining the candidate operating brand as the target operating brand for the non-motor vehicle, the method further comprises:
and determining whether the non-motor vehicle runs in the non-motor vehicle running area according to the position information, and if not, determining that the non-motor vehicle is illegal and sending an illegal alarm. Generally, the non-motor vehicle has a special driving area, and the embodiment of the present invention maps the current area of the non-motor vehicle according to the position information, and then checks whether the current area of the non-motor vehicle is the driving area of the non-motor vehicle. And when the current region of the non-motor vehicle is not the non-motor vehicle driving region, judging that the non-motor vehicle is illegal and sending an illegal alarm. As to how to issue the violation alarm, the following description will be given, and the details will not be repeated here.
In another example, after determining the candidate operating brand as the target operating brand for the non-motor vehicle, the method further comprises:
checking whether a violation exists in the current video frame, where the violation at least includes: the non-motor vehicle traveling in a reverse direction, and an illegal action performed by a user of the non-motor vehicle;
if so, determining that the non-motor vehicle is in violation, and issuing a violation alarm.
As an embodiment, checking whether a violation exists in the current video frame may draw on existing behavior recognition methods; that is, whether a violation exists in the current video frame can be identified through behavior recognition. When a violation exists in the current video frame, a violation alarm is issued to the responsible person corresponding to the target operation brand.
How to issue violation alarms is described below:
Here, as an embodiment, issuing a violation alarm may include the flow shown in fig. 6:
referring to fig. 6, fig. 6 is a flowchart of issuing an illegal alarm according to an embodiment of the present application. As shown in fig. 6, the process may include the following steps:
Step 601: check whether a specified storage medium records the number of violations of the target operation brand; if not, execute step 602; if so, execute step 603.
In one example, the specified storage medium records a correspondence between operating brands and numbers of violations. On this basis, in step 601 the target operation brand is used as a keyword to search the specified storage medium for a correspondence containing that keyword. If such a correspondence is found, the specified storage medium records the number of violations of the target operation brand; if not, it does not.
Step 602, recording the number of times of violation of the target operation brand as a first value in a specified storage medium. Step 604 is then performed.
As an example, in this step 602, the correspondence between the target operation brand and the number of violations is recorded in a designated storage medium, and the number of violations at this time is a first value. In one example, the first value here is, for example, 1.
Step 603, increasing the number of violations of the target operating brand recorded in the designated storage medium by a first value. Step 604 is then performed.
In one example, the first value here is, for example, 1.
Step 604: check whether the number of violations of the target operation brand reaches a set violation threshold, and if so, issue a violation alarm.
In one embodiment, the violation alarm may be sent to the target company corresponding to the target operation brand, so that on receiving the alarm the target company supervises and regulates the driving behavior of all its non-motor vehicles, thereby ensuring road safety.
In another embodiment, provided that the relevant responsible personnel of the target company corresponding to the target operation brand, such as a company manager, have been determined, the violation alarm may also be sent directly to that person, so that on receiving the alarm the manager supervises and regulates the driving behavior of all the company's non-motor vehicles, thereby ensuring road safety.
It should be noted that the above is only an example of how to issue the violation alarm, and is not limiting.
The flow shown in fig. 6 is thus completed. Through the flow shown in fig. 6, the violation alarm is issued, so that the driving behavior of all non-motor vehicles under the target operation brand is regulated and road safety is ensured.
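The counting logic of steps 601 to 604 can be sketched with a dictionary standing in for the specified storage medium; the first value of 1 and the alarm threshold of 3 are illustrative choices, not values fixed by the application:

```python
def record_violation(brand, counts, first_value=1, alarm_threshold=3):
    """Fig. 6 sketch: look up the brand's violation count (step 601),
    initialize it (step 602) or increase it (step 603), then compare
    against the set violation threshold (step 604)."""
    if brand not in counts:            # step 601: not recorded yet
        counts[brand] = first_value    # step 602
    else:
        counts[brand] += first_value   # step 603
    if counts[brand] >= alarm_threshold:  # step 604
        return f"violation alarm: operating brand {brand}"
    return None
```

The returned alarm string is merely a placeholder for notifying the target company or its responsible person, as described above.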
The method provided by the present application is described above, and the device provided by the present application is described below:
Referring to fig. 7, fig. 7 is a schematic structural diagram of the apparatus provided by the present application. As shown in fig. 7, the apparatus includes:
an obtaining unit, configured to obtain candidate operation information of the non-motor vehicle from a current video frame, where the candidate operation information at least includes: at least one candidate operating brand associated with a non-motor vehicle, a degree of association between the non-motor vehicle and the at least one candidate operating brand;
and a determining unit, configured to check, for each candidate operation brand in the candidate operation information, whether the association degree between the non-motor vehicle and the candidate operation brand is greater than a preset association threshold corresponding to the candidate operation brand, and if so, determine the candidate operation brand as the target operation brand of the non-motor vehicle.
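The determining unit's per-brand check can be sketched as follows, assuming the candidate operation information maps each candidate operating brand to its association degree and each brand has its own preset threshold (the function name and values are illustrative):

```python
def determine_target_brand(candidates, thresholds):
    """For each candidate operating brand, compare its association degree
    with that brand's own preset association threshold; return the first
    brand whose degree exceeds its threshold, or None if none qualifies."""
    for brand, degree in candidates.items():
        if degree > thresholds.get(brand, 1.0):  # missing threshold never passes
            return brand
    return None
```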
As an embodiment, the acquiring unit acquires the candidate operation information of the non-motor vehicle from the current video frame includes:
inputting a current video frame into a trained deep learning model, wherein the deep learning model outputs candidate operation information of a non-motor vehicle when recognizing that the non-motor vehicle exists in the current video frame;
and acquiring candidate operation information of the non-motor vehicle output by the deep learning model.
As an embodiment, the obtaining unit inputting the current video frame to the trained deep learning model includes:
performing image processing on the current video frame to obtain a processed video frame, wherein the processed video frame meets the video frame requirement corresponding to the deep learning model;
and inputting the processed video frame into the trained deep learning model.
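A sketch of such image processing, assuming the model expects a fixed 224x224 input normalized to [0, 1] (the size, scaling, and nearest-neighbor resize are illustrative assumptions, not requirements stated in the application):

```python
import numpy as np

def preprocess_frame(frame, size=(224, 224)):
    """Resize (nearest-neighbor, to avoid extra dependencies) and normalize
    a video frame so it meets an assumed input requirement of the model."""
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```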
As an embodiment, before acquiring the candidate operation information of the non-motor vehicle from the current video frame, the acquiring unit further determines whether an object satisfying a setting condition exists in the current video frame, where the setting condition is: the position of the object in the current video frame is different from the position of the object in the last video frame of the current video frame; if yes, the operation of obtaining the candidate operation information of the non-motor vehicle from the current video frame is continuously executed.
As an embodiment, the determining, by the obtaining unit, whether an object satisfying a set condition exists in the current video frame includes:
calculating the difference between the pixel characteristic parameter values of the pixels at the same position in the current video frame and the previous video frame of the current video frame to obtain a pixel characteristic parameter difference value;
and if the pixel characteristic parameter difference values with the set proportion in all the pixel characteristic parameter difference values are smaller than or equal to the set pixel characteristic threshold value, determining that the current video frame does not have the object meeting the set condition, otherwise, determining that the current video frame has the object meeting the set condition.
As an embodiment, the candidate operational information further includes: position information of the non-motor vehicle in the current video frame;
based on this, the determining unit further determines whether the non-motor vehicle runs in the non-motor vehicle running area according to the position information after determining the candidate operation brand as the target operation brand of the non-motor vehicle, and if not, determines that the non-motor vehicle is illegal and sends an illegal warning.
As an embodiment, after determining the candidate operating brand as the target operating brand of the non-motor vehicle, the determining unit further checks whether a violation exists in the current video frame, the violation at least including: the non-motor vehicle traveling in a reverse direction, and an illegal action performed by a user of the non-motor vehicle; if so, it determines that the non-motor vehicle is in violation and issues a violation alarm.
As an embodiment, the above-mentioned issuing of a violation alarm to the responsible person corresponding to the target operation brand may include:
checking whether a specified storage medium records the illegal times of the target operation brand, and if not, recording the illegal times of the target operation brand as a first value in the specified storage medium; if yes, increasing the illegal times of the target operation brand recorded in the specified storage medium by a first value;
and checking whether the illegal times of the target operation brand reach a set illegal threshold value, and if so, sending an illegal alarm.
Thus, the description of the structure of the apparatus shown in fig. 7 is completed.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 7. Referring to fig. 8, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A method of object recognition, the method comprising:
obtaining candidate operation information of the non-motor vehicle from the current video frame, wherein the candidate operation information at least comprises: at least one candidate operating brand associated with a non-motor vehicle, a degree of association between the non-motor vehicle and the at least one candidate operating brand;
and aiming at each candidate operation brand in the candidate operation information, checking whether the association degree between the non-motor vehicle and the candidate operation brand is greater than a preset association threshold corresponding to the candidate operation brand, and if so, determining the candidate operation brand as the target operation brand of the non-motor vehicle.
2. The method of claim 1, wherein the obtaining candidate operational information of the non-motor vehicle from the current video frame comprises:
inputting a current video frame into a trained deep learning model, wherein the deep learning model outputs candidate operation information of a non-motor vehicle when recognizing that the non-motor vehicle exists in the current video frame;
and acquiring candidate operation information of the non-motor vehicle output by the deep learning model.
3. The method of claim 2, wherein inputting the current video frame to the trained deep learning model comprises:
performing image processing on the current video frame to obtain a processed video frame, wherein the processed video frame meets the video frame requirement corresponding to the deep learning model;
and inputting the processed video frame into the trained deep learning model.
4. The method of any of claims 1 to 3, wherein prior to said obtaining candidate operating information for the non-motor vehicle from the current video frame, the method further comprises:
judging whether an object meeting set conditions exists in the current video frame, wherein the set conditions are as follows: the position of the object in the current video frame is different from the position of the object in the last video frame of the current video frame;
if yes, the operation of obtaining the candidate operation information of the non-motor vehicle from the current video frame is continuously executed.
5. The method according to claim 4, wherein the determining whether the object satisfying the set condition exists in the current video frame comprises:
calculating the difference between the pixel characteristic parameter values of the pixels at the same position in the current video frame and the previous video frame of the current video frame to obtain a pixel characteristic parameter difference value;
and if the pixel characteristic parameter difference values with the set proportion in all the pixel characteristic parameter difference values are smaller than or equal to the set pixel characteristic threshold value, determining that the current video frame does not have the object meeting the set condition, otherwise, determining that the current video frame has the object meeting the set condition.
6. The method of claim 1, wherein the candidate operational information further comprises: position information of the non-motor vehicle in the current video frame;
after determining the candidate operating brand as the target operating brand for the non-motor vehicle, the method further comprises:
and determining whether the non-motor vehicle runs in the non-motor vehicle running area according to the position information, if not, determining that the non-motor vehicle is illegal, and sending an illegal alarm.
7. The method of claim 1, wherein after determining the candidate operating brand as the target operating brand for the non-motor vehicle, the method further comprises:
checking whether a violation exists in the current video frame, wherein the violation at least comprises: the non-motor vehicle traveling in a reverse direction, and an illegal action performed by a user of the non-motor vehicle;
if so, determining that the non-motor vehicle is in violation, and issuing a violation alarm.
8. The method according to claim 6 or 7, wherein the issuing of a violation alarm comprises:
checking whether a specified storage medium records the illegal times of the target operation brand, and if not, recording the illegal times of the target operation brand as a first value in the specified storage medium; if yes, increasing the illegal times of the target operation brand recorded in the specified storage medium by a first value;
and checking whether the illegal times of the target operation brand reach a set illegal threshold value, and if so, sending an illegal alarm.
9. An object recognition apparatus, characterized in that the apparatus comprises:
an obtaining unit, configured to obtain candidate operation information of the non-motor vehicle from a current video frame, where the candidate operation information at least includes: at least one candidate operating brand associated with a non-motor vehicle, a degree of association between the non-motor vehicle and the at least one candidate operating brand;
and a determining unit, configured to check, for each candidate operation brand in the candidate operation information, whether the association degree between the non-motor vehicle and the candidate operation brand is greater than a preset association threshold corresponding to the candidate operation brand, and if so, determine the candidate operation brand as the target operation brand of the non-motor vehicle.
10. The apparatus of claim 9, wherein the obtaining unit obtains the candidate operation information of the non-motor vehicle from the current video frame comprises:
inputting a current video frame into a trained deep learning model, wherein the deep learning model outputs candidate operation information of a non-motor vehicle when recognizing that the non-motor vehicle exists in the current video frame;
and acquiring candidate operation information of the non-motor vehicle output by the deep learning model.
11. The apparatus according to claim 9 or 10, wherein the obtaining unit further comprises, before the obtaining candidate operation information of the non-motor vehicle from the current video frame:
judging whether an object meeting set conditions exists in the current video frame, wherein the set conditions are as follows: the position of the object in the current video frame is different from the position of the object in the last video frame of the current video frame; if yes, the operation of obtaining the candidate operation information of the non-motor vehicle from the current video frame is continuously executed.
12. The apparatus of claim 9, wherein the candidate operational information further comprises: position information of the non-motor vehicle in the current video frame; after the candidate operation brand is determined as the target operation brand of the non-motor vehicle, the determining unit further determines whether the non-motor vehicle runs in a non-motor vehicle running area according to the position information, and if not, determines that the non-motor vehicle is in violation and sends a violation alarm; alternatively,
the determining unit further checks, after determining the candidate operation brand as the target operation brand of the non-motor vehicle, whether a violation exists in the current video frame, wherein the violation at least comprises: the non-motor vehicle traveling in a reverse direction, and an illegal action performed by a user of the non-motor vehicle; if so, it determines that the non-motor vehicle is in violation and issues a violation alarm.
13. An electronic device, comprising:
a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to perform the method steps of any of claims 1-8.
CN201910769052.8A 2019-08-20 2019-08-20 Target identification method and device Active CN112417922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910769052.8A CN112417922B (en) 2019-08-20 2019-08-20 Target identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910769052.8A CN112417922B (en) 2019-08-20 2019-08-20 Target identification method and device

Publications (2)

Publication Number Publication Date
CN112417922A true CN112417922A (en) 2021-02-26
CN112417922B CN112417922B (en) 2024-06-28

Family

ID=74778943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910769052.8A Active CN112417922B (en) 2019-08-20 2019-08-20 Target identification method and device

Country Status (1)

Country Link
CN (1) CN112417922B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050267657A1 (en) * 2004-05-04 2005-12-01 Devdhar Prashant P Method for vehicle classification
US20130216102A1 (en) * 2012-02-22 2013-08-22 Ebay Inc. User identification and personalization based on automotive identifiers
WO2014033885A1 (en) * 2012-08-30 2014-03-06 富士通株式会社 Image processing device, image processing method, and image processing program
CN104239898A (en) * 2014-09-05 2014-12-24 浙江捷尚视觉科技股份有限公司 Method for carrying out fast vehicle comparison and vehicle type recognition at tollgate
CN105469046A (en) * 2015-11-23 2016-04-06 电子科技大学 Vehicle model identification method based on PCA and SURF characteristic cascade
CN105574543A (en) * 2015-12-16 2016-05-11 武汉烽火众智数字技术有限责任公司 Vehicle brand and model identifying method and system based on deep learning
US9792530B1 (en) * 2015-12-28 2017-10-17 Amazon Technologies, Inc. Generating and using a knowledge base for image classification
CN107274666A (en) * 2017-08-03 2017-10-20 周初 A kind of specification shares the monitoring system and method for bicycle driving behavior
CN107301777A (en) * 2016-11-25 2017-10-27 上海炬宏信息技术有限公司 Vehicle peccancy lane change detection method based on video detection technology
CN107871126A (en) * 2017-11-22 2018-04-03 西安翔迅科技有限责任公司 Model recognizing method and system based on deep-neural-network
CN108090429A (en) * 2017-12-08 2018-05-29 浙江捷尚视觉科技股份有限公司 Face bayonet model recognizing method before a kind of classification
US20180174446A1 (en) * 2015-02-09 2018-06-21 Kevin Sunlin Wang System and method for traffic violation avoidance
WO2018157862A1 (en) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Vehicle type recognition method and device, storage medium and electronic device
CN108701324A (en) * 2018-05-31 2018-10-23 深圳市元征科技股份有限公司 A kind of management method and server of shared vehicle
CN109166284A (en) * 2018-09-11 2019-01-08 广东省电子技术研究所 A kind of unlawful practice alarm system and unlawful practice alarm method
CN109618140A (en) * 2019-01-14 2019-04-12 上海钧正网络科技有限公司 Vehicle monitoring method, apparatus, system and server based on video monitoring
CN109635645A (en) * 2018-11-01 2019-04-16 深圳云天励飞技术有限公司 The illegal monitoring and managing method of Manpower Transportation, device and electronic equipment
CN109800633A (en) * 2018-12-11 2019-05-24 深圳云天励飞技术有限公司 A kind of illegal judgment method of Manpower Transportation, device and electronic equipment
CN109993032A (en) * 2017-12-29 2019-07-09 杭州海康威视数字技术股份有限公司 A kind of shared bicycle target identification method, device and camera


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QIAN WANG ET AL.: "A Novel Fine-Grained Method for Vehicle Type Recognition Based on the Locally Enhanced PCANet Neural Network", Journal of Computer Science and Technology, vol. 22, 23 March 2018 (2018-03-23), pages 335-350, XP036466117, DOI: 10.1007/s11390-018-1822-7 *
YE LIN ET AL.: "Analysis of the traffic safety of electric bicycles in the food-delivery industry", Road Traffic Management, 31 July 2018 (2018-07-31), pages 28-30 *
YE QING ET AL.: "Research on an incremental learning classification algorithm based on nearest-neighbor classification", Computer Engineering and Applications, vol. 52, no. 20, 31 December 2016 (2016-12-31), pages 154-157 *
YANG JUAN ET AL.: "Fine-grained vehicle model recognition with a region proposal network", Journal of Image and Graphics, vol. 23, no. 6, 31 December 2018 (2018-12-31), pages 837-845 *

Also Published As

Publication number Publication date
CN112417922B (en) 2024-06-28

Similar Documents

Publication Publication Date Title
CN108725440B (en) Forward collision control method and apparatus, electronic device, program, and medium
CN113179368B (en) Vehicle loss assessment data processing method and device, processing equipment and client
US20190122059A1 (en) Signal light detection
CN108665373B (en) Interactive processing method and device for vehicle loss assessment, processing equipment and client
CN110826370B (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
US11930293B2 (en) Systems and methods for redaction of screens
US10769454B2 (en) Camera blockage detection for autonomous driving systems
CN103927762B (en) Target vehicle automatic tracking method and device
CN113366487A (en) Operation determination method and device based on expression group and electronic equipment
TW201947528A (en) Vehicle damage identification processing method, processing device, client and server
CN112200081A (en) Abnormal behavior identification method and device, electronic equipment and storage medium
CN110956081B (en) Method and device for identifying position relationship between vehicle and traffic marking and storage medium
CN107133629B (en) Picture classification method and device and mobile terminal
CN111192277A (en) Instance partitioning method and device
CN107748882B (en) Lane line detection method and device
CN111783573B (en) High beam detection method, device and equipment
CN111222409A (en) Vehicle brand labeling method, device and system
CN105976570A (en) Driver smoking behavior real-time monitoring method based on vehicle video monitoring
CN112163544A (en) Method and system for judging random placement of non-motor vehicles
CN109800678A (en) The attribute determining method and device of object in a kind of video
US10007842B2 (en) Same person determination device and method, and control program therefor
US11961308B2 (en) Camera blockage detection for autonomous driving systems
JP2015019296A (en) Image processing system, image processing method, and image processing program
CN112417922A (en) Target identification method and device
CN113378803B (en) Road traffic accident detection method, device, computer and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant