CN112836631A - Vehicle axle number determining method and device, electronic equipment and storage medium - Google Patents

Vehicle axle number determining method and device, electronic equipment and storage medium

Info

Publication number
CN112836631A
CN112836631A
Authority
CN
China
Prior art keywords
axle
vehicle
wheel axle
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110138962.3A
Other languages
Chinese (zh)
Inventor
王磊
周清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Yunjitang Information Technology Co ltd
Original Assignee
Nanjing Yunjitang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Yunjitang Information Technology Co ltd
Priority to CN202110138962.3A
Publication of CN112836631A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08: Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the technical field of vehicles and provides a method and device for determining the number of vehicle axles, an electronic device, and a storage medium. The method comprises the following steps: determining a wheel axle detection result for each video frame of a target video, wherein the target video is the video corresponding to the period from when the vehicle enters a camera shooting area to when the vehicle leaves that area, and the wheel axle detection result comprises information of a wheel axle image; determining an adaptive axle number calculation window for the target video; determining, according to each frame's wheel axle detection result and the adaptive axle number calculation window, whether a wheel axle image appears within the window in that frame; and determining the number of vehicle axles of the vehicle according to the appearance of wheel axle images within the adaptive axle number calculation window across the consecutive video frames. The number of vehicle axles can thus be determined accurately.

Description

Vehicle axle number determining method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of vehicles, and particularly relates to a vehicle axle number determining method and device, electronic equipment and a storage medium.
Background
The number of vehicle axles is the total number of axles mounted under the vehicle chassis. At present, the axle count of a vehicle travelling on a road is generally detected in order to determine whether the vehicle complies with operating regulations. However, existing methods for determining the number of vehicle axles are not sufficiently accurate.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and device for determining the number of vehicle axles, an electronic device, and a storage medium, so as to solve the prior-art problem of determining the number of vehicle axles accurately.
A first aspect of an embodiment of the present application provides a vehicle axle number determination method, including:
determining a wheel axle detection result for each video frame of a target video, wherein the target video is the video corresponding to the period from when the vehicle enters a camera shooting area to when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; and the wheel axle image is an image corresponding to a single wheel axle of the vehicle;
determining an adaptive axle number calculation window for the target video according to the information of each wheel axle image, wherein the adaptive axle number calculation window represents an estimated image detection region, located at a target position, that is able to contain a single wheel axle image;
determining, for each video frame, whether a wheel axle image appears within the adaptive axle number calculation window, according to that frame's wheel axle detection result and the adaptive axle number calculation window; and
determining the number of vehicle axles of the vehicle according to the appearance of wheel axle images within the adaptive axle number calculation window across the consecutive video frames.
A second aspect of an embodiment of the present application provides a vehicle axle number determination device, including:
the wheel axle detection result determining unit is used for determining a wheel axle detection result of each frame of video frame in a target video, wherein the target video is a video corresponding to a time period from the time when a vehicle enters a camera shooting area to the time when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle;
the self-adaptive wheel axle number calculation window determining unit is used for determining a self-adaptive wheel axle number calculation window of the target video according to the information of each wheel axle image; the adaptive axle number calculation window is used for representing: an estimated image detection area which is located at the target position and can contain a single wheel axle image;
the wheel axle image occurrence condition determining unit is used for determining the wheel axle image occurrence condition of each frame of the video frame in the self-adaptive wheel axle number calculating window according to the wheel axle detection result of each frame of the video frame and the self-adaptive wheel axle number calculating window;
and the vehicle axle number determining unit is used for determining the number of vehicle axles of the vehicle according to the appearance of wheel axle images within the adaptive axle number calculation window across the consecutive video frames.
A third aspect of the embodiments of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, causes the electronic device to implement the steps of the vehicle axle number determination method described above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes an electronic device to implement the steps of the vehicle axle number determination method described above.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to execute the vehicle axle number determination method according to the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages. The target video is the video corresponding to the period from when the vehicle enters the camera shooting area to when it leaves that area, and the wheel axle detection result comprises information of the wheel axle image, so the wheel axle detection results determined for the video frames of the target video completely and continuously represent the image information corresponding to each wheel axle (that is, the information of each wheel axle image) for the duration in which the vehicle passes through the camera shooting area. An adaptive axle number calculation window is then determined from the information of each wheel axle image contained in all the wheel axle detection results. This window represents an estimated image detection region, located at a target position, that can contain a single wheel axle image; in other words, it is a fixed image detection region in which a wheel axle image is likely to appear, and it therefore corresponds to a fixed actual spatial region (hereinafter, the designated spatial region) within the camera shooting area through which a vehicle wheel axle is likely to pass. Accordingly, determining from each frame's wheel axle detection result and the adaptive axle number calculation window whether a wheel axle image appears within the window reflects whether a wheel axle of the vehicle is passing through the designated spatial region of the actual camera shooting area (i.e. the actual physical space) at each point in time. From the appearance of wheel axle images within the adaptive axle number calculation window across consecutive video frames, the number of wheel axles passing through the designated spatial region can be counted on the basis of temporal continuity and the fixed spatial position, so the number of vehicle axles of the vehicle is obtained accurately.
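The counting logic described above (detect axles per frame, fit a fixed window, count new appearances across consecutive frames) can be sketched in a few lines of Python. All names here (`determine_axle_count`, `detect_axles`, `fit_window`, `overlaps`) are illustrative stand-ins, not identifiers from the patent:

```python
def determine_axle_count(target_video_frames, detect_axles, fit_window):
    """Sketch of the claimed method: detect axles per frame, estimate an
    adaptive counting window, then count axles crossing that window."""
    # Step 1: per-frame wheel axle detection results (lists of boxes)
    detections = [detect_axles(frame) for frame in target_video_frames]

    # Step 2: adaptive axle number calculation window estimated from all
    # detected axle boxes (a fixed region sized to hold one axle image)
    window = fit_window([box for det in detections for box in det])

    # Steps 3-4: count rising edges of "an axle box overlaps the window"
    # across consecutive frames; each new appearance is one axle
    axle_count, previously_present = 0, False
    for det in detections:
        present = any(overlaps(box, window) for box in det)
        if present and not previously_present:
            axle_count += 1
        previously_present = present
    return axle_count

def overlaps(box, window):
    # box/window as (x1, y1, x2, y2); axis-aligned intersection test
    return not (box[2] < window[0] or box[0] > window[2]
                or box[3] < window[1] or box[1] > window[3])
```

With per-frame detections `[[], [axle], [axle], [], [axle], []]` this counts two distinct axle passages, since the second appearance only increments the count after a frame in which the window was empty.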
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a vehicle axle number determining method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating an implementation of a method for determining the number of axles of a vehicle according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an axle identification box provided in an embodiment of the present application;
FIG. 4 is an exemplary diagram of an adaptive axle count window and axle detection window provided in an embodiment of the present application;
FIG. 5 is a state transition diagram of a vehicle axle count determination process provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of the wheel axle arrangements occurring in the wheel axle detection window according to an embodiment of the present application;
FIG. 7 is a state transition diagram of a process for determining an axle set arrangement according to an embodiment of the present application;
FIG. 8 is a state transition diagram of another axle group arrangement determination process provided by embodiments of the present application;
fig. 9 is a schematic view of a vehicle axle number determining apparatus according to an embodiment of the present application;
fig. 10 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
The number of axles is an important item in vehicle identification. In practical applications, the number of axles of a vehicle travelling on a road is detected in order to determine whether the vehicle complies with operating regulations (for example, whether it is overloaded, i.e. whether its current load exceeds the total mass limit for its axle type). How to detect the number of vehicle axles efficiently and accurately has therefore become a problem to be solved by those skilled in the art.
At present, the number of vehicle axles is mainly detected by traditional piezoelectric-sensing identification, traditional laser identification, or pattern recognition based on a panoramic image of the vehicle side. Piezoelectric-sensing identification requires several groups of pressure sensors, such as contact piezoelectric switch sensors or piezoelectric film sensors, to be installed at designated positions; installation is complex, implementation costs are high, and the sensors are easily damaged by the repeated rolling of vehicles, making maintenance and replacement difficult. In use, light vehicles or vehicle shaking during travel may also cause inaccurate sensor readings, further lowering the accuracy of the determined axle count. Laser identification is difficult to calibrate, and the accuracy of axle recognition is hard to guarantee. Pattern recognition based on a panoramic side image requires the camera to be installed at a distance, sometimes with a wide-angle lens: long-range shooting can suffer from insufficient fill light and unclear images (and adding fill-light equipment means long wiring runs and inconvenient installation), while panoramic images taken with a wide-angle lens exhibit a degree of distortion, so the axle count determined from them is inaccurate. The existing vehicle axle number detection methods therefore suffer from complex equipment installation and low accuracy.
To solve this technical problem, the present application provides a vehicle axle number determining method and device, an electronic device, and a storage medium. In the method, the target video is the video corresponding to the period from when the vehicle enters the camera shooting area to when it leaves that area, and the wheel axle detection result comprises information of a wheel axle image, so the wheel axle detection results determined for the video frames of the target video completely and continuously represent the image information corresponding to each wheel axle (that is, the information of each wheel axle image) for the duration in which the vehicle passes through the camera shooting area. An adaptive axle number calculation window is then determined from the information of each wheel axle image contained in all the wheel axle detection results. The window represents an estimated image detection region, located at a target position, that can contain a single wheel axle image, i.e. a fixed image detection region in which a wheel axle image is likely to appear, and it therefore corresponds to a fixed designated spatial region within the camera shooting area through which a vehicle wheel axle is likely to pass. Determining, from each frame's wheel axle detection result and the adaptive axle number calculation window, whether a wheel axle image appears within the window thus reflects whether a wheel axle is passing through the designated spatial region of the actual camera shooting area (i.e. the actual physical space) at each point in time, and from the appearance of wheel axle images within the window across consecutive video frames, the number of wheel axles passing through that region can be counted on the basis of temporal continuity and the fixed spatial position, so the number of vehicle axles is obtained accurately. Compared with existing detection methods, this method requires no complex hardware installation and no panoramic image of the vehicle: it only needs a target video containing the vehicle's wheel axle images (the requirement on the shooting field of view is low, since a simple camera shooting a local area containing the wheel axles at close range suffices, with no long-distance shooting), and it determines the number of vehicle axles conveniently, efficiently, and accurately on the basis of temporal continuity and the relative fixity of spatial position.
Fig. 1 shows a schematic application scenario of the vehicle axle number determining method provided by an embodiment of the present application. In this scenario, the camera device is mounted at close range on an upright post beside the road and shoots the side of passing vehicles, so its shooting area covers only part of the vehicle. The video it captures need only contain a partial image of the vehicle; there is no need to mount the camera at a distance or use a wide-angle lens to acquire a panoramic image.
The first embodiment is as follows:
fig. 2 is a schematic flowchart illustrating a vehicle axle number determining method provided in an embodiment of the present application, where an execution subject of the vehicle axle number determining method is an electronic device. In one embodiment, the electronic device is a computing device capable of acquiring video data captured by the camera device shown in fig. 1, including but not limited to a smartphone, a tablet, a wearable device, a computer, a server, and the like; in another embodiment, the electronic device may also be directly an image pickup device as shown in fig. 1. The vehicle axle number determination method shown in fig. 2 is detailed as follows:
in S201, determining a wheel axle detection result of each frame of video frame in a target video, wherein the target video is a video corresponding to a time period from the time when a vehicle enters a camera shooting area to the time when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle.
In the embodiment of the present application, the target video is the video corresponding to the period from when the vehicle enters the camera shooting area shown in fig. 1 to when it leaves that area. The vehicle entering the camera shooting area is the vehicle whose axle count is currently to be determined. In one embodiment, the target video comprises every video frame captured by the camera device during that period. In another embodiment, the target video comprises video frames collected from the camera device at a preset sampling frequency during that period, the preset sampling frequency being lower than the camera device's shooting frame rate. For example, if the camera shoots at 30 frames per second, the sampling frequency may be 15 frames per second; this reduces the number of video frames to be processed while still accurately tracking the image information of the vehicle passing through the camera area, improving the efficiency of the method. The target video may be obtained by acquiring the video data currently captured by the camera device in real time; alternatively, a video captured in the past, covering the period from the vehicle entering to leaving the camera area, may be retrieved as the target video from a storage unit holding video data previously captured by the camera device.
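The frame-sampling variant above (capture at 30 frames/second, sample at 15 frames/second) amounts to keeping every n-th frame. A minimal sketch, with an illustrative helper name:

```python
def subsample_frames(frames, capture_fps, sample_fps):
    """Keep every (capture_fps // sample_fps)-th frame, halving the
    processing load for a 30 fps capture sampled at 15 fps."""
    if sample_fps >= capture_fps:
        return list(frames)          # nothing to drop
    step = capture_fps // sample_fps
    return frames[::step]
```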
After the target video is obtained, the wheel axle detection result of each of its video frames can be determined. The wheel axle detection result includes information of a wheel axle image, which is an image corresponding to a single wheel axle of the vehicle. That is, by performing wheel axle detection on each video frame, it can be determined whether the frame contains a wheel axle image corresponding to a wheel axle of the vehicle, and the resulting detection result reflects whether a wheel axle of the vehicle is passing through the camera shooting area at the moment corresponding to that frame. Specifically, when a wheel axle image is present in the video frame, the information of the wheel axle image may include: indication information indicating that a wheel axle image exists in the frame, the number of wheel axle images, and the position and size of the image region occupied by each wheel axle image. The position and size information may be the pixel width and height of the minimum circumscribed quadrilateral of each recognized wheel axle image (i.e. the wheel axle identification box), together with the pixel coordinates of the box's four vertices; the minimum circumscribed quadrilateral may in particular be the minimum circumscribed rectangle. When no wheel axle image is present in the video frame, the information of the wheel axle image includes indication information indicating that no wheel axle image exists in the frame.
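The per-frame detection result described above can be modelled as a small data structure. The class and field names below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AxleDetectionResult:
    """Per-frame wheel axle detection result: a presence flag plus one
    minimum bounding rectangle (axle identification box) per detected
    axle image, as (x, y, width, height) in pixel coordinates."""
    has_axle: bool
    boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)

    @property
    def axle_count(self) -> int:
        # Number of wheel axle images visible in this frame
        return len(self.boxes)

# A frame in which two wheel axles are visible:
result = AxleDetectionResult(has_axle=True,
                             boxes=[(40, 120, 30, 30), (180, 120, 30, 30)])
```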
Optionally, the step S201 includes:
acquiring each video frame obtained by shooting the side of the vehicle in the period from a first moment to a second moment, and determining the wheel axle detection result of each such video frame, wherein the first moment is the moment at which the vehicle is detected entering the camera shooting area and the second moment is the moment at which the vehicle is detected leaving it; and
determining the wheel axle detection results of the video frames from the first moment to the second moment as the wheel axle detection results of the video frames in the target video.
In one embodiment, the camera device shoots continuously at the roadside. When it detects that a vehicle image has begun to appear in the captured images, the current time is taken as the moment the vehicle is detected entering the camera shooting area and recorded as the first moment. The camera device then continues to shoot the side of the vehicle, and the electronic device of the embodiment acquires each captured video frame in real time and performs wheel axle detection on it to obtain its wheel axle detection result. When the camera device later detects that the vehicle image has disappeared from the captured images, the vehicle is deemed to have left the camera shooting area and that time is recorded as the second moment, at which point frame acquisition and/or wheel axle detection may stop. In another embodiment, the camera device is in a standby state when no vehicle is passing. When a sensor (for example, a photoelectric or pressure sensor) at one end of the camera area detects a vehicle entering, that time is recorded as the first moment and the camera device is woken to shoot the side of the vehicle continuously; the electronic device acquires each captured video frame in real time and performs wheel axle detection on it to obtain each frame's wheel axle detection result. When the sensor at the other end of the camera area detects the vehicle leaving, that time is recorded as the second moment and the camera device is instructed to enter a sleep state.
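The sensor-triggered variant (standby until the entry sensor fires, record until the exit sensor fires) behaves like a small state machine. A toy sketch with illustrative names, not code from the patent:

```python
class CaptureController:
    """Sketch of the sensor-triggered variant: the camera sleeps until
    the entry sensor fires, records until the exit sensor fires, then
    returns to standby."""
    def __init__(self):
        self.state = "standby"
        self.frames = []

    def on_entry_sensor(self, timestamp):
        # First moment: vehicle detected entering the camera area
        self.first_moment = timestamp
        self.state = "recording"

    def on_frame(self, frame):
        # Frames are kept only while the vehicle is in the camera area
        if self.state == "recording":
            self.frames.append(frame)

    def on_exit_sensor(self, timestamp):
        # Second moment: vehicle detected leaving; camera goes back to sleep
        self.second_moment = timestamp
        self.state = "standby"
        return self.frames   # the target video's frames
```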
The wheel axle detection results determined for the video frames in the period from the first moment to the second moment are then taken as the detection results of the video frames in the target video.
In the embodiment of the present application, each video frame obtained by shooting the side of the vehicle between the first moment, when the vehicle enters the camera shooting area, and the second moment, when it leaves, can be acquired in real time and its wheel axle detection result determined, so that the wheel axle detection results of all video frames in the target video are obtained efficiently, completely, and in real time.
Optionally, the step S201 includes:
and determining the wheel axle detection result of each frame of video frame in the target video according to each frame of video frame in the target video and the pre-trained deep learning model.
In the embodiment of the present application, wheel axle detection is performed on each video frame of the target video by a pre-trained deep learning model, so that the wheel axle detection result of each frame is determined accurately. Specifically, the pre-trained model is an unsupervised deep learning model obtained by prior training. Optionally, each video frame may first be cropped according to a preset image region to obtain the picture to be processed for that frame; the picture is then input into the pre-trained deep learning model for processing, which yields the wheel axle detection result for the frame. The result may include indication information that the frame contains wheel axle images, the number of recognized wheel axle images, and the size and position of the wheel axle identification box corresponding to each wheel axle image; or indication information that the frame contains no wheel axle image.
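Cropping each frame to the preset image region before model inference can be sketched as follows. The frame is represented here as a plain 2-D list of pixel rows and the region as (x, y, width, height); both conventions are assumptions for illustration:

```python
def crop_to_region(frame, region):
    """Crop a frame (a 2-D list of pixel rows) to a preset image region
    given as (x, y, width, height), producing the picture to be fed to
    the detection model."""
    x, y, w, h = region
    # Select rows y..y+h, then columns x..x+w within each row
    return [row[x:x + w] for row in frame[y:y + h]]
```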
Optionally, before the wheel axle detection result of each video frame in the target video is determined according to the video frames and the pre-trained deep learning model, the method further includes:
A1: acquiring a preset number of sample videos, the sample videos containing side image information of vehicles;
A2: cropping a preset image region from each video frame of the sample videos to obtain sample pictures;
A3: marking the information of the wheel axle images in the sample pictures as sample labels, obtaining sample pictures each carrying a sample label;
A4: converting each labelled sample picture into an input vector and feeding it into the deep learning model to be trained, thereby obtaining the pre-trained deep learning model.
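Steps A1 to A4 amount to building a labelled set of cropped pictures. The sketch below assumes the same (x, y, width, height) region convention as above and uses a hypothetical `label_axles` callable in place of manual annotation:

```python
def build_training_set(sample_videos, preset_region, label_axles):
    """Sketch of steps A1-A4: crop each frame of each sample video to
    the fixed preset image region, attach a wheel axle label, and
    collect the (picture, label) pairs the model will be trained on.
    `label_axles` stands in for the manual marking step."""
    x, y, w, h = preset_region
    dataset = []
    for video in sample_videos:        # A1: the acquired sample videos
        for frame in video:
            # A2: crop the fixed preset image region from the frame
            picture = [row[x:x + w] for row in frame[y:y + h]]
            # A3: attach the sample label describing the axle images
            dataset.append((picture, label_axles(picture)))
    return dataset                     # A4 trains on these labelled pairs
```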
In the embodiment of the present application, before the wheel axle detection result of each video frame in the target video is determined by the pre-trained deep learning model, the model is trained through steps A1 to A4 to obtain the pre-trained deep learning model.
In A1, a preset number of sample videos containing side image information of vehicles are captured in advance by the camera device. The sample videos include both video frames in which wheel axle images are present and video frames in which they are not. The larger the preset number, the more accurately the trained deep learning model can process the pictures corresponding to the video frames and the more accurate the resulting wheel axle detection.
In a2, each video frame of each sample video is cropped according to a preset image area with fixed position and size, so as to obtain the sample pictures.
In a3, the information of the wheel axle images in the sample pictures is marked, for example: indication information indicating that a sample picture contains wheel axle images, the number of the wheel axle images, and the size and position information of the wheel axle identification frame corresponding to each wheel axle image; or indication information indicating that the sample picture contains no wheel axle image. The marked information is used as the sample label, so that sample pictures each carrying a sample label are obtained.
In a4, each sample picture carrying a sample label is converted into vector form to obtain the input vectors, which are input into the deep learning model to be trained; training finally yields the pre-trained deep learning model. Specifically, the deep learning model may be an unsupervised learning model based on an autoencoder, and correspondingly the training of the deep learning model to be trained may include the following steps:
A41: converting the sample picture into the corresponding input vector and inputting it into the deep learning model to be trained.
A42: encoding the input vector through the encoder of the deep learning model to be trained, so as to convert the input vector into the corresponding feature code.
A43: decoding the feature code through the decoder of the deep learning model to be trained to obtain an output vector. The output vector is in a data form the unsupervised deep learning model can recognize, so that the model can learn the features to be extracted and obtain a representation of the sample picture carrying the sample label.
A44: calculating the error between the output vector and the input vector, and judging whether the error meets a preset condition. The preset condition may be that the error is less than a preset error threshold, which may be a percentage value such as 5%. The smaller the error threshold, the more stable the finally trained deep learning model and the higher its recognition accuracy.
A45: when the error does not meet the preset condition, adjusting the model parameters of the deep learning model to be trained to obtain an updated model to be trained, and returning to step A41 and the subsequent steps.
A46: when the error meets the preset condition, determining that training of the deep learning model is complete, and taking the model at this point as the final pre-trained deep learning model.
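The A41 to A46 control flow can be sketched as the following training loop; the `model` object with `encode`/`decode`/`error`/`update` methods is a hypothetical interface standing in for the autoencoder, not an API from the original disclosure:

```python
def train_autoencoder(samples, model, error_threshold=0.05, max_epochs=1000):
    """Steps A41-A46: iterate until the reconstruction error meets the preset condition."""
    mean_err = float("inf")
    for _ in range(max_epochs):
        total_err = 0.0
        for x in samples:                     # A41: sample picture as input vector
            code = model.encode(x)            # A42: convert to feature code
            out = model.decode(code)          # A43: decode to output vector
            total_err += model.error(out, x)  # A44: error between output and input
        mean_err = total_err / len(samples)
        if mean_err < error_threshold:        # A46: preset condition met, training done
            return model, mean_err
        model.update(samples)                 # A45: adjust model parameters, loop again
    return model, mean_err
```

The loop returns the trained model together with its final reconstruction error, so the caller can verify the preset condition was actually reached.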
In the embodiment of the application, the sample pictures carrying sample labels are obtained in advance from the preset number of sample videos through image cropping and information marking, and are used to train the deep learning model to be trained. This finally yields a pre-trained deep learning model capable of accurate wheel axle detection, which improves the accuracy of the wheel axle detection results and thus of the vehicle axle number determining method.
In S202, the adaptive wheel axle number calculation window of the target video is determined according to the information of each wheel axle image; the adaptive wheel axle number calculation window represents an estimated image detection area that is located at the target position and can contain a single wheel axle image.
In the embodiment of the application, the information of all wheel axle images contained in each video frame, that is, of every wheel axle image appearing in the target video, can be obtained from the per-frame wheel axle detection results; specifically, this information may include the size and position of each wheel axle image. From this size and position information, the adaptive wheel axle number calculation window corresponding to the target video can be determined: an image detection area whose target position within the video frame is estimated from the size and position information of the wheel axle images, and whose size can contain a single wheel axle image. By setting the adaptive wheel axle number calculation window reasonably, it can be ensured that at most one qualifying wheel axle image appears in the window at a time. For the target video, the intersection of the adaptive wheel axle number calculation window with each video frame is a fixed image area, which corresponds to a fixed actual physical space area in the camera shooting area (the designated space area for short).
Optionally, the information of a wheel axle image includes the size and vertex coordinates of its wheel axle identification frame, the wheel axle identification frame being the minimum circumscribed quadrilateral frame of the wheel axle image; correspondingly, determining the adaptive wheel axle number calculation window of the target video according to the information of each wheel axle image includes:
b1: determining a target size according to the sizes and/or vertex coordinates of the wheel axle identification frames;
b2: determining a target position according to preset horizontal position information and the vertex coordinates of the wheel axle identification frames;
b3: taking the target size as the size of the adaptive wheel axle number calculation window, and the target position as its position, so as to obtain the adaptive wheel axle number calculation window of the target video.
In the embodiment of the application, the image area of a video frame is the image area obtained after the camera device shoots the camera area. Within a video frame, the information of a wheel axle image is represented by the information of its wheel axle identification frame, the minimum circumscribed quadrilateral frame of the identified wheel axle image. Specifically, the information of the wheel axle identification frames may include the number of wheel axle identification frames appearing in the image area of the video frame, and the size and vertex coordinates of each frame. Specifically, the size of each wheel axle identification frame may include its length in the abscissa direction shown in fig. 3 (in units of pixels, the pixel length) and its width in the ordinate direction shown in fig. 3 (in units of pixels, the pixel width). Specifically, the vertex coordinates of each wheel axle identification frame may include the coordinates of its four vertices, i.e. the first, second, third and fourth vertices shown in fig. 3.
In step B1, the target size is a size that can contain a single wheel axle image. In one embodiment, the target size may be determined from the sizes of the wheel axle identification frames. Specifically, from the sizes of all wheel axle identification frames identified in the target video, the maximum of their lengths, i.e. the maximum length, denoted L_max, is determined, as is the maximum of their widths, i.e. the maximum width, denoted W_max; then L_max is taken as the length in the abscissa direction and W_max as the width in the ordinate direction, giving a target size of L_max × W_max. In another embodiment, from the vertex coordinates of all wheel axle identification frames identified in the target video, the minimum value in the ordinate direction, called the minimum ordinate and denoted Y_top, is determined; likewise the maximum ordinate, denoted Y_bot; the minimum abscissa, denoted X_min; and the maximum abscissa, denoted X_max. Then the difference L_d between the maximum abscissa X_max and the minimum abscissa X_min is taken as the length, and the difference W_d between the maximum ordinate Y_bot and the minimum ordinate Y_top as the width, giving a target size of L_d × W_d.
Generally, determining the target size directly from the sizes of all wheel axle identification frames in the target video requires comparing less data and is more efficient. Determining it from the vertex coordinates of all wheel axle identification frames, by contrast, is compatible with the case where the sides of a wheel axle rectangular frame are not parallel to the coordinate axes (i.e. the vehicle's wheel axle has a height or left-right deviation when passing through the shooting area), and so can better ensure that the determined target size completely contains a single wheel axle image; the vertex-coordinate approach therefore has higher stability and accuracy. Further, considering that the wheel axle of a vehicle passing through the imaging area is usually deviated in height in the vertical direction by an uneven road surface, such deviation usually means that the maximum width W_max determined above cannot completely contain some wheel axle images in the ordinate direction, while the maximum length L_max is unaffected, i.e. L_max can completely and accurately contain a single wheel axle image in the abscissa direction. Thus, in one embodiment combining efficiency and accuracy, the maximum length L_max determined from the sizes of the wheel axle identification frames is taken as the length, and the difference W_d between the maximum ordinate Y_bot and the minimum ordinate Y_top of their vertex coordinates as the width, giving a target size of L_max × W_d.
In step B2, since a vehicle travelling through the camera area can pass at any position in the horizontal direction, while its position in the vertical direction is essentially fixed by the height of the vehicle's wheel axle, a wheel axle image may appear at some moment at every abscissa position of a target video frame in the abscissa direction shown in fig. 3, but only within certain ordinate ranges in the ordinate direction. The target position is the position of the adaptive wheel axle number calculation window to be determined, which must be able to contain a single wheel axle image; a horizontal position can therefore be preset arbitrarily in the horizontal (abscissa) direction, namely the preset horizontal position information, while the ordinate position in the vertical (ordinate) direction is determined from the ordinate range of the wheel axle identification frames, giving a target position where a single wheel axle image can appear. Optionally, the abscissa of the image centre point of the video frame is taken as the preset horizontal position information X_o, and the midpoint between the minimum ordinate Y_top and the maximum ordinate Y_bot, i.e. the longitudinal centre of the wheel axle images, as the vertical position information Y_o; this gives the target position (X_o, Y_o) with the preset horizontal position information as its abscissa and the vertical position information as its ordinate.
In step B3, the target size determined in step B1 is taken as the size of the adaptive wheel axle number calculation window, and the target position determined in step B2 as its position, yielding an adaptive wheel axle number calculation window that is located at a fixed image position of the target video and can contain a single wheel axle image. Illustratively, a schematic diagram of the adaptive wheel axle number calculation window is shown in fig. 4.
In the embodiment of the application, the target size and target position can be accurately determined from the sizes and vertex coordinates of the wheel axle identification frames and used as the size and position of the adaptive wheel axle number calculation window. This ensures that the determined window is an image detection area at a fixed position that can contain a single wheel axle image, improving the accuracy of the number of vehicle axles determined based on it.
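Steps B1 to B3 can be sketched as follows, assuming image pixel coordinates with the origin at the top-left (so the minimum ordinate Y_top is the upper edge); the function name and the four-vertex box representation are illustrative:

```python
def adaptive_window(boxes, frame_width):
    """Compute the adaptive wheel axle number calculation window (steps B1-B3).

    boxes: one wheel axle identification frame per detected axle, given as its
           four vertices [(x1, y1), (x2, y2), (x3, y3), (x4, y4)] in pixels.
    Returns (X_o, Y_o, length, width): centre position and target size.
    """
    # B1 (combined embodiment): length L_max from the frame extents in x,
    # width W_d from the overall ordinate range of all vertices.
    lengths = [max(x for x, _ in b) - min(x for x, _ in b) for b in boxes]
    L_max = max(lengths)
    ys = [y for b in boxes for _, y in b]
    Y_top, Y_bot = min(ys), max(ys)   # minimum / maximum ordinate
    W_d = Y_bot - Y_top
    # B2: horizontal centre preset to the image centre, vertical centre
    # at the midpoint of the ordinate range of the identification frames.
    X_o = frame_width / 2
    Y_o = (Y_top + Y_bot) / 2
    # B3: target position plus target size form the window.
    return (X_o, Y_o, L_max, W_d)
```

The returned tuple follows the L_max × W_d embodiment described above; swapping in W_max or L_d would implement the alternative embodiments of step B1.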
In S203, whether a wheel axle image appears in the adaptive wheel axle number calculation window is determined for each video frame, according to the frame's wheel axle detection result and the adaptive wheel axle number calculation window.
After the adaptive wheel axle number calculation window, fixed in position and able to contain a single wheel axle image, has been determined, whether a wheel axle image appears in the window is determined for each video frame from that frame's wheel axle detection result and the window. Specifically, when the wheel axle detection result of a video frame is indication information that no wheel axle image exists in the frame, it is directly determined that no wheel axle image appears in the window for that frame. When the result indicates that wheel axle images exist in the frame, it is further judged from the information of those images whether a wheel axle image falls within the adaptive wheel axle number calculation window: if so, a wheel axle image appears in the window for that frame; if not, none does. Specifically, a wheel axle image falling within the window means that the overlapping area of the wheel axle image and the window is larger than a preset area, which is usually larger than 1/2 of the area of the wheel axle image itself and at most equal to that area.
Since the adaptive wheel axle number calculation window is an image detection area in the target video frames that is fixed in position and can contain a single wheel axle image, it corresponds to an actual physical space area within the camera area that is fixed in position and large enough to contain an actual vehicle wheel axle (the designated space area for short). Whether a wheel axle image appears in the window of each video frame therefore actually reflects whether a wheel axle of the vehicle is passing through the designated space area at the moment corresponding to that frame.
In S204, the number of vehicle axles of the vehicle is determined according to whether a wheel axle image appears in the adaptive wheel axle number calculation window across the consecutive video frames.
Since whether each video frame has a wheel axle image within the adaptive wheel axle number calculation window reflects whether a wheel axle of the vehicle is passing through the designated space area at each moment, the number of wheel axles passing through the designated space area during the continuous period in which the vehicle crosses the camera area can be counted from the consecutive video frames, giving the number of vehicle axles. Illustratively, in the video frames near the start of the target video, no wheel axle image appears in the window, i.e. no wheel axle of the vehicle has yet passed through the designated space area. When a first video frame in which a wheel axle image appears in the window is detected, a wheel axle of the vehicle starts entering the designated space area at the corresponding moment, and the wheel axle count is increased by 1. After the first video frame, when a second video frame without a wheel axle image is detected, the first wheel axle of the vehicle has exited the designated space area at the corresponding moment. After the second video frame, when a third video frame with a wheel axle image is detected, the second wheel axle of the vehicle starts entering the designated space area at the corresponding moment, and the wheel axle count is again increased by 1. After the third video frame, when a fourth video frame without a wheel axle image is detected, the second wheel axle has moved out of the designated space area at the corresponding moment, and so on, until the wheel axle images of all video frames of the target video within the adaptive wheel axle number calculation window have been analysed; the resulting wheel axle count is the number of vehicle axles.
Optionally, in this embodiment of the application, a wheel axle count identifier represents whether a video frame has a wheel axle image within the adaptive wheel axle number calculation window; correspondingly, step S203 described above includes:
determining the wheel axle count identifier of each video frame according to its wheel axle detection result and the adaptive wheel axle number calculation window; the wheel axle count identifier comprises a first count identifier and a second count identifier, the first count identifier representing that the video frame has a single qualifying wheel axle image within the adaptive wheel axle number calculation window, and the second count identifier representing that it does not;
correspondingly, the step S204 includes:
determining the number of vehicle axles according to the change of the wheel axle count identifier between consecutive video frames.
In the embodiment of the application, whether a wheel axle image appears in the adaptive wheel axle number calculation window of a video frame is represented by a wheel axle count identifier, comprising a first count identifier and a second count identifier. For each video frame, if a single qualifying wheel axle image exists within the window, the frame's wheel axle count identifier is determined as the first count identifier; otherwise, as the second count identifier. The qualifying condition may be that the overlapping area of the single wheel axle image and the adaptive wheel axle number calculation window is larger than the preset area.
Then, in step S204, axles are counted according to the change of the wheel axle count identifier between consecutive video frames, determining the number of vehicle axles. Specifically, the wheel axle count identifiers of the consecutive video frames are read in turn, and each time the identifier changes from the second count identifier to the first count identifier, the vehicle's axle count (with initial value 0) is increased by 1; when the identifiers of all video frames have been read, the final count is the number of vehicle axles. For example, with the first count identifier denoted N1, the second denoted N0, and the starting state of the vehicle axle count denoted K0 (indicating that 0 wheel axle images have passed through the adaptive wheel axle number calculation window), the state transition diagram of fig. 5 may be driven by taking the count identifiers of consecutive video frames as input in time order: when a transition condition is satisfied, the next state is entered; otherwise the current state is kept. Once the identifiers of all video frames have been input, the axle number corresponding to the final state is the vehicle axle number of the current vehicle. Specifically, in fig. 5, at the initial moment no wheel axle of the vehicle has driven into the designated space area and no wheel axle image exists within the window; the count identifier at this stage is the second count identifier N0, no transition condition is satisfied, and the state remains at the initial state K0.
When a wheel axle of the vehicle enters the designated space area and the qualifying wheel axle image corresponding to the first wheel axle starts appearing in the window, the count identifier of the corresponding video frame is the first count identifier N1; the transition condition is satisfied and the state changes from K0 to K1, whose corresponding axle count is 1. For a period after this transition, the overlapping area of the first wheel axle's image and the window remains larger than the preset area, the count identifier stays at N1, no transition condition is satisfied, and the state remains K1. When the overlapping area becomes smaller than or equal to the preset area, the count identifier of the corresponding video frame is the second count identifier N0; the transition condition is satisfied and the state changes from K1 to K1', whose axle count is still 1, the new state merely representing that the first wheel axle is gradually leaving the designated space area.
For a period after the transition to K1', the first wheel axle gradually moves away while the second has not yet entered, or has entered only a small part of, the designated space area; the count identifier stays at N0, no transition condition is satisfied, and the state remains K1'. When the overlapping area of the second wheel axle's image and the window exceeds the preset area, the count identifier of the corresponding video frame becomes the first count identifier N1, the transition condition is satisfied, and the state changes from K1' to K2, whose axle count is 2. Thereafter, similarly, while subsequent frames keep the identifier N1 the state remains K2, and when an identifier N0 is detected the state changes to K2', and so on. When the count identifier of the last video frame has been input, the state at that point is the final state, which may be any of states K2 to K6 shown in fig. 5, and the corresponding number of vehicle axles can be determined from it. Specifically, the correspondence between the final states and the numbers of vehicle axles is shown in table 1 below.
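The counting behaviour of the state machine in fig. 5 reduces to counting N0-to-N1 transitions of the per-frame count identifier, since each such transition advances the state from K(n) or K(n)' to K(n+1). A sketch (the flag strings and function name are illustrative):

```python
def count_axles(flags):
    """Count vehicle axles from the per-frame wheel axle count identifiers.

    flags: chronologically ordered identifiers, 'N1' (axle image in the
    window) or 'N0' (none). Each N0 -> N1 transition corresponds to a new
    wheel axle entering the designated space area (K0 -> K1 -> K2 ...).
    """
    axles = 0      # initial state K0: zero axles counted
    prev = 'N0'    # before the video starts, no axle is in the window
    for flag in flags:
        if prev == 'N0' and flag == 'N1':  # an axle starts entering the window
            axles += 1
        prev = flag  # identical consecutive flags keep the current state
    return axles
```

Runs of identical flags leave the count unchanged, matching the "state transition condition not satisfied" branches of the diagram.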
Table 1 (reproduced only as an image in the original publication) lists the correspondence between the possible final states of fig. 5 and the number of vehicle axles.
In the embodiment of the application, whether a video frame has a wheel axle image within the adaptive wheel axle number calculation window is represented by the wheel axle count identifier, and the number of vehicle axles can be conveniently and accurately determined according to the change of the wheel axle count state between consecutive video frames.
Optionally, determining the wheel axle count identifier of each video frame according to its wheel axle detection result and the adaptive wheel axle number calculation window includes:
for each video frame, if the frame has a wheel axle identification frame overlapping the adaptive wheel axle number calculation window, and the ratio of the overlapping area to the area of the wheel axle identification frame is greater than a preset ratio, determining the frame's wheel axle count identifier as the first count identifier; otherwise, determining it as the second count identifier.
In the embodiment of the application, the information of a wheel axle image is represented by its wheel axle identification frame, the minimum circumscribed quadrilateral frame of the identified wheel axle image. For each video frame of the target video, if the frame has a wheel axle identification frame overlapping the adaptive wheel axle number calculation window, and the ratio of the overlapping area to the area of the wheel axle identification frame is greater than the preset ratio, it is determined that a qualifying wheel axle image exists within the window of that frame, and its wheel axle count identifier is recorded as the first count identifier. Conversely, if no wheel axle identification frame overlaps the window, or the ratio of the overlapping area to the area of the wheel axle identification frame is smaller than or equal to the preset ratio, it is determined that no qualifying wheel axle image exists within the window, and the wheel axle count identifier is recorded as the second count identifier. The preset ratio is greater than or equal to 1/2 and less than 1.
In the embodiment of the application, whether a video frame has a qualifying wheel axle image within the adaptive wheel axle number calculation window can be accurately determined from the overlap between the frame's wheel axle identification frames and the window, so that each frame's wheel axle count identifier, and hence the number of vehicle axles, can be accurately determined.
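The overlap test can be sketched as follows for axis-aligned rectangles; the `(x, y, w, h)` box representation and the flag strings are assumptions for illustration:

```python
def axle_count_flag(boxes, window, ratio_threshold=0.5):
    """Return 'N1' (first count identifier) if some wheel axle identification
    frame overlaps the adaptive window by more than ratio_threshold of the
    frame's own area, else 'N0' (second count identifier).

    boxes and window are axis-aligned rectangles (x, y, w, h) in pixels;
    ratio_threshold corresponds to the preset ratio (>= 1/2, < 1).
    """
    wx, wy, ww, wh = window
    for (x, y, w, h) in boxes:
        # Width and height of the intersection rectangle (0 if disjoint).
        ox = max(0, min(x + w, wx + ww) - max(x, wx))
        oy = max(0, min(y + h, wy + wh) - max(y, wy))
        if w * h > 0 and (ox * oy) / (w * h) > ratio_threshold:
            return 'N1'
    return 'N0'
```

Note the ratio is taken over the identification frame's own area, not the window's, matching the condition described above.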
Optionally, after the step S204, the method further includes:
determining the wheel axle arrangement number of each video frame according to its wheel axle detection result and a preset wheel axle detection window; the wheel axle detection window is a preset image detection area capable of containing at least 3 adjacently arranged wheel axle images in a video frame, and the wheel axle arrangement number represents the number of wheel axle images appearing in the preset wheel axle detection window in the video frame;
and determining the vehicle type and/or total mass limit of the vehicle according to the number of vehicle axles and the change of the wheel axle arrangement number between consecutive video frames.
In the embodiment of the present application, the wheel axle detection window is a preset image detection area capable of containing at least 3 adjacently arranged wheel axle images in a video frame, for example as shown in fig. 4. In one embodiment, the preset wheel axle detection window may be defined by acquiring, via a touch device, the coordinates of the user's clicks or slides on a picture of a video frame and drawing the window accordingly. In another embodiment, the preset wheel axle detection window may be derived from the adaptive wheel axle number calculation window: the length of the adaptive window multiplied by a preset multiple (e.g. 3 to 5 times) is taken as the length of the wheel axle detection window, the width of the adaptive window as its width, and the centre of the adaptive window as its centre.
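The second embodiment (deriving the wheel axle detection window from the adaptive window) can be sketched as follows; the `(center_x, center_y, length, width)` window representation is an assumption carried over for illustration:

```python
def axle_detection_window(adaptive_window, length_multiple=3):
    """Derive the preset wheel axle detection window from the adaptive
    wheel axle number calculation window: same centre and width, length
    scaled by a preset multiple (e.g. 3 to 5) so that at least three
    adjacently arranged axle images fit inside.
    """
    cx, cy, length, width = adaptive_window
    return (cx, cy, length * length_multiple, width)
```

Because the adaptive window fits a single axle image, a 3x to 5x length comfortably spans a double or triple axle group.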
The number of wheel axle images falling within the preset wheel axle detection window in each video frame, called the wheel axle arrangement number, can be determined from the frame's wheel axle detection result and the preset window; the wheel axle arrangement number reflects the actual arrangement of the vehicle's wheel axles.
After the wheel axle arrangement number of each video frame is determined, the actual arrangement of the vehicle's wheel axles can be accurately determined from the vehicle axle number obtained in step S204 and the change of the arrangement number across the chronologically ordered video frames; the vehicle type and/or the total mass limit of the vehicle is then determined from the axle number and this actual arrangement. Specifically, under the Chinese requirements for road freight vehicles, the axle group types and axle arrangements of common freight vehicles are fixed and comply with the relevant national vehicle requirements. The axle group types may include a single axle (an axle group with only one independent wheel axle), a double axle group (two adjacent wheel axles) and a triple axle group (three adjacent wheel axles). In the embodiment of the application, the axle group type and arrangement of the vehicle are determined from the wheel axle arrangement number of each video frame; combined with the vehicle axle number, this yields the complete axle group arrangement structure of the vehicle, from which the vehicle type and/or total mass limit is determined.
For example, if the vehicle axle number is 3 and the axle group arrangement is 1+2 (a single axle at the front and a double axle group at the rear), a preset vehicle standard such as "Limits of dimensions, axle load and masses for motor vehicles, trailers and combination vehicles" (GB 1589-2016) can be queried with this arrangement to determine that the vehicle is a truck with a corresponding total mass limit of 25 tonnes. The total mass limit is the maximum permitted gross mass of the loaded vehicle. Optionally, after the total mass limit is determined from the axle number and the axle group arrangement structure, the current actual load of the vehicle is acquired through a sensor; if the actual load exceeds the total mass limit, the vehicle is judged to be overloaded and an alarm message is sent.
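The query-and-compare step can be sketched as below. The table entries are illustrative values loosely following GB 1589-2016 and should not be read as the authoritative limits; only the 1+2 entry comes from the text above.

```python
def lookup_vehicle_spec(axle_group_arrangement):
    """Map an axle-group arrangement (group sizes front-to-rear) to a
    vehicle type and total mass limit in tonnes. Entries are illustrative."""
    table = {
        (1, 1): ("two-axle truck", 18),
        (1, 2): ("three-axle truck", 25),   # the 1+2 example from the text
        (1, 3): ("four-axle truck", 31),
    }
    return table.get(tuple(axle_group_arrangement))

def is_overloaded(actual_load, mass_limit):
    """Compare a sensor-measured gross mass against the looked-up limit."""
    return actual_load > mass_limit
```

A 1+2 vehicle weighed at 27.4 t would thus trigger the overload alarm against its 25 t limit.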
In the embodiment of the application, the preset wheel axle detection window allows the wheel axle arrangement number of each video frame to be determined accurately; together with the vehicle axle number, the actual axle group arrangement structure of the vehicle can then be determined accurately, so that the corresponding vehicle type and/or total mass limit can be queried reliably, providing richer vehicle detection information.
Optionally, the determining the vehicle type and/or the total mass limit value of the vehicle according to the number of vehicle axles of the vehicle and the change of the number of wheel axle arrangements between the video frames of each continuous frame includes:
counting the maximum value of the wheel axle arrangement quantity in all video frames of the target video, and determining a corresponding vehicle identification sub-process according to the maximum value;
and identifying the change of the wheel axle arrangement quantity among the video frames of each frame through the vehicle identification sub-process according to the number of the vehicle axles of the vehicle, and determining the vehicle type and/or the total mass limit value of the vehicle.
In the embodiment of the application, for the video frames obtained by shooting a vehicle passing through the camera shooting area, the preset wheel axle detection window is set so that at most 3 qualifying wheel axle images can appear in it simultaneously (an image qualifies when the area of the wheel axle image falling within the detection window exceeds a preset area). The wheel axle arrangement number of each video frame is therefore one of the following four cases:
(1) no qualifying wheel axle image appears in the preset wheel axle detection window, i.e. the arrangement number is 0; the frame is marked W0;
(2) one qualifying wheel axle image appears in the window, i.e. the arrangement number is 1; the frame is marked W1;
(3) two qualifying wheel axle images appear in the window, i.e. the arrangement number is 2; the frame is marked W2;
(4) three qualifying wheel axle images appear in the window, i.e. the arrangement number is 3; the frame is marked W3.
Schematic diagrams of the above four cases are shown in fig. 6.
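The four-way classification can be sketched as follows; boxes and the window are assumed to be axis-aligned (x1, y1, x2, y2) rectangles, and `min_area` stands in for the unspecified preset area:

```python
def frame_marker(axle_boxes, window, min_area=50.0):
    """Return 'W0'..'W3' for one frame: the count of axle boxes whose area
    inside the detection window exceeds min_area (capped at 3, since the
    window holds at most 3 qualifying axle images)."""
    wx1, wy1, wx2, wy2 = window

    def area_inside(box):
        x1, y1 = max(box[0], wx1), max(box[1], wy1)
        x2, y2 = min(box[2], wx2), min(box[3], wy2)
        return max(0.0, x2 - x1) * max(0.0, y2 - y1)

    n = sum(1 for b in axle_boxes if area_inside(b) > min_area)
    return "W%d" % min(n, 3)
```

A frame with two axle boxes well inside the window is thus marked W2, matching case (3) above.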
In the embodiment of the application, because the target video records the whole process of the vehicle from entering to leaving the camera shooting area, the maximum wheel axle arrangement number over all video frames of the target video is at least 1 — at least one frame marked W1 must exist — and may also be 2 or 3. This maximum essentially indicates the largest number of adjacently arranged wheel axles on the vehicle, and thereby determines the corresponding vehicle identification sub-process. Matching the possible maxima in the target video, the sub-processes are: vehicle identification sub-process 1 for a maximum of 1, sub-process 2 for a maximum of 2, and sub-process 3 for a maximum of 3.
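Selecting the sub-process then reduces to scanning the markers of all frames for the largest arrangement number (a trivial sketch; marker strings follow the W0–W3 convention defined above):

```python
def select_sub_process(frame_markers):
    """Return 1, 2 or 3: the maximum arrangement number over all frames of
    the target video, which picks vehicle identification sub-process 1, 2
    or 3. At least one 'W1' frame exists, so the result is at least 1."""
    return max(int(m[1]) for m in frame_markers)
```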
After the corresponding vehicle identification sub-process is determined from the maximum wheel axle arrangement number of the target video, the actual axle group arrangement of the vehicle can be determined from the vehicle axle number obtained in step S204 and the sub-process, so as to determine the vehicle type and/or total mass limit. Specifically, if the maximum arrangement number is 1, every wheel axle of the vehicle is a separately arranged single axle — the vehicle has only single axle groups and no double or triple axle groups — so all vehicle types containing only single axles, together with their total mass limits, can be listed, yielding table 2 below. Vehicle identification sub-process 1 (for a maximum of 1) may then simply be: query table 2 with the vehicle axle number to obtain the corresponding vehicle type and/or total mass limit.
Table 2:
(Table 2 is reproduced as an image in the original publication.)
Specifically, if the maximum arrangement number is 2, the vehicle has a double axle group. All possible state transition processes of the wheel axle detection window for a vehicle with 2 adjacently arranged wheel axles can then be enumerated, as shown in fig. 7. In fig. 7, each circle contains a state identifier of the vehicle's wheel axle count: S0 is the initial state, and in the other identifiers the leading letter S denotes a state, the following digit is the accumulated wheel axle count, and a trailing letter distinguishes different wheel axle arrangements with the same count. When the maximum arrangement number is 2, each frame of the target video carries one of the three markers W0, W1 and W2. Taking the frame markers as inputs in chronological order, state transitions are performed according to the state identifiers and transition conditions of fig. 7; once the markers of all frames have been consumed, a final state identifier is obtained. The correspondence between the final state identifier and the vehicle axle number, vehicle type and total mass limit is shown in table 3.
Table 3:
(Table 3 is reproduced as an image in the original publication.)
States that appear in fig. 7 but are absent from table 3 are unreasonable states that cannot occur as the final state and have been culled. In the embodiment of the present application, when the maximum wheel axle arrangement number of the target video is 2, the corresponding vehicle identification sub-process 2 is: starting from state S0, take the frame markers (W0, W1 or W2) of the chronologically ordered video frames as inputs in turn, obtain the final state identifier according to fig. 7, and query table 3 to obtain the vehicle type and total mass limit of the vehicle.
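The sub-process can be sketched as a generic state-machine driver. Since the full transition diagram of fig. 7 is given only as a figure, the transition fragment below is hypothetical and merely illustrates the mechanism; state names follow the S-count-variant convention of the description.

```python
def run_identification(transitions, frame_markers, start="S0"):
    """Feed the per-frame markers, in time order, into a transition table
    {(state, marker): next_state}. Missing entries act as self-loops.
    The final state would then be looked up in a table such as table 3."""
    state = start
    for marker in frame_markers:
        state = transitions.get((state, marker), state)
    return state

# Hypothetical fragment of a fig. 7-style table: a single axle passes the
# window, then a double axle group follows.
DEMO_TRANSITIONS = {
    ("S0", "W1"): "S1",    # first axle enters the detection window
    ("S1", "W0"): "S1a",   # it leaves: one single axle accumulated
    ("S1a", "W1"): "S2",   # a second axle enters
    ("S2", "W2"): "S3b",   # a third axle enters while the second is inside
}
```

Driving this fragment with the marker sequence W0, W1, W0, W1, W2, W1, W0 ends in state S3b, which a table-3-style lookup would map to a 3-axle, 1+2 vehicle.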
Specifically, if the maximum arrangement number is 3, the vehicle has a triple axle group. All possible state transition processes of the wheel axle detection window for a vehicle with 3 adjacently arranged wheel axles can then be enumerated, as shown in fig. 8. In fig. 8, each circle contains a state identifier of the vehicle's wheel axle count: S0 is the initial state, and in the other identifiers the leading letter S denotes a state, the following digit is the accumulated wheel axle count, and a trailing letter distinguishes different wheel axle arrangements with the same count. When the maximum arrangement number is 3, each frame of the target video carries one of the four markers W0, W1, W2 and W3. Taking the frame markers as inputs in chronological order, state transitions are performed according to the state identifiers and transition conditions of fig. 8; once the markers of all frames have been consumed, a final state identifier is obtained. The correspondence between the final state identifier and the vehicle axle number, vehicle type and total mass limit is shown in table 4:
table 4:
(Table 4 is reproduced as an image in the original publication.)
States that appear in fig. 8 but are absent from table 4 are unreasonable states that cannot occur as the final state and have been culled. In the embodiment of the present application, when the maximum wheel axle arrangement number of the target video is 3, the corresponding vehicle identification sub-process 3 is: starting from state S0, take the frame markers (W0, W1, W2 or W3) of the chronologically ordered video frames as inputs in turn, obtain the final state identifier according to fig. 8, and query table 4 to obtain the vehicle type and total mass limit of the vehicle.
In the embodiment of the application, the corresponding vehicle identification sub-process can be selected according to the maximum wheel axle arrangement number in the target video, keeping the determination procedures for the vehicle types and total mass limits of different kinds of vehicles separate; the vehicle type and/or total mass limit can thus be determined more efficiently and accurately.
In the embodiment of the application, the target video covers the time period from the vehicle entering the camera shooting area to the vehicle leaving it, and the wheel axle detection results contain the information of the wheel axle images; the wheel axle detection results of all video frames of the target video therefore completely and continuously represent the image information of every wheel axle for the whole time the vehicle passes through the camera area. From the information of the wheel axle images in all detection results, an adaptive wheel axle number calculation window is determined: an estimated image detection area at a target position that can contain a single wheel axle image. Because this window is a fixed image detection area in which a wheel axle image is likely to appear, it corresponds to a fixed actual spatial region (hereinafter the designated spatial region) of the camera shooting area through which the vehicle's wheel axles are likely to pass. Determining, from the wheel axle detection result of each video frame and the adaptive calculation window, whether a wheel axle image appears in the window therefore reflects whether a wheel axle of the vehicle is passing through the designated spatial region (i.e. the actual physical space) at each point in time. Based on temporal continuity and the fixed spatial position of the window, the occurrence of wheel axle images in the adaptive calculation window across consecutive video frames reflects the number of wheel axles that pass through the designated spatial region, so the vehicle axle number can be obtained accurately.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two:
fig. 9 is a schematic structural diagram of a vehicle axle number determining apparatus according to an embodiment of the present application, and for convenience of description, only portions related to the embodiment of the present application are shown:
the vehicle axle number determination device includes: an axle detection result determining unit 91, an adaptive axle number calculation window determining unit 92, an axle image occurrence determining unit 93, and a vehicle axle number determining unit 94. Wherein:
the axle detection result determining unit 91 is configured to determine an axle detection result of each frame of video frame in a target video, where the target video is a video corresponding to a time period from when a vehicle enters a camera shooting area to when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle.
The adaptive axle number calculation window determining unit 92 is configured to determine an adaptive axle number calculation window of the target video according to information of each axle image; the adaptive axle number calculation window is used for representing: and (4) estimating an image detection area which is located at the target position and can contain a single wheel axle image.
And an axle image occurrence determination unit 93, configured to determine, according to the axle detection result of each frame of the video frame and the adaptive axle number calculation window, an occurrence of the axle image in the adaptive axle number calculation window of each frame of the video frame.
And a vehicle axle number determining unit 94, configured to determine the vehicle axle number of the vehicle according to the occurrence of the wheel axle image in the adaptive wheel axle number calculation window of each consecutive frame of the video frames.
Optionally, the axle detection result determining unit 91 includes a first determining module and a second determining module:
the first determining module is used for acquiring video frames of each frame obtained by shooting the side face of the vehicle in a time period from a first moment to a second moment and determining a wheel axle detection result of the video frames; the first moment is the moment when the vehicle is detected to enter a camera shooting area, and the second moment is the moment when the vehicle is detected to leave the camera shooting area;
a second determining module, configured to determine the wheel axle detection results of the video frames determined in the time period from the first time to the second time as: the wheel axle detection results of the video frames in the target video.
Optionally, the axle detection result determining unit 91 is specifically configured to determine an axle detection result of each frame of video frame in the target video according to each frame of video frame in the target video and the pre-trained deep learning model.
Optionally, the information of the wheel axle image includes a size and vertex coordinates of a wheel axle identification box, where the wheel axle identification box is a minimum circumscribed quadrilateral box of the wheel axle image, and correspondingly, the adaptive wheel axle number calculation window determining unit 92 includes: the device comprises a target size determining module, a target position determining module and an adaptive axle number calculating window determining module, wherein the target size determining module comprises:
the target size determining module is used for determining a target size according to the size and/or the vertex coordinates of each wheel axle identification frame;
the target position determining module is used for determining a target position according to preset horizontal position information and the vertex coordinates of the wheel axle identification frame;
and the self-adaptive axle number calculation window determining module is used for taking the target size as the size of the self-adaptive axle number calculation window and taking the target position as the position of the self-adaptive axle number calculation window to obtain the self-adaptive axle number calculation window of the target video.
Optionally, the occurrence of the wheel axle image is represented by a wheel axle count identifier indicating whether the video frame has the wheel axle image in the adaptive wheel axle number calculation window. Correspondingly, the wheel axle image occurrence determination unit 93 is specifically configured to determine the wheel axle count identifier of each video frame according to the wheel axle detection result of that frame and the adaptive wheel axle number calculation window. The wheel axle count identifier comprises a first count identifier and a second count identifier: the first count identifier indicates that the video frame has a single qualifying wheel axle image in the adaptive wheel axle number calculation window, and the second count identifier indicates that it does not;
correspondingly, the vehicle axle number determining unit 94 is specifically configured to determine the vehicle axle number of the vehicle according to the change of the axle count identifier between the video frames of consecutive frames.
Optionally, the determining, by the axle image occurrence determining unit 93, the axle counting identifier of each frame of the video frame according to the axle detection result of each frame of the video frame and the adaptive axle number calculation window respectively includes: for each frame of video frame, if the video frame has the axle identification frame overlapped with the self-adaptive axle number calculation window, and the ratio of the overlapped area of the axle identification frame and the self-adaptive axle number calculation window to the area of the axle identification frame is greater than a preset ratio, determining that the axle count identifier of the video frame is a first count identifier; otherwise, determining that the wheel axle count identifier of the video frame is a second count identifier.
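The logic of unit 93, together with the axle counting of unit 94, can be sketched as follows. The (x1, y1, x2, y2) box layout, the 0.5 default ratio, and the reading of "change of the axle count identifier" as counting 0-to-1 transitions are all assumptions for illustration.

```python
def axle_count_flags(per_frame_boxes, calc_window, preset_ratio=0.5):
    """Per-frame count identifier (True = first, False = second): True when
    some axle identification box overlaps the adaptive calculation window
    by more than preset_ratio of the box's own area."""
    wx1, wy1, wx2, wy2 = calc_window

    def overlap_ratio(box):
        x1, y1 = max(box[0], wx1), max(box[1], wy1)
        x2, y2 = min(box[2], wx2), min(box[3], wy2)
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = (box[2] - box[0]) * (box[3] - box[1])
        return inter / area if area > 0 else 0.0

    return [any(overlap_ratio(b) > preset_ratio for b in boxes)
            for boxes in per_frame_boxes]

def count_vehicle_axles(flags):
    """Count 0-to-1 transitions of the identifier across consecutive
    frames: each rising edge marks one axle entering the window."""
    return sum(1 for prev, cur in zip([False] + flags, flags)
               if cur and not prev)
```

For a sequence in which an axle appears in the window twice, two rising edges are counted, giving a vehicle axle number of 2.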
Optionally, the vehicle axle number determination device further includes:
the detection unit is used for determining the wheel axle arrangement number of each frame of the video frame according to the wheel axle detection result of each frame of the video frame and a preset wheel axle detection window; the wheel axle detection window is a preset image detection area capable of containing at least 3 wheel axle images which are adjacently arranged in a video frame; the wheel axle arrangement number is used for representing the number of wheel axle images appearing in the preset wheel axle detection window in the video frame; and determining the vehicle type and/or the total mass limit value of the vehicle according to the number of vehicle axles of the vehicle and the change of the arrangement number of the wheel axles among the video frames of each continuous frame.
Optionally, in the detecting unit, the determining a vehicle type and/or a total mass limit of the vehicle according to the number of vehicle axles of the vehicle and the change of the number of wheel axle arrangements between consecutive frames of the video frames includes:
counting the maximum value of the wheel axle arrangement quantity in all video frames of the target video, and determining a corresponding vehicle identification sub-process according to the maximum value;
and identifying the change of the wheel axle arrangement quantity among the video frames of each frame through the vehicle identification sub-process according to the number of the vehicle axles of the vehicle, and determining the vehicle type and/or the total mass limit value of the vehicle.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Example three:
fig. 10 is a schematic diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 10, the electronic apparatus 10 of this embodiment includes: a processor 100, a memory 101 and a computer program 102, such as a vehicle axle number determining program, stored in said memory 101 and operable on said processor 100. The processor 100, when executing the computer program 102, implements the steps in the various vehicle axle number determination method embodiments described above, such as the steps S201 to S204 shown in fig. 2. Alternatively, the processor 100, when executing the computer program 102, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the units 91 to 94 shown in fig. 9.
Illustratively, the computer program 102 may be partitioned into one or more modules/units that are stored in the memory 101 and executed by the processor 100 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 102 in the electronic device 10. For example, the computer program 102 may be divided into an axle detection result determining unit, an adaptive axle number calculating window determining unit, an axle image occurrence determining unit, and a vehicle axle number determining unit, and the specific functions of each unit are as follows:
the wheel axle detection result determining unit is used for determining a wheel axle detection result of each frame of video frame in a target video, wherein the target video is a video corresponding to a time period from the time when a vehicle enters a camera shooting area to the time when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle.
The self-adaptive wheel axle number calculation window determining unit is used for determining a self-adaptive wheel axle number calculation window of the target video according to the information of each wheel axle image; the adaptive axle number calculation window is used for representing: and (4) estimating an image detection area which is located at the target position and can contain a single wheel axle image.
And the wheel axle image occurrence condition determining unit is used for determining the wheel axle image occurrence condition of each frame of the video frame in the self-adaptive wheel axle number calculating window according to the wheel axle detection result of each frame of the video frame and the self-adaptive wheel axle number calculating window.
And the vehicle axle number determining unit is used for determining the vehicle axle number of the vehicle according to the occurrence of the wheel axle images in the adaptive wheel axle number calculation window of the video frames of each consecutive frame.
The electronic device 10 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The electronic device may include, but is not limited to, a processor 100, a memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of an electronic device 10 and does not constitute a limitation of the electronic device 10 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the electronic device may also include input-output devices, network access devices, buses, etc.
The Processor 100 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 101 may be an internal storage unit of the electronic device 10, such as a hard disk or a memory of the electronic device 10. The memory 101 may also be an external storage device of the electronic device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the electronic device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the electronic device 10. The memory 101 is used for storing the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. A vehicle axle number determination method, characterized by comprising:
determining a wheel axle detection result of each frame of video frame in a target video, wherein the target video is a video corresponding to a time period from the vehicle entering a camera shooting area to the vehicle leaving the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle;
determining a self-adaptive wheel axle number calculation window of the target video according to the information of each wheel axle image; the adaptive axle number calculation window is used for representing: an estimated image detection area which is located at the target position and can contain a single wheel axle image;
determining, for each frame of the video frame, the condition that the wheel axle image appears in the self-adaptive wheel axle number calculation window according to the wheel axle detection result of the video frame and the self-adaptive wheel axle number calculation window;
and determining the number of vehicle axles of the vehicle according to the condition that the axle images appear in the self-adaptive axle number calculation window of the video frames of each continuous frame.
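The counting step of claim 1 can be sketched as follows (an illustrative reading, not part of the claims: one boolean per video frame records whether a single wheel axle image appears in the self-adaptive axle number calculation window, and each false-to-true transition between consecutive frames marks one axle passing the window):

```python
def count_axles(window_flags):
    """Count axles as the number of False -> True transitions in the
    per-frame sequence of "axle image appears in the window" flags."""
    count = 0
    prev = False
    for cur in window_flags:
        if cur and not prev:  # a new axle has entered the window
            count += 1
        prev = cur
    return count

# e.g. a 3-axle vehicle passing the window frame by frame:
count_axles([False, True, True, False, True, False, False, True, True])  # -> 3
```

Because the window is sized to hold only a single axle image, the number of transitions equals the number of axles even when axles are closely spaced.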
2. The vehicle axle number determination method according to claim 1, wherein the determining of the wheel axle detection result of each frame of video frames in the target video comprises:
acquiring each frame of video frame obtained by shooting the side face of the vehicle in a time period from a first moment to a second moment, and determining a wheel axle detection result of the video frame; the first moment is the moment when the vehicle is detected to enter a camera shooting area, and the second moment is the moment when the vehicle is detected to leave the camera shooting area;
and determining the wheel axle detection result of each frame of video frame from the first moment to the second moment as the wheel axle detection result of each frame of video frame in the target video.
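As a sketch of claim 2 (hypothetical helper names; it assumes each captured frame carries a timestamp, which the claim does not specify), selecting the target video reduces to keeping the frames captured between the first and second moments:

```python
def target_video_frames(frames, timestamps, t_enter, t_leave):
    """Keep the frames captured from the first moment (vehicle enters
    the camera shooting area, t_enter) to the second moment (vehicle
    leaves it, t_leave), inclusive."""
    return [f for f, t in zip(frames, timestamps)
            if t_enter <= t <= t_leave]
```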
3. The vehicle axle number determination method according to claim 1, wherein the determining of the wheel axle detection result of each frame of video frames in the target video comprises:
and determining the wheel axle detection result of each frame of video frame in the target video according to each frame of video frame in the target video and the pre-trained deep learning model.
4. The vehicle axle number determination method according to claim 1, wherein the information of the axle image includes a size and vertex coordinates of an axle identification box, the axle identification box is a minimum circumscribed quadrilateral box of the axle image, and correspondingly, the determining an adaptive axle number calculation window of the target video according to the information of each axle image includes:
determining a target size according to the size and/or vertex coordinates of each wheel axle identification frame;
determining a target position according to preset horizontal position information and the vertex coordinates of the wheel axle identification frame;
and taking the target size as the size of the self-adaptive axle number calculation window, and taking the target position as the position of the self-adaptive axle number calculation window to obtain the self-adaptive axle number calculation window of the target video.
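One possible reading of claim 4 (the claim fixes only that the size is derived from the identification boxes and that the horizontal position is preset; taking medians is an assumption made here for illustration):

```python
from statistics import median

def adaptive_window(axle_boxes, x_target):
    """Build the self-adaptive axle number calculation window from the
    wheel axle identification boxes, each given as (x, y, w, h).
    Size: median box width/height; position: the preset horizontal
    coordinate x_target plus the median vertical coordinate."""
    target_w = median(w for (_, _, w, _) in axle_boxes)
    target_h = median(h for (_, _, _, h) in axle_boxes)
    target_y = median(y for (_, y, _, _) in axle_boxes)
    return (x_target, target_y, target_w, target_h)
```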
5. The vehicle axle number determination method according to claim 1, wherein a wheel axle count identifier is used for representing whether the wheel axle image appears in the self-adaptive wheel axle number calculation window of a video frame, and correspondingly, the determining, according to the wheel axle detection result of each frame of the video frame and the self-adaptive wheel axle number calculation window respectively, the condition that the wheel axle image appears in the self-adaptive wheel axle number calculation window comprises:
determining the wheel axle counting identification of each video frame according to the wheel axle detection result of each video frame and a self-adaptive wheel axle number calculation window; the axle counting mark comprises a first counting mark and a second counting mark, the first counting mark represents that the video frame has a single axle image which meets the condition in the self-adaptive axle number calculation window, and the second counting mark represents that the video frame does not have the single axle image which meets the condition in the self-adaptive axle number calculation window;
correspondingly, the determining the number of vehicle axles of the vehicle according to the situation that the axle image appears in the adaptive axle number calculation window of each continuous frame of the video frame comprises:
and determining the number of vehicle axles of the vehicle according to the change of the axle counting identifiers between the video frames of the continuous frames.
6. The vehicle axle number determination method according to claim 5, wherein the information of the axle image includes information of an axle identification box, the axle identification box is a minimum circumscribed quadrangle box of the axle image, and correspondingly, the determining the axle count identifier of each frame of the video frame according to the axle detection result and the adaptive axle number calculation window of each frame of the video frame respectively comprises:
for each frame of video frame, if the video frame has the axle identification frame overlapped with the self-adaptive axle number calculation window, and the ratio of the overlapped area of the axle identification frame and the self-adaptive axle number calculation window to the area of the axle identification frame is greater than a preset ratio, determining that the axle count identifier of the video frame is a first count identifier; otherwise, determining that the wheel axle count identifier of the video frame is a second count identifier.
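The overlap test of claim 6 can be written directly from the claim language (boxes and window as axis-aligned (x, y, w, h) rectangles; the 0.5 default for the preset ratio is an assumption, the claim only requires a preset ratio):

```python
def axle_count_flag(axle_boxes, window, min_ratio=0.5):
    """Return True (first count identifier) when some wheel axle
    identification box overlaps the calculation window by more than
    min_ratio of the box's own area; otherwise False (second
    count identifier)."""
    wx, wy, ww, wh = window
    for (bx, by, bw, bh) in axle_boxes:
        inter_w = max(0, min(bx + bw, wx + ww) - max(bx, wx))
        inter_h = max(0, min(by + bh, wy + wh) - max(by, wy))
        if inter_w * inter_h > min_ratio * (bw * bh):
            return True
    return False
```

Normalizing by the axle box's area (rather than intersection-over-union) means a small axle box deep inside a larger window still triggers the first count identifier, which matches the claim wording.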
7. The vehicle axle number determination method according to any one of claims 1 to 6, further comprising, after the determining the vehicle axle number of the vehicle:
determining the wheel axle arrangement number of each video frame according to the wheel axle detection result of each video frame and a preset wheel axle detection window; the wheel axle detection window is a preset image detection area capable of containing at least 3 wheel axle images which are adjacently arranged in a video frame; the wheel axle arrangement number is used for representing the number of wheel axle images appearing in the preset wheel axle detection window in the video frame;
and determining the vehicle type and/or the total mass limit value of the vehicle according to the number of vehicle axles of the vehicle and the change of the arrangement number of the wheel axles among the video frames of each continuous frame.
8. The vehicle axle number determination method according to claim 7, wherein determining the vehicle type and/or the total mass limit of the vehicle based on the vehicle axle number of the vehicle and the change in the number of wheel axle arrangements between the video frames of successive frames comprises:
counting the maximum value of the wheel axle arrangement quantity in all video frames of the target video, and determining a corresponding vehicle identification sub-process according to the maximum value;
and identifying the change of the wheel axle arrangement quantity among the video frames of each frame through the vehicle identification sub-process according to the number of the vehicle axles of the vehicle, and determining the vehicle type and/or the total mass limit value of the vehicle.
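Claims 7 and 8 add a second, wider preset window and track how many axle images it contains per frame; a minimal sketch, assuming axis-aligned (x, y, w, h) boxes and a purely horizontal containment test (the claims do not fix the containment test):

```python
def arrangement_numbers(frames_boxes, detect_window):
    """Per-frame count of axle boxes whose horizontal extent lies
    inside the preset wheel axle detection window."""
    wx, _, ww, _ = detect_window
    return [sum(1 for (bx, _, bw, _) in boxes
                if bx >= wx and bx + bw <= wx + ww)
            for boxes in frames_boxes]

def peak_arrangement(frames_boxes, detect_window):
    """Maximum arrangement number over the whole video; per claim 8
    this maximum selects the vehicle identification sub-process."""
    counts = arrangement_numbers(frames_boxes, detect_window)
    return max(counts, default=0)
```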
9. A vehicle axle number determination device, characterized by comprising:
the wheel axle detection result determining unit is used for determining a wheel axle detection result of each frame of video frame in a target video, wherein the target video is a video corresponding to a time period from the time when a vehicle enters a camera shooting area to the time when the vehicle leaves the camera shooting area; the wheel axle detection result comprises information of a wheel axle image; the wheel axle image is an image corresponding to a single wheel axle of the vehicle;
the self-adaptive wheel axle number calculation window determining unit is used for determining a self-adaptive wheel axle number calculation window of the target video according to the information of each wheel axle image; the adaptive axle number calculation window is used for representing: an estimated image detection area which is located at the target position and can contain a single wheel axle image;
the wheel axle image occurrence condition determining unit is used for determining the wheel axle image occurrence condition of each frame of the video frame in the self-adaptive wheel axle number calculating window according to the wheel axle detection result of each frame of the video frame and the self-adaptive wheel axle number calculating window;
and the vehicle axle number determining unit is used for determining the number of vehicle axles of the vehicle according to the condition that the wheel axle image appears in the self-adaptive wheel axle number calculation window of each continuous frame of the video frame.
10. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, causes the electronic device to carry out the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes an electronic device to carry out the steps of the method according to any one of claims 1 to 8.
CN202110138962.3A 2021-02-01 2021-02-01 Vehicle axle number determining method and device, electronic equipment and storage medium Pending CN112836631A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110138962.3A CN112836631A (en) 2021-02-01 2021-02-01 Vehicle axle number determining method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112836631A true CN112836631A (en) 2021-05-25

Family

ID=75931474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110138962.3A Pending CN112836631A (en) 2021-02-01 2021-02-01 Vehicle axle number determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112836631A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537238A (en) * 2021-07-05 2021-10-22 上海闪马智能科技有限公司 Information processing method and image recognition device
CN114332827A (en) * 2022-03-10 2022-04-12 浙江大华技术股份有限公司 Vehicle identification method and device, electronic equipment and storage medium
CN114332681A (en) * 2021-12-08 2022-04-12 上海高德威智能交通***有限公司 Vehicle identification method and device
CN117953460A (en) * 2024-03-26 2024-04-30 江西众加利高科技股份有限公司 Vehicle wheel axle identification method and device based on deep learning

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425764A (en) * 2013-07-30 2013-12-04 广东工业大学 Vehicle matching method based on videos
US20180025249A1 (en) * 2016-07-25 2018-01-25 Mitsubishi Electric Research Laboratories, Inc. Object Detection System and Object Detection Method
CN109949331A (en) * 2019-04-17 2019-06-28 合肥泰禾光电科技股份有限公司 Container edge detection method and device
CN111292432A (en) * 2020-01-14 2020-06-16 北京巨视科技有限公司 Vehicle charging type distinguishing method and device based on vehicle type recognition and wheel axle detection
CN111860201A (en) * 2020-06-28 2020-10-30 中铁大桥科学研究院有限公司 Image recognition and bridge monitoring combined ramp heavy vehicle recognition method and system
CN111950394A (en) * 2020-07-24 2020-11-17 中南大学 Method and device for predicting lane change of vehicle and computer storage medium
CN112258536A (en) * 2020-10-26 2021-01-22 大连理工大学 Integrated positioning and dividing method for corpus callosum and lumbricus cerebellum


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAMBAM KUMAR et al.: "Development of an Adaptive Approach for Identification of Targets (Match Box, Pocket Diary and Cigarette Box) under the Cloth with MMW Imaging System", Progress In Electromagnetics Research B, 31 December 2017, pages 1-19 *
ZHANG QIN et al.: "Method for Extracting Seedling Row Centerlines Based on YOLOv3 Object Detection", Transactions of the Chinese Society for Agricultural Machinery, 29 June 2020, pages 34-43 *


Similar Documents

Publication Publication Date Title
CN112836631A (en) Vehicle axle number determining method and device, electronic equipment and storage medium
CN108986465B (en) Method, system and terminal equipment for detecting traffic flow
CN112800860B (en) High-speed object scattering detection method and system with coordination of event camera and visual camera
CN108647587B (en) People counting method, device, terminal and storage medium
CN111383460B (en) Vehicle state discrimination method and device and computer storage medium
CN111627215A (en) Video image identification method based on artificial intelligence and related equipment
CN111144337B (en) Fire detection method and device and terminal equipment
CN112132071A (en) Processing method, device and equipment for identifying traffic jam and storage medium
CN113505638B (en) Method and device for monitoring traffic flow and computer readable storage medium
CN112348686B (en) Claim settlement picture acquisition method and device and communication equipment
CN112434657A (en) Drift carrier detection method, device, program, and computer-readable medium
CN112487884A (en) Traffic violation behavior detection method and device and computer readable storage medium
CN111369317B (en) Order generation method, order generation device, electronic equipment and storage medium
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN110599520B (en) Open field experiment data analysis method, system and terminal equipment
WO2024067732A1 (en) Neural network model training method, vehicle view generation method, and vehicle
CN113947744A (en) Fire image detection method, system, equipment and storage medium based on video
CN112153320B (en) Method and device for measuring size of article, electronic equipment and storage medium
CN112308848A (en) Method and system for identifying state of baffle plate of scrap steel truck, electronic equipment and medium
CN114724119B (en) Lane line extraction method, lane line detection device, and storage medium
CN115965636A (en) Vehicle side view generating method and device and terminal equipment
CN117058912A (en) Method and device for detecting abnormal parking of inspection vehicle, storage medium and electronic equipment
CN117041484A (en) People stream dense area monitoring method and system based on Internet of things
Kini Real time moving vehicle congestion detection and tracking using OpenCV
CN114267076B (en) Image identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination