CN114882451A - Image processing method, device, equipment and medium - Google Patents

Image processing method, device, equipment and medium Download PDF

Info

Publication number
CN114882451A
CN114882451A CN202210517329.XA CN202210517329A CN114882451A CN 114882451 A CN114882451 A CN 114882451A CN 202210517329 A CN202210517329 A CN 202210517329A CN 114882451 A CN114882451 A CN 114882451A
Authority
CN
China
Prior art keywords
value
channel
image
pixel
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210517329.XA
Other languages
Chinese (zh)
Inventor
戚璇月瞳
林骏
王亚运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210517329.XA priority Critical patent/CN114882451A/en
Publication of CN114882451A publication Critical patent/CN114882451A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/36Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Nonlinear Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, device, equipment and medium, which are used for solving the problem that whether the high beam of a vehicle is started or not accurately under complex conditions such as rainy days at night and the like. In the embodiment of the invention, based on the first image acquired by the image acquisition equipment, second-order Laplace filtering is performed on the pixel value corresponding to each channel of each pixel point in the first image containing the vehicle, the reflection component operation value of each channel corresponding to each pixel point is acquired, and the filtered pixel value corresponding to each channel of each pixel point is determined according to the reflection component operation value of each channel and the pixel value of each channel, so that the interference of reflected light in the first image can be removed, and whether the high beam of the vehicle in the first image after the pixel value filtering is on or not can be accurately identified.

Description

Image processing method, device, equipment and medium
Technical Field
The present invention relates to the field of intelligent traffic technologies, and in particular, to an image processing method, apparatus, device, and medium.
Background
With the rapid development of science and technology, vehicles bring convenience to life of people and simultaneously cause a series of potential safety hazards, wherein according to traffic accident data analysis of the past year, in a traffic accident at night, because the accident ratio caused by the opening of a vehicle high beam is large, the problems of instant blindness, decreased perception and the like caused by the opening of a high beam lamp at night are caused to drivers, and the potential safety hazards become important potential hazards of traffic accidents at night. The detection of high beam lights for night vehicles has become an indispensable part of intelligent traffic control systems.
The prior art mainly calculates a binarization connected domain of an area where a vehicle light is located in a collected image containing the vehicle, and determines whether a high beam of the vehicle is turned on. The method is only suitable for high beam detection in a simple scene, diffuse emission can exist in complex environments such as rainy days at night, and reflected light can cause inaccurate high beam detection.
Disclosure of Invention
The invention provides an image processing method, device, equipment and medium, which are used for solving the problem that whether the high beam of a vehicle is started or not accurately in complex conditions such as rainy days at night and the like.
The invention provides an image processing method, which comprises the following steps:
acquiring a first image acquired by image acquisition equipment, and if the first image is identified to contain vehicles, sequentially taking each pixel point as a pixel point to be processed aiming at each pixel point in the first image, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
Further, after determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the method further includes:
processing the filtered pixel value corresponding to each channel of the pixel point by adopting a dark channel algorithm, and determining the brightness value of atmospheric light, the brightness value of ambient light and the atmospheric transmittance corresponding to each channel of the pixel point;
and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
Further, the acquiring the first image acquired by the image acquisition device includes:
when the vehicle is detected to pass through a preset high beam detection line, a first image collected by the image collecting equipment is obtained.
Further, the determining, according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the filtered pixel value corresponding to each channel of the pixel point includes:
determining a proportional value according to the first maximum value and the average value;
determining the difference between the second maximum value and the proportional value, and determining a second difference value;
and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
Further, the determining a ratio value according to the first maximum value and the average value includes:
calculating a sum of pixel values for each of said channels and calculating a product of said sum and said first maximum, determining a first difference of said product and said average;
and determining a proportional value according to the first difference value and the first maximum value.
Further, the method further comprises:
if the high beam is recognized to be turned on, when the vehicle is recognized to pass through a preset vehicle attribute recognition line, the image acquisition equipment is controlled to turn on an exposure lamp, a second image acquired by the image acquisition equipment is acquired, and identification information of the vehicle is acquired according to the second image and a vehicle attribute recognition module trained in advance.
The present invention provides an image processing apparatus, the apparatus including:
the acquisition module is used for acquiring a first image acquired by the image acquisition equipment;
the determining module is used for sequentially taking each pixel point in the first image as the pixel point to be processed according to the fact that if the first image is identified to contain the vehicle, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
and the first identification module is used for inputting the first image after the pixel value filtration to the classification model after the pre-training is finished and identifying whether the high beam of the vehicle in the first image after the pixel value filtration is started or not.
Further, the determining module is further configured to determine, by using a dark channel algorithm, an atmospheric light brightness value, an ambient light brightness value, and an atmospheric transmittance corresponding to each pixel point for the filtered pixel value corresponding to each channel of the pixel point; and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
Further, the apparatus further comprises:
and the second identification module is used for controlling the image acquisition equipment to turn on the exposure lamp if the high beam is identified to be turned on and when the vehicle passes through a preset vehicle attribute identification line, acquiring a second image acquired by the image acquisition equipment and acquiring the identification information of the vehicle according to the second image and the vehicle attribute identification module trained in advance.
Further, the acquisition module is specifically used for acquiring a first image acquired by the image acquisition device when the vehicle is detected to pass through a preset high beam detection line.
Further, the determining module is specifically configured to calculate a sum of the pixel values of each channel, calculate a product of the sum and the first maximum value, and determine a first difference between the product and the average value;
determining a proportional value according to the first difference value and the first maximum value;
determining the difference between the second maximum value and the proportional value, and determining a second difference value;
and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
The invention provides an electronic device comprising at least a processor and a memory, the processor being adapted to implement the steps of the image processing method of any of the above when executing a computer program stored in the memory.
The present invention provides a computer-readable storage medium storing a computer program executable by an electronic device, the program, when run on the electronic device, causing the electronic device to perform the steps of any of the image processing methods described above.
The embodiment of the invention provides an image processing method, an image processing device, an image processing apparatus and a medium, wherein based on an acquired first image, if a vehicle is identified, an average value and a second maximum value of pixel values corresponding to each channel of each pixel are determined according to pixel values corresponding to each channel of each pixel of the first image, second-order Laplace filtering is carried out on the pixel values of each channel, a reflection component operation value of each channel is acquired, a first maximum value of the reflection component operation value of each channel is determined, and a filtered pixel value corresponding to each channel of the pixel is determined according to the pixel values, the average value, the first maximum value and the second maximum value of each channel; and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not. According to the embodiment of the invention, the second-order Laplace filtering is carried out on the pixel value corresponding to each channel of each pixel point in the first image containing the vehicle, the reflection component operation value of each channel corresponding to the pixel point is obtained, and the filtered pixel value corresponding to each channel of the pixel point is determined according to the reflection component operation value of each channel and the pixel value of each channel, so that the interference of the reflection light in the first image can be removed, and whether the high beam of the vehicle in the first image after the pixel value filtering is turned on or not can be accurately identified.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic process diagram of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a target detection process according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram of a complex light source for a vehicle in rainy night days according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a high beam detection line and a vehicle attribute identification line provided in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the brief descriptions of the terms in the present invention are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present invention. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and in the claims, as well as in the drawings, are used for distinguishing between similar or analogous objects or entities and not necessarily for describing a particular sequential or chronological order, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are merely for illustrating and explaining the present invention, and are not intended to limit the present invention, and that the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
The embodiment of the invention provides an image processing method, an image processing device and a medium, wherein a first image acquired by image acquisition equipment is acquired, if a vehicle is identified to be contained in the first image, each pixel point in the first image is sequentially used as a pixel point to be processed, and the following operations are executed: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value; and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
Example 1:
in order to improve the accuracy of detecting whether the high beam is turned on, embodiments of the present invention provide an image processing method, apparatus, device, and medium.
Fig. 1 is a schematic process diagram of an image processing method according to an embodiment of the present invention, where the process includes the following steps:
s101: a first image acquired by an image acquisition device is acquired.
The image processing method provided by the embodiment of the invention is applied to electronic equipment, and the electronic equipment can be image acquisition equipment, a PC (personal computer), a server and the like.
The following describes the image processing process in detail by taking an electronic device as an image capturing device as an example.
In order to solve the problem of inaccurate high beam detection caused by complex reflected light such as rainy days at night, whether a high beam of a vehicle is turned on or not is accurately identified, and a first image containing the vehicle needs to be acquired at first. In order to acquire a first image containing a vehicle, after the image is acquired, the image can be detected, whether the image contains the vehicle or not is detected, and when the image contains the vehicle, the image is taken as the first image.
Specifically, the vehicle target detection model based on the anchor-base in the target detection may be not limited to the detector algorithm based on the anchor-base in the target detection, and may be, for example, a Single neural network (YOLO), a model (SSD) predicting an object in an image using a Single deep neural network, a candidate area convolutional network (regioncnn, RCNN), or the like, or may be detected by another detector algorithm based on the anchor-free, and specifically includes an algorithm such as a center point network (centrnet) and an angle network (CornerNet).
In the embodiment of the invention, the vehicle target detection model is adopted to carry out target detection based on the CenterNet algorithm. Fig. 2 is a schematic diagram of a target detection process according to an embodiment of the present invention, where an image acquired by an image acquisition device is input into a vehicle target detection model that is trained in advance, where a backbone Network (backbone) is a Residual Neural Network (resnet) + a central pool Network (centerpooling), and an encoding-decoding Network (Encode-Decode) is a key point heat map encoding-decoding regression Network, so as to obtain a center point position Tc of a vehicle in the image and a width Tw and a height Th of a body frame, and output an image labeled with a vehicle target frame, thereby determining that the image includes the vehicle.
In order to improve the accuracy of high beam detection, after a first image containing a vehicle is acquired, if the first image is a red, green and blue (RGB) image, subsequent processing is directly performed based on the first image, and if the first image is a non-RGB image, the first image is converted into an RGB image.
S102: if the first image is identified to contain the vehicle, regarding each pixel point in the first image, sequentially taking each pixel point as a pixel point to be processed, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; and determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value.
After the first image is converted into an RGB image, R, G, B sub-images of three channels are obtained, for each pixel point, a pixel value corresponding to the pixel point in the sub-image of each channel may be obtained, and an average value of the pixel point may be determined according to each pixel value.
In addition, after the pixel value corresponding to the pixel point in the sub-image of each channel is obtained, second-order laplacian filtering is performed on the pixel value of each channel, and the reflection component operation value corresponding to each channel of the pixel point is obtained. After the reflection component operation value of each channel corresponding to each pixel point is obtained, for each pixel point, according to the reflection component operation value of each channel corresponding to the pixel point, the sum of the reflection component operation values of each channel corresponding to the pixel point can be determined, and the ratio of the maximum value in the reflection component operation values of each channel to the sum is determined as the first maximum value. In addition, for each pixel point, according to the corresponding pixel value of the pixel point in the sub-image of each channel, the maximum pixel value is taken as the second maximum value of the pixel value of each channel.
And determining the filtered pixel value corresponding to each channel of each pixel point according to the pixel value of each channel of each pixel point, the average value of the pixel values of the pixel points, the first maximum value and the second maximum value.
S103: and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
After the pixel value of each channel of each pixel point in the RGB image of the first image is filtered in the above manner, that is, after the pixel value of each channel of the pixel point is processed in the above manner, the processed RGB image can be obtained, and the processed first image of each pixel point is obtained according to the pixel value of each channel of the pixel point in the processed RGB image.
And inputting the processed first image into a classification model which is trained in advance, carrying out high beam identification through the classification model which is completed in advance, judging whether the vehicle in the first image is in a high beam opening state, and outputting the high beam state of the motor vehicle as on if the confidence coefficient of the high beam opening state is larger than a set threshold value. The confidence may be a value between 0 and 100, the set threshold may be 70, and the confidence that the high beam on state is recognized to be greater than 70, the high beam state of the motor vehicle is output as on. In particular, the classification model may be a deep learning model, for example, a CNN classification model. The training process of the classification model may adopt the prior art, and is not described herein again.
In the embodiment of the invention, based on the first image acquired by the image acquisition device, second-order Laplace filtering is performed on pixel values corresponding to R, G, B channels of each pixel point in the first image containing the vehicle, so as to obtain a reflection component operation value of each channel corresponding to the pixel point, and the filtered pixel value corresponding to each channel of the pixel point is determined according to the reflection component operation value of each channel and the pixel value of each channel, so that the interference of reflected light in the first image can be removed, and whether a high beam of the vehicle in the first image after the pixel value filtering is on or not can be accurately identified.
Example 2:
in order to further improve the accuracy of high beam identification, on the basis of the above embodiment, in an embodiment of the present invention, after determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the method further includes:
processing the filtered pixel value corresponding to each channel of the pixel point by adopting a dark channel algorithm, and determining the brightness value of atmospheric light, the brightness value of ambient light and the atmospheric transmittance corresponding to each channel of the pixel point;
and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
Because a complex environment of rainy night can generate a lot of complex reflected lights, such as ambient light and atmospheric light, etc., fig. 3 is an exemplary diagram of a complex light source of a vehicle in rainy night provided by an embodiment of the present invention, as shown in fig. 3, in rainy night, apart from road diffuse reflection and road strong reflection, ambient light and atmospheric light can cause interference to high beam detection, which causes high beam detection inaccuracy, so that in order to remove all lights except the high beam as much as possible and improve accuracy of high beam identification, in the embodiment of the present invention, ambient light removal and atmospheric light processing are performed on pixel values corresponding to R, G, B channels of each pixel point in a processed first image.
Specifically, in the embodiment of the present invention, the pixel values corresponding to the R, G, B three channels of each pixel point in the first image may be processed according to the atmospheric scattering model and the pixel values corresponding to the R, G, B three channels of each pixel point in the processed first image, so as to obtain the first image from which the ambient light and the atmospheric light are removed.
According to the atmospheric scattering model, the atmospheric light and ambient light removal processing is performed on the pixel values corresponding to the R, G, B channels of each pixel point in the processed first image for the first time, and the following formula can be adopted:
Figure BDA0003640237250000101
wherein, F (x, y) is the pixel value of the pixel point with the coordinate (x, y) in the first image after removing the ambient light and the atmospheric light, O (x, y) is the pixel value corresponding to the pixel point in the processed first image, O (x, y) is the brightness value of the atmospheric light corresponding to the pixel point in the processed first image, a (x, y) is the brightness value of the ambient light corresponding to the pixel point in the processed first image, and τ (x, y) is the atmospheric transmittance corresponding to the pixel point in the processed first image.
In order to obtain the first image from which the ambient light and the atmospheric light have been removed, it is necessary to obtain a luminance value O of the atmospheric light corresponding to the pixel point (x, y), the luminance value a (x, y) of the ambient light, and the atmospheric transmittance τ (x, y) are estimated. According to a dark channel algorithm, calculating a dark channel value O of a pixel value O (x, y) corresponding to each channel of the pixel point in the processed first image dark (x, y), specifically:
Figure BDA0003640237250000102
Figure BDA0003640237250000103
wherein, (x, Y) represents the coordinates of the pixel, that is, Y (x, Y) represents the central neighborhood range of the pixel in the processed first image, which is set to 7 in this embodiment, C represents a C channel corresponding to each pixel in the processed first image, and the C channel specifically includes R, G, B channels; o is C (Y) represents the pixel value, O, of each pixel point in the central neighborhood range of the pixel point in the processed first image, corresponding to the C channel C (Y) specifically includes O r (Y)、O g (Y) and O b (Y)。
The dark channel value O of each pixel point is counted dark (x, y), sorting the dark channel values in descending order of the dark channel values, and taking the average value of the dark channel values sorted at the top 0.5% as the brightness value O of the atmospheric light (x, y). In order to prevent the loss of detail texture in the far field depth region and perform the guided filtering process on the atmospheric transmittance τ (x, y), specifically:
Figure BDA0003640237250000111
wherein epsilon is a guide filtering processing parameter, and 0.95 is taken according to an experiment value. Brightness value O of atmospheric light corresponding to pixel point (x, y), the luminance value a (x, y) of the ambient light, and the atmospheric transmittance τ (x, y), and a pixel value F (x, y) for each channel corresponding to the pixel point processed by the first processing is obtained.
In order to suppress strong halos in the first image from which the ambient light and the atmospheric light are removed, the atmospheric light and the ambient light are removed for the second time for pixel values F (x, y) corresponding to the R, G, B channels of each pixel point in the first image from which the ambient light and the atmospheric light are removed for the first time, and a pixel value F' (x, y) corresponding to each channel of the pixel point after the final processing is obtained.
Specifically, the atmospheric light and ambient light removal process is performed for the second time using the following formula:
F′(x,y)=F(x,y)-(O (x,y)+A(x,y))
wherein, F (x, y) is the pixel value of each channel corresponding to the pixel point after the ambient light and the atmospheric light are removed for the first time, and O (x, y) is the brightness value of the atmospheric light corresponding to the pixel point in the processed first image, and a (x, y) is the brightness value of the ambient light corresponding to the pixel point in the processed first image.
According to the embodiment of the invention, the environment light and the atmosphere light are removed through processing the filtered pixel value corresponding to each channel of the pixel point, so that the interference of the environment light and the atmosphere light on the high beam detection in rainy night is eliminated, and whether the high beam of the vehicle is started or not is more accurately identified, so that the problem of inaccurate high beam detection caused by complex light such as rainy night and the like is solved.
Example 3:
in order to further improve the accuracy of high beam identification, on the basis of the foregoing embodiments, in an embodiment of the present invention, the acquiring the first image acquired by the image acquisition device includes:
when the vehicle is detected to pass through a preset high beam detection line, a first image collected by the image collecting equipment is obtained.
In a possible implementation mode, in order to further improve the accuracy of high beam identification, excessive useless images collected by the image collection equipment are avoided, and the waste of resources is avoided. Therefore, the image collected by the image collecting device can be obtained, whether the vehicle passes through the high beam detection line or not can be detected according to the position of the vehicle in the image and the position of the light detection line, if the vehicle in the image passes through the preset high beam detection line, the image can be used as the first image, or the image collecting device collects the image again, and the collected image is used as the first image.
Example 4:
in order to further improve the accuracy of high beam identification, on the basis of the foregoing embodiments, in an embodiment of the present invention, the determining, according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the filtered pixel value corresponding to each channel of the pixel point includes:
determining a proportional value according to the first maximum value and the average value; determining the difference between the second maximum value and the proportional value, and determining a second difference value; and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
In the embodiment of the present invention, after the first maximum value, the second maximum value, and the average value are determined, when the pixel value after the pixel point filtering is determined, the ratio value may be determined according to the first maximum value and the average value; determining a second difference value according to the difference between the second maximum value and the proportional value; and then determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value corresponding to the pixel point in the sub-image of each channel and the second difference value.
Specifically, the filtered pixel value corresponding to each channel of the pixel point can be determined by the following formula:
O(x,y)=I(x,y)-M(x,y)
wherein I (x, y) is a pixel value corresponding to each channel R, G, B for a pixel point with coordinates (x, y) of the determined first image, O (x, y) is a filtered pixel value corresponding to each channel of the pixel point, M (x, y) represents a corresponding second difference value, and the second difference value can be determined according to a difference between the second maximum value and the ratio:
M(x,y)=I max (x,y)-R(x,y)
wherein R (x, y) is a proportional value, I r (x,y),I g (x,y),I b (x, y) are pixel values of three channels of R, G and B corresponding to the pixel point with the coordinate (x, y) in the first image, I (x, y) represents an average value of the pixel values of each channel corresponding to the pixel point with the coordinate (x, y), and I (x, y) represents an average value of the pixel values of each channel corresponding to the pixel point with the coordinate (x, y) max (x, y) represents a second maximum value of the pixel value of each channel corresponding to the pixel point with the coordinate (x, y), and can be obtained by the following formula:
I max (x,y)={I r (x,y),I g (x,y),I b (x,y)} max
in order to further improve the accuracy of high beam identification, in the embodiments of the present invention, on the basis of the foregoing embodiments, the determining a proportional value according to the first maximum value and the average value includes:
calculating a sum of pixel values for each of said channels and calculating a product of said sum and said first maximum, determining a first difference of said product and said average; and determining a proportional value according to the first difference and the first maximum value.
In the embodiment of the present invention, after the first maximum value and the average value are determined, when the ratio value is determined, the sum of the pixel values of each channel may be calculated according to the pixel value of the pixel point corresponding to the sub-image of each channel, the product of the sum and the first maximum value may be calculated, and the first difference between the product and the average value of the pixel values may be determined. And determining a proportion value of the first difference value and the first maximum value according to the first difference value and the first maximum value.
Specifically, the proportional value may be determined by the following formula:
Figure BDA0003640237250000131
wherein α (x, y) represents a corresponding first maximum value of the pixel point whose coordinate is (x, y), and can be obtained by the following formula:
Figure BDA0003640237250000141
wherein d (x, y) represents the reflection component operation value of each channel corresponding to the pixel point with the coordinate (x, y), and the reflection component operation value of each channel is obtained by calculating through the following second-order laplacian filter formula:
d(x,y)=I(x+1,y)+I(x-1,y)+I(x,y+1)+I(x,y-1)-4I(x,y)
respectively obtaining the reflection component operation value d of each channel corresponding to the pixel point according to the formula r (x,y),d g (x,y),d b (x,y),max{d r (x,y),d g (x,y),d b (x, y) } represents the maximum reflection component operation value.
In an embodiment of the present invention, a first difference value of the product and the average value is determined by calculating a sum value of pixel values of each channel and calculating a product of the sum value and a first maximum value; determining a ratio value according to the first difference and the first maximum value, and determining a second difference between the second maximum value and the ratio value; and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value. Therefore, the interference of the reflected light in the first image can be removed, and whether the high beam of the vehicle in the first image is turned on after the pixel value is filtered can be accurately identified.
Example 5:
after the vehicle with the far-reaching light turned on is accurately determined, in order to determine which vehicle has a violation, on the basis of the above embodiments, in an embodiment of the present invention, the method further includes:
if the high beam is recognized to be turned on, when the vehicle is recognized to pass through a preset vehicle attribute recognition line, the image acquisition equipment is controlled to turn on an exposure lamp, a second image acquired by the image acquisition equipment is acquired, and identification information of the vehicle is acquired according to the second image and a vehicle attribute recognition module trained in advance.
In a possible implementation manner, after the vehicle with the far-reaching light turned on is accurately determined, in order to determine which vehicle has a violation, vehicle attribute identification can be performed on the vehicle, so that the vehicle is snapshotted and reported. The embodiment of the invention adopts the flashing lights to suppress the high beam lights of the vehicle, so that the clear images of the vehicle body can be obtained. In order to identify the vehicle attribute, a vehicle attribute identification model may be set in advance in the image capturing apparatus.
Specifically, the vehicle attribute identification model comprises a license plate identification model and a vehicle type identification model; the license plate detection method of the license plate recognition model is not limited to methods of Anchor-base including YOLO, SSD, RCNN series and the like, or Anchor-free including CenterNet, CornerNet and the like. The license plate recognition module can respectively recognize the color of the license plate, the type of the license plate and the number of the license plate and output the color of the license plate, the type of the license plate and the number of the license plate, wherein the color of the license plate can be yellow, gradually changed green, blue, white and the like. The vehicle type identification method of the vehicle type identification model adopts a CNN network classification method, the vehicle type identification model can output vehicle types, and the vehicle types can be taxies, small trucks, medium trucks, large trucks, small buses, medium buses, two-wheel vehicles, buses, cars, pickup trucks, special vehicles and the like. The specific vehicle attribute identification process belongs to the prior art, and is not described herein again.
Fig. 4 is a schematic diagram of the high beam detection line and the vehicle attribute identification line provided in the embodiment of the present invention, and as shown in fig. 4, the high beam detection line and the vehicle attribute identification line are provided on the lane line, and the high beam detection line is farther from the image acquisition device, and the vehicle attribute identification line is closer to the image acquisition device. When a vehicle in the lane line passes through the preset high beam detection line, the front image containing the vehicle high beam can be acquired, so that the vehicle high beam can be more accurately identified.
Example 6:
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention, where the apparatus includes:
an obtaining module 501, configured to obtain a first image collected by an image collecting device;
a determining module 502, configured to, according to that if it is identified that the first image includes a vehicle, sequentially take each pixel point in the first image as a pixel point to be processed, and execute the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
the first identifying module 503 is configured to input the first image with the filtered pixel value to the pre-trained classification model, and identify whether a high beam of the vehicle in the first image with the filtered pixel value is turned on.
In a possible implementation manner, the determining module 502 is further configured to determine, by using a dark channel algorithm, a brightness value of atmospheric light, a brightness value of ambient light, and an atmospheric transmittance corresponding to each pixel point for the filtered pixel value corresponding to each channel of the pixel point; and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
In a possible embodiment, the apparatus further comprises:
and the second identification module 504 is configured to control the image acquisition device to turn on the exposure lamp if the high beam is identified to be turned on, and acquire a second image acquired by the image acquisition device when the vehicle is identified to pass through a preset vehicle attribute identification line, and acquire identification information of the vehicle according to the second image and the vehicle attribute identification module trained in advance.
In a possible implementation manner, the obtaining module 501 is specifically configured to obtain a first image collected by an image collecting device when it is detected that a vehicle passes through a preset high beam detection line.
In a possible implementation, the determining module 502 is specifically configured to calculate a sum of the pixel values of each channel, calculate a product of the sum and the first maximum value, and determine a first difference between the product and the average value; determining a proportional value according to the first difference value and the first maximum value; determining the difference between the second maximum value and the proportional value, and determining a second difference value; and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
According to the method, based on the acquired first image, if the vehicle is identified to be contained in the first image, second-order Laplace filtering is carried out according to the pixel value corresponding to each channel of each pixel point of the image, the reflection component operation value corresponding to each channel of each pixel point is acquired, the first maximum value is determined according to the reflection component operation value, and the filtered pixel value corresponding to each channel of each pixel point is determined according to the reflection component operation value of each channel and the pixel value of each channel; and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not. Therefore, the problem that whether the high beam of the vehicle is started or not to be detected inaccurately in complex conditions such as rainy days at night can be solved.
Example 7:
on the basis of the foregoing embodiments, some embodiments of the present invention further provide an electronic device, and fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention. As shown in fig. 6, includes: the system comprises a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 are communicated with each other through the communication bus 604.
The memory 603 has stored therein a computer program which, when executed by the processor 401, causes the processor 601 to perform the steps of:
acquiring a first image acquired by image acquisition equipment, and if the first image is identified to contain vehicles, sequentially taking each pixel point as a pixel point to be processed aiming at each pixel point in the first image, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value; and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
Further, the processor 601 is further configured to process the filtered pixel value corresponding to each channel of the pixel point by using a dark channel algorithm, and determine a brightness value of atmospheric light, a brightness value of ambient light, and an atmospheric transmittance corresponding to each channel of the pixel point; and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
Further, the processor 601 is further configured to acquire a first image acquired by the image acquisition device when it is detected that the vehicle passes through a preset high beam detection line.
Further, the processor 601 is further configured to calculate a sum of the pixel values of each channel, calculate a product of the sum and the first maximum value, and determine a first difference between the product and the average value; determining a proportional value according to the first difference value and the first maximum value; determining the difference between the second maximum value and the proportional value, and determining a second difference value; and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
Further, the processor 601 is further configured to control the image acquisition device to turn on the exposure light if the high beam is identified to be turned on, and acquire the second image acquired by the image acquisition device when the vehicle is identified to pass through a preset vehicle attribute identification line, and acquire the identification information of the vehicle according to the second image and the vehicle attribute identification module trained in advance.
The communication bus mentioned in the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 602 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; but may also be a Digital instruction processor (DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
Example 8:
on the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program executable by an electronic device is stored, and when the program is run on the electronic device, the electronic device is caused to execute the following steps:
acquiring a first image acquired by image acquisition equipment, and if the first image is identified to contain vehicles, sequentially taking each pixel point as a pixel point to be processed aiming at each pixel point in the first image, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
Further, after determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the method further includes:
processing the filtered pixel value corresponding to each channel of the pixel point by adopting a dark channel algorithm, and determining the brightness value of atmospheric light, the brightness value of ambient light and the atmospheric transmittance corresponding to each channel of the pixel point; and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
Further, the acquiring the first image acquired by the image acquisition device includes:
when the vehicle is detected to pass through a preset high beam detection line, a first image collected by the image collecting equipment is obtained.
Further, the determining, according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the filtered pixel value corresponding to each channel of the pixel point includes:
calculating a sum of pixel values for each of said channels and calculating a product of said sum and said first maximum, determining a first difference of said product and said average; determining a proportional value according to the first difference value and the first maximum value; determining the difference between the second maximum value and the proportional value, and determining a second difference value; and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
Further, the method further comprises:
if the high beam is recognized to be turned on, when the vehicle is recognized to pass through a preset vehicle attribute recognition line, the image acquisition equipment is controlled to turn on an exposure lamp, a second image acquired by the image acquisition equipment is acquired, and identification information of the vehicle is acquired according to the second image and a vehicle attribute recognition module trained in advance.
In the embodiment of the invention, based on the first image acquired by the image acquisition equipment, second-order Laplace filtering is carried out on the pixel value corresponding to each channel of each pixel point in the first image containing the vehicle, the reflection component operation value of each channel corresponding to each pixel point is acquired, and the filtered pixel value corresponding to each channel of each pixel point is determined according to the reflection component operation value of each channel and the pixel value of each channel, so that the interference of reflected light in the first image can be removed, and whether the high beam of the vehicle in the first image after the pixel value filtering is on or not can be accurately identified.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first image acquired by image acquisition equipment, and if the first image is identified to contain vehicles, sequentially taking each pixel point as a pixel point to be processed aiming at each pixel point in the first image, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
and inputting the first image after the pixel value filtration into a classification model after pre-training, and identifying whether a high beam of the vehicle in the first image after the pixel value filtration is started or not.
2. The method according to claim 1, wherein after determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value, the method further comprises:
processing the filtered pixel value corresponding to each channel of the pixel point by adopting a dark channel algorithm, and determining the brightness value of atmospheric light, the brightness value of ambient light and the atmospheric transmittance corresponding to each channel of the pixel point;
and determining the processed pixel value corresponding to each channel of the pixel point according to the filtered pixel value corresponding to each channel of the pixel point, the brightness value of the atmospheric light, the brightness value of the ambient light and the atmospheric transmittance.
3. The method of claim 1, wherein the obtaining the first image acquired by the image acquisition device comprises:
when the vehicle is detected to pass through a preset high beam detection line, a first image collected by the image collecting equipment is obtained.
4. The method of claim 1, wherein determining the filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value, and the second maximum value comprises:
determining a proportional value according to the first maximum value and the average value;
determining the difference between the second maximum value and the proportional value, and determining a second difference value;
and determining the filtered pixel value corresponding to each channel of the pixel point according to the difference between the pixel value of each channel and the second difference value.
5. The method of claim 4, wherein determining a proportional value based on the first maximum value and the average value comprises:
calculating a sum of pixel values for each of said channels and calculating a product of said sum and said first maximum, determining a first difference of said product and said average;
and determining a proportional value according to the first difference value and the first maximum value.
6. The method of claim 1, further comprising:
if the high beam is recognized to be turned on, when the vehicle is recognized to pass through a preset vehicle attribute recognition line, the image acquisition equipment is controlled to turn on an exposure lamp, a second image acquired by the image acquisition equipment is acquired, and identification information of the vehicle is acquired according to the second image and a vehicle attribute recognition module trained in advance.
7. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first image acquired by the image acquisition equipment;
the determining module is used for sequentially taking each pixel point as a pixel point to be processed according to each pixel point in the first image if the first image is identified to contain the vehicle, and executing the following operations: determining an average value and a second maximum value of pixel values corresponding to each channel of the pixel point, performing second-order Laplace filtering on the pixel values of each channel to obtain a reflection component operation value of each channel, and determining a first maximum value of the reflection component operation value of each channel; determining a filtered pixel value corresponding to each channel of the pixel point according to the pixel value of each channel, the average value, the first maximum value and the second maximum value;
and the first identification module is used for inputting the first image after the pixel value filtration to the classification model after the pre-training is finished and identifying whether the high beam of the vehicle in the first image after the pixel value filtration is started or not.
8. The apparatus of claim 7, further comprising:
and the second identification module is used for controlling the image acquisition equipment to turn on the exposure lamp if the high beam is identified to be turned on and when the vehicle passes through a preset vehicle attribute identification line, acquiring a second image acquired by the image acquisition equipment and acquiring the identification information of the vehicle according to the second image and the vehicle attribute identification module trained in advance.
9. An electronic device, characterized in that the electronic device comprises at least a processor and a memory, the processor being adapted to carry out the steps of an image processing method according to any of claims 1-6 when executing a computer program stored in the memory.
10. A computer-readable storage medium, characterized in that it stores a computer program executable by an electronic device, which when run on the electronic device causes the electronic device to perform the steps of an image processing method according to any one of claims 1-6.
CN202210517329.XA 2022-05-12 2022-05-12 Image processing method, device, equipment and medium Pending CN114882451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210517329.XA CN114882451A (en) 2022-05-12 2022-05-12 Image processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210517329.XA CN114882451A (en) 2022-05-12 2022-05-12 Image processing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114882451A true CN114882451A (en) 2022-08-09

Family

ID=82675445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210517329.XA Pending CN114882451A (en) 2022-05-12 2022-05-12 Image processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114882451A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115762178A (en) * 2023-01-09 2023-03-07 长讯通信服务有限公司 Intelligent electronic police violation detection system and method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115762178A (en) * 2023-01-09 2023-03-07 长讯通信服务有限公司 Intelligent electronic police violation detection system and method

Similar Documents

Publication Publication Date Title
CN108090411B (en) Traffic light detection and classification using computer vision and deep learning
CN110197589B (en) Deep learning-based red light violation detection method
CA2885019C (en) Robust windshield detection via landmark localization
CN103034836B (en) Road sign detection method and road sign checkout equipment
CN111815959B (en) Vehicle violation detection method and device and computer readable storage medium
CN110991221B (en) Dynamic traffic red light running recognition method based on deep learning
CN106650611B (en) Method and device for recognizing color of vehicle body
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN112766115A (en) Traffic travel scene violation intelligence based analysis method and system and storage medium
CN114565895A (en) Security monitoring system and method based on intelligent society
CN114882451A (en) Image processing method, device, equipment and medium
CN112289021A (en) Traffic signal lamp detection method and device and automatic driving automobile
CN113792600B (en) Video frame extraction method and system based on deep learning
CN115019263A (en) Traffic supervision model establishing method, traffic supervision system and traffic supervision method
CN112863194B (en) Image processing method, device, terminal and medium
Tiwari et al. Automatic vehicle number plate recognition system using matlab
CN112528944A (en) Image identification method and device, electronic equipment and storage medium
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
Mohammad et al. An Efficient Method for Vehicle theft and Parking rule Violators Detection using Automatic Number Plate Recognition
CN114333296A (en) Traffic volume statistics and analysis system based on machine vision
CN106920398A (en) A kind of intelligent vehicle license plate recognition system
CN110795974B (en) Image processing method, device, medium and equipment
CN113283303A (en) License plate recognition method and device
JP2002008186A (en) Vehicle type identification device
Zhong-xun et al. The research on edge detection algorithm of lane

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination