CN113792793B - Road video monitoring improvement method for adverse weather environments - Google Patents

Road video monitoring improvement method for adverse weather environments

Info

Publication number
CN113792793B
CN113792793B (application CN202111080366.0A)
Authority
CN
China
Prior art keywords
image data
weather
road
image
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111080366.0A
Other languages
Chinese (zh)
Other versions
CN113792793A (en)
Inventor
齐树平
王志斌
邱文利
许忠印
权恒友
冯雷
张少波
杨海峰
高新文
刘鹏祥
张莹
王洪涛
刘栋
郝文世
孙乙博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xiong'an Jingde Expressway Co ltd
Original Assignee
Hebei Xiong'an Jingde Expressway Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xiong'an Jingde Expressway Co ltd filed Critical Hebei Xiong'an Jingde Expressway Co ltd
Priority to CN202111080366.0A priority Critical patent/CN113792793B/en
Publication of CN113792793A publication Critical patent/CN113792793A/en
Application granted granted Critical
Publication of CN113792793B publication Critical patent/CN113792793B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a road monitoring improvement method for adverse weather environments, belonging to the field of road video monitoring. The method comprises: obtaining original image data of a road in existing adverse weather environments and inputting it into a multi-model image enhancement unit to obtain enhanced image data; and introducing a multi-perception strategy, carrying out fusion judgment on the measurement set N and the actual set P, and outputting a final set A. Based on multi-model image enhancement, the invention realizes video monitoring in different adverse weather scenes; by introducing the multi-perception strategy it improves the robustness of the system, improves the video quality of road monitoring in bad weather such as rain, snow and fog, and reduces the influence of adverse environmental factors on image recognition technology.

Description

Road video monitoring improvement method for adverse weather environments
Technical Field
The invention belongs to the field of road video monitoring and relates to a road monitoring improvement method, in particular to a method for improving road video monitoring in adverse weather environments.
Background
A monitoring system based on video detection technology is a computer processing system that realizes traffic target detection and identification with image processing and pattern recognition methods. As video monitoring systems have matured, their application scenes have multiplied. Applied to traffic, they can detect, locate, identify and track traffic targets such as vehicles and pedestrians by analyzing the traffic images captured by cameras, and analyze and judge the traffic behaviors of the detected, tracked and identified targets, thereby completing the calculation and collection of various traffic flow data and carrying out various adjustment and management tasks related to traffic management, so as to realize intelligent traffic management.
However, in rainy, snowy and foggy weather the probability of traffic accidents is much higher than in normal weather, yet in exactly such environments video monitoring systems struggle to meet practical application requirements, and their recognition performance still needs continuous improvement. At present, video monitoring systems for rainy, snowy and foggy conditions generally use a conventional single-model method, i.e. a neural network is trained to produce a video monitoring system capable of identifying vehicles in bad environments such as rain, snow and fog. However, the prior art retains the following disadvantages:
1. images at the same visual angle show different weather environments, so that an ideal monitoring effect cannot be achieved by a monitoring system which simply uses a single model;
2. external information cannot be introduced to carry out auxiliary judgment, and robustness is poor.
Disclosure of Invention
In order to solve these problems, the invention provides a method for improving road video monitoring in adverse weather environments, in which the monitoring effect is improved by introducing a multi-perception strategy and multi-model image enhancement.
The technical solution adopted by the invention is as follows.
A method for improving road video monitoring in an adverse weather environment comprises the following steps:
step 1: constructing a rainy day image enhancement model, a foggy day image enhancement model and a snowy day image enhancement model, and combining the three models as a multi-model image enhancement unit of image data;
step 2: acquiring original image data of a road in the existing bad weather environment, and inputting the original image data into a multi-model image enhancement unit to obtain enhanced image data;
step 3: taking the original image data and the enhanced image data as training data to train a weather identification model;
step 4: acquiring real-time image data of the road in the current weather environment and inputting it into the weather recognition model to obtain a measurement set N = [n1, n2, n3, n4], where n1, n2, n3, n4 respectively represent the rainy-day, snowy-day, foggy-day and other-weather probabilities, and n1 + n2 + n3 + n4 = 1;
step 5: obtaining current weather data to obtain an actual set P = [p1, p2, p3, p4], where p1, p2, p3, p4 respectively represent rainy, snowy, foggy and other weather, each represented by 0 or 1;
step 6: introducing a multi-perception strategy, carrying out fusion judgment on the measurement set N and the actual set P, and outputting a final set A = [a1, a2, a3, a4], where a1 = n1×p1, a2 = n2×p2, a3 = n3×p3, a4 = n4×p4, so as to obtain the actual weather type of the photographed road scene;
step 7: and according to the actual meteorological type, inputting real-time image data of the road in the current meteorological environment into a corresponding image enhancement model, and outputting the enhanced real-time image data.
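Steps 4 to 7 above reduce to an element-wise product followed by a dispatch on the largest fused score. A minimal sketch follows; the helper names `fuse` and `select_model` and the type ordering rain/snow/fog/other are illustrative, not from the patent:

```python
def fuse(N, P):
    """Element-wise fusion A = [n_i * p_i] from step 6."""
    return [n * p for n, p in zip(N, P)]

MODELS = ["rain", "snow", "fog", "other"]

def select_model(A):
    """Pick the weather type with the largest fused score (step 7 dispatch)."""
    return MODELS[max(range(len(A)), key=lambda i: A[i])]

N = [0.3, 0.5, 0.1, 0.1]   # model output: rain/snow/fog/other probabilities
P = [1, 0, 0, 0]           # weather site reports rain only
A = fuse(N, P)             # [0.3, 0.0, 0.0, 0.0]
chosen = select_model(A)   # dispatch to the rainy-day enhancement model
```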
The working principle and beneficial effects of the invention are as follows:
The invention realizes video monitoring in different adverse weather scenes based on multi-model image enhancement. Introducing the multi-perception strategy improves the robustness of the system. The invention improves the video quality of road monitoring in bad weather such as rain, snow and fog, and reduces the influence of adverse environmental factors on image recognition technology. The present invention is described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a diagram of the CNN network structure of the present invention;
FIG. 3 is a network frame diagram of a weather identification model of the present invention;
FIG. 4 is a flow chart of the rainy day image enhancement model of the present invention;
fig. 5 is a flow chart of the snow image enhancement model of the present invention.
Detailed Description
The technical scheme of the present invention is described in further detail below with reference to specific examples and drawings, but the scope and embodiments of the present invention are not limited thereto.
Specific example 1, as shown in figure 1,
The invention relates to a method for improving road video monitoring in adverse weather environments, which ensures effective monitoring in rainy, snowy and foggy weather; the method comprises:
step 1: constructing a rainy day image enhancement model, a foggy day image enhancement model and a snowy day image enhancement model, and combining the three models as a multi-model image enhancement unit of image data;
step 2: acquiring original image data of a road in the existing bad weather environment, and inputting the original image data into a multi-model image enhancement unit to obtain enhanced image data;
step 3: taking the original image data and the enhanced image data as training data to train a weather identification model;
step 4: acquiring real-time image data of the road in the current weather environment and inputting it into the weather recognition model to obtain a measurement set N = [n1, n2, n3, n4], where n1, n2, n3, n4 respectively represent the rainy-day, snowy-day, foggy-day and other-weather probabilities, and n1 + n2 + n3 + n4 = 1;
step 5: obtaining current weather data to obtain an actual set P = [p1, p2, p3, p4], where p1, p2, p3, p4 respectively represent rainy, snowy, foggy and other weather, each represented by 0 or 1;
step 6: introducing a multi-perception strategy, carrying out fusion judgment on the measurement set N and the actual set P, and outputting a final set A = [a1, a2, a3, a4], where a1 = n1×p1, a2 = n2×p2, a3 = n3×p3, a4 = n4×p4, so as to obtain the actual weather type of the photographed road scene;
step 7: and inputting the video stream of the road under the current meteorological environment to a corresponding image enhancement model according to the actual meteorological type, and outputting the enhanced video stream.
The invention first judges the weather conditions. In general, weather conditions can be judged in two ways: (1) acquiring current weather data from a weather website; (2) judging with a weather recognition model. The invention combines both: a large amount of rainy, snowy and foggy road weather data is collected from open-source data on the network and from real road scenes, and a weather recognition model capable of judging these three kinds of bad weather is trained with a convolutional neural network; the current weather data is acquired from a weather website, the multi-perception-strategy fusion judgment is applied to the video stream, and the actual weather type of the road scene captured on video is accurately output. When rainy, snowy or foggy weather is judged, the video stream is sent to the corresponding image enhancement model to obtain an enhanced video stream.
Specific example 2,
the invention also comprises inputting the video stream subjected to enhancement processing into a road detection module.
The road detection module is a function to be completed by the video detection system, and mainly analyzes traffic images captured by the camera to detect, locate, identify and track traffic targets such as vehicles and pedestrians, and analyzes and judges traffic behaviors of the detected, tracked and identified targets, so that calculation and collection of various traffic flow data are completed, and various adjustment and management related to traffic management are performed at the same time, so that intelligent traffic management is realized.
In order to realize that the improvement effect is shown in the road monitoring performance under the bad weather environment, the invention adopts a vehicle identification algorithm, and the algorithm flow is based on the image identification algorithm of the common master-rcnn network. The training data are three types of weather road pictures (rainy, snowy, foggy) passing through the corresponding image enhancement model (by manual classification, feeding into the corresponding network, and enhancing) and three types of weather road pictures (rainy, snowy, foggy) not passing through the corresponding image enhancement model, and the pictures are signed for the vehicle, so that 3:1, the master-rcnn model training was performed, where the enhanced data was 3 and the original image data was 1 (random decimation). After training is completed, the frame of fig. 1 can improve the road vehicle recognition effect on three bad weather such as rainy days, snowy days and foggy days.
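The 3:1 enhanced-to-original mix described above can be sketched as follows; the file names and the `build_training_mix` helper are illustrative, not from the patent:

```python
import random

def build_training_mix(enhanced, originals, seed=0):
    """Return all enhanced samples plus a random draw of originals sized at
    one third of the enhanced set, giving an enhanced:original ratio of 3:1."""
    rng = random.Random(seed)
    k = len(enhanced) // 3
    return list(enhanced) + rng.sample(list(originals), k)

enhanced = [f"enh_{i}.jpg" for i in range(9)]     # enhanced weather pictures
originals = [f"orig_{i}.jpg" for i in range(20)]  # unenhanced originals
mix = build_training_mix(enhanced, originals)     # 9 enhanced + 3 originals
```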
The module can be flexibly configured according to the functions required by actual monitoring, such as traffic targets of vehicles, pedestrians and the like, and the scheme is not unique.
Specific example 3,
the construction of the meteorological identification model is based on the convolutional neural network, the convolutional neural network is a deep neural network with a convolutional structure, the convolutional structure can reduce the internal storage occupied by the deep network, the three key operations are local receptive field, weight sharing and a pooling layer, the number of parameters of the network is effectively reduced, and the overfitting problem of the model is relieved. A common network architecture is shown in figure 2,
the convolutional neural network mainly comprises five basic constituent units: input layer, convolution layer, pooling layer, full connection layer and output layer.
Input formula:
$$X^{l} = \mathrm{conv}\!\left(W^{l},\, X^{l-1},\, \text{'valid'}\right) + b^{l}$$
Output formula:
$$Y^{l} = f\!\left(X^{l}\right)$$
The above holds for each convolution layer, each convolution layer $l$ having a different weight matrix $W^{l}$, where $W$, $X$, $Y$ are in matrix form. Here $\mathrm{conv}(\cdot)$ is the convolution operation, whose third parameter 'valid' indicates the convolution type (the mode used above is the valid type); $W$ is the convolution kernel matrix, $X$ is the input matrix, $b$ is the bias, and $f$ is the activation function. For the last fully connected layer, taken as layer $L$ with output $y^{L}$ in vector form and desired output $d$, the total error formula is:
$$E = \tfrac{1}{2}\,\lVert d - y^{L} \rVert^{2}$$
where $d$ and $y^{L}$ are the vectors of the desired output and the network output, respectively.
The CNN is trained by gradient descent and the back-propagation algorithm; the gradient formula of the fully connected layers is identical to that of a BP network, and the gradients of the convolution and pooling layers are obtained by the corresponding back-propagation rules for convolution and pooling.
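The layer equations above can be illustrated with a minimal numpy forward pass: a 'valid' convolution plus bias, an activation, and a 2x2 max pooling. Function names are illustrative, not the patent's implementation:

```python
import numpy as np

def conv2d_valid(X, W, b):
    """'valid'-type 2-D convolution: slide W over X without padding."""
    kh, kw = W.shape
    oh, ow = X.shape[0] - kh + 1, X.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(X[i:i+kh, j:j+kw] * W) + b
    return out

def relu(x):
    """Activation function f."""
    return np.maximum(x, 0)

def max_pool2(X):
    """2x2 max pooling over non-overlapping windows."""
    h, w = X.shape[0] // 2, X.shape[1] // 2
    return X[:2*h, :2*w].reshape(h, 2, w, 2).max(axis=(1, 3))

X = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 input "image"
W = np.ones((3, 3))                            # toy 3x3 kernel
Y = relu(conv2d_valid(X, W, b=0.0))            # valid conv -> 2x2 feature map
P = max_pool2(Y)                               # 2x2 max pool -> 1x1
```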
CNN is a feed-forward neural network: each neuron is connected only to neurons of the previous layer, receives the output of the previous layer and, after computation, passes its output to the next layer, with no feedback between layers.
Therefore, a model that can identify various weather types can be trained with a convolutional neural network on road weather picture data that has already been labeled. Four kinds of weather pictures are collected: positive samples (rainy, snowy and foggy days) and negative samples (pictures of weather other than these three types), with a positive-to-negative sample ratio of 1:3; the picture sizes are unified to 224x224 and the weather recognition model is trained.
Specific example 4,
the specific operation of the multi-aware policy fusion decision in the present invention is that,
based on a weather type recognition model built by a vgg network, as shown in fig. 3, the final result of the network through softmax prediction is classified into 4 probabilities, namely an output measurement set n= [ N1, N2, N3, N4], wherein N1, N2, N3, N4 respectively represent a rainy day probability, a snowy day probability, a foggy day probability and other weather probabilities, and the sum of N1, N2, N3, N4 is 1;
acquiring real-time meteorological data through a meteorological website to obtain an actual set P= [ P1, P2, P3, P4], wherein P1, P2, P3, P4 respectively represent rainy days, snowy days, foggy days and other weather, and P1, P2, P3, P4 are all represented by 0 or 1;
introducing a multi-perception strategy, carrying out fusion judgment on a measurement data set N and an actual data set P, and outputting a final set AA= [ a1, a2, a3 and a4], wherein a1=n1×p1, a2=n2×p2, a3=n3×p3 and a4=n4×p4, so as to obtain an actual meteorological type of a photographed road condition;
fusion decisions based on multi-perceptive strategies can solve two problems.
1. When the convolutional-neural-network weather recognition model misjudges, the real-time weather data can provide auxiliary correction. For example, the weather is actually rainy but the model outputs N = [0.3, 0.5, 0.1, 0.1]; with P = [1, 0, 0, 0] the fusion gives A = [0.3, 0, 0, 0], and the rainy-day enhancement model is finally entered.
2. When two or more of rain, snow and fog are mixed, the weather recognition model can judge which adverse condition most affects the current video stream, and that condition's enhancement is emphasized, improving the quality of the monitoring video and handling such complex weather. For example, N = [0.3, 0.5, 0.1, 0.1] and P = [1, 1, 0, 0] give A = [0.3, 0.5, 0, 0], and the snowy-day enhancement model is finally entered.
When other weather is recognized, the video stream enters the road detection module directly.
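The two cases above can be reproduced numerically; `fuse` is an illustrative helper for the element-wise product of step 6, not a name from the patent:

```python
def fuse(N, P):
    """Element-wise fusion a_i = n_i * p_i, rounded to tame float noise."""
    return [round(n * p, 6) for n, p in zip(N, P)]

# Case 1: the model misranks (snow scores highest) but the weather site
# reports rain only, so the fused set keeps only the rain score.
A1 = fuse([0.3, 0.5, 0.1, 0.1], [1, 0, 0, 0])   # [0.3, 0, 0, 0]

# Case 2: mixed rain and snow reported; the model's probabilities decide
# that snow dominates, so snow enhancement is emphasized.
A2 = fuse([0.3, 0.5, 0.1, 0.1], [1, 1, 0, 0])   # [0.3, 0.5, 0, 0]
```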
Specific example 5, as shown in figure 4,
The rainy-day image enhancement model in the invention is specifically as follows.
By analyzing the motion and optical characteristics of rain, the rainy-day image enhancement model uses a frame-difference method to capture the dynamics of rain and an optical model to identify and process it.
The basic precondition of the algorithm is:
1) The gray value of the rain noise pixel is greater than the gray value of the background pixel.
2) The same position pixels of two consecutive frames of images are not covered by the same rain noise.
Three consecutive frames are then extracted from the video, and whether a pixel in the second frame is contaminated by rain noise is judged according to
$$\Delta I = I_{n} - I_{n-1} = I_{n} - I_{n+1} \ge C$$
where $I_{n-1}$, $I_{n}$, $I_{n+1}$ are the gray values of the pixel in the three consecutive frames and $C$ is the gray-difference decision threshold. After this condition is satisfied, the pixels truly contaminated by rain noise are further deduced. The specific idea is to estimate according to the property that the gray-value change $\Delta I$ caused by the motion track of the rain noise is linearly related to the background gray value $I_{bg}$ covered by the rain noise.
The relation satisfies
$$\Delta I = \alpha - \beta\, I_{bg}$$
where $\alpha$ and $\beta$ are constants. Finally, the gray value of each contaminated pixel in the frame is replaced by the average of the gray values at the corresponding positions in the previous and next frames.
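A minimal numpy sketch of the frame-difference rule above, assuming gray-scale frames: a pixel of frame n is flagged as rain when it exceeds the same pixel in frames n-1 and n+1 by at least C, then replaced by the mean of its two temporal neighbours. The names and the toy threshold are illustrative:

```python
import numpy as np

def remove_rain(prev, cur, nxt, C=10):
    """Detect and repair rain-streak pixels in the middle of three frames."""
    d1 = cur.astype(int) - prev.astype(int)
    d2 = cur.astype(int) - nxt.astype(int)
    rain = (d1 >= C) & (d2 >= C)           # candidate rain-streak pixels
    out = cur.copy()
    # replace flagged pixels by the mean of the previous and next frames
    out[rain] = (prev[rain].astype(int) + nxt[rain].astype(int)) // 2
    return out

prev = np.full((2, 2), 100, dtype=np.uint8)
nxt  = np.full((2, 2), 100, dtype=np.uint8)
cur  = np.array([[100, 180], [100, 100]], dtype=np.uint8)  # one bright streak
clean = remove_rain(prev, cur, nxt)
```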
Specific example 6, as shown in figure 5,
The snowy-day image enhancement model in the invention is specifically as follows.
The snowy-day image enhancement model adopts a k-means clustering method: according to the characteristics of a snow-degraded image, the gray values of each pixel coordinate in the video are clustered. The algorithm first extracts the gray values of a given pixel coordinate across all frames, then applies k-means clustering to those gray values. The k-means clustering initially selects two original cluster centers $c_1$ and $c_2$; to speed up clustering, the maximum and minimum of the pixel's gray values can be selected. The Euclidean distance of each remaining gray value $I$ to $c_1$ and to $c_2$ is computed: $d_i = \lvert I - c_i \rvert,\ i = 1, 2$.
When $d_1 < d_2$, the gray value is judged to be snow noise (taking $c_1$ as the bright center initialized with the maximum), and vice versa. This constitutes one clustering pass, and the cluster centers need to be updated after each pass:
$$c_i^{(n+1)} = \frac{1}{\lvert S_i^{(n)} \rvert} \sum_{I \in S_i^{(n)}} I$$
where $S_i^{(n)}$ is the set of gray values clustered around center $c_i$ after the n-th pass, $\lvert S_i^{(n)} \rvert$ is the number of its elements, and $c_i^{(n+1)}$ is the updated center. Clustering stops when the gray values of the two cluster centers are stable. The average of the gray values smaller than the final lower cluster center is taken as the background gray; gray values of pixels higher than the final upper cluster center are then replaced by the background gray. Finally, the remaining pixel coordinates are processed by the same operation to complete snow removal.
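The per-pixel temporal clustering above can be sketched in numpy; `desnow_pixel` is an illustrative helper that seeds the two centers with the min and max gray value and, as a simplification of the replacement rule, substitutes the bright (snow) cluster with the mean of the dark (background) cluster:

```python
import numpy as np

def desnow_pixel(grays, max_iter=100):
    """Two-center k-means over one pixel's gray values across all frames."""
    g = np.asarray(grays, dtype=float)
    c_lo, c_hi = g.min(), g.max()                 # initial cluster centers
    hi = np.abs(g - c_hi) < np.abs(g - c_lo)
    for _ in range(max_iter):
        hi = np.abs(g - c_hi) < np.abs(g - c_lo)  # nearer the bright center
        new_lo, new_hi = g[~hi].mean(), g[hi].mean()
        if new_lo == c_lo and new_hi == c_hi:     # centers stable: stop
            break
        c_lo, c_hi = new_lo, new_hi
    out = g.copy()
    out[hi] = g[~hi].mean()   # snow-cluster values -> background mean
    return out

# One pixel over six frames: background around 50, two snow flashes.
clean = desnow_pixel([50, 52, 200, 51, 210, 49])
```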
Specific example 7,
The foggy-day image enhancement model uses an image enhancement algorithm based on dark-channel-prior defogging. "Dark channel" means that in most local areas other than the sky, some pixels always have at least one color channel with a very low value; in other words, the minimum light intensity of such a region is a very small number. We give the dark channel a mathematical definition: for any input image $J$, it can be expressed as
$$J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^{c}(y) \right)$$
where $J^{c}$ represents each channel of the color image and $\Omega(x)$ represents a window centered on pixel $x$.
According to the above formula, the minimum value among the RGB components of each pixel is first computed and stored in a gray-scale image of the same size as the original image; minimum-value filtering is then performed on this gray-scale image, the filtering radius being determined by the window size, usually window size = 2 × radius + 1;
the theory of dark channel priors states that:
in real life, the low channel value in the dark primary color mainly has three phonemes: a) Shadows of glass windows in automobiles, buildings and cities, or projections of natural landscapes such as leaves, trees, rocks and the like; b) Objects or surfaces with vivid colors, some of the three channels of RGB have very low values (e.g. green grasses/trees/plants, red or yellow flowers/leaves, or blue water surface); c) Darker colored objects or surfaces, such as gray-colored trunks and stones. In general, natural scenes are shaded or colored everywhere, and the dark primary colors of these scenes are always dark.
The specific theoretical formula derivation is as follows:
first, in computer vision and computer graphics, fog pattern modeling described by the following equations is widely used:wherein->That is the image we now have (the image to be defogged),is the image we want to restore, A is the global atmospheric light component, < >>Is transmittance. The condition known at present is +.>Requiring goal->Obviously, this is an equation with innumerable solutions, and therefore some prior is required.
The above formula is further slightly processed and modified to the following formula:wherein the superscript C indicates the meaning of the three channels R/G/B.
First assume transmittance within each windowIs constant, define it as +.>And the value of A is given, and then the two minimum operations are carried out on the two sides of the above formula, so as to obtain the following formula:
where J is the haze free image to be solved, according to the dark primary prior theory described above: />Thus, it can be deduced that:the method comprises the following steps of: />This is the transmittance +.>Is used for the prediction of the number of the blocks.
In real life, even if it is a sunny day and clouds, some particles exist in the air, so that the far objects can feel the influence of fog, and in addition, the existence of fog can feel the existence of depth of field for human beings, so that it is necessary to keep a certain degree of fog when defogging, and this can be achieved by introducing a fog in the formula of [0,1]The factor in between, the above equation is modified as:wherein->=0.95。
The above reasoning assumes that the global atmospheric light value A is known; in practice this value can be obtained from the foggy image by means of the dark channel map. The specific steps are:
(1) The brightest 0.1% of pixels are taken from the dark channel map.
(2) Among these positions, the value of the corresponding point with the highest brightness in the original hazy image I is taken as the value of A.
Next, the haze-free image is recovered from the formula:
$$J(x) = \frac{I(x) - A}{t(x)} + A$$
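A compact numpy sketch of the whole defogging chain above: dark channel via a channel minimum and a min filter with window 2·radius+1, atmospheric light A from the brightest 0.1% of the dark channel, the ω = 0.95 transmittance estimate, and the recovery formula. Function names are illustrative and, for brevity, no lower bound is imposed on t:

```python
import numpy as np

def dark_channel(I, radius=1):
    """Per-pixel min over R,G,B followed by a (2*radius+1) min filter."""
    mins = I.min(axis=2)
    pad = np.pad(mins, radius, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i+2*radius+1, j:j+2*radius+1].min()
    return out

def defog(I, omega=0.95, radius=1):
    dark = dark_channel(I, radius)
    n = max(1, int(dark.size * 0.001))      # brightest 0.1% of dark channel
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].max(axis=0)                  # per-channel atmospheric light
    t = 1.0 - omega * dark_channel(I / A, radius)   # transmittance estimate
    return (I - A) / t[..., None] + A       # recovery J = (I - A)/t + A

I = np.full((4, 4, 3), 0.8)                 # uniform hazy gray image in [0,1]
J = defog(I)                                # uniform input stays uniform
```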
Experiments show that after an actual foggy road video stream is processed by the dark channel method, the video becomes darker overall, so histogram equalization is adopted for secondary image enhancement. Histogram equalization is a common image enhancement technique. Suppose there is a dark image: its histogram is then skewed toward the lower end of the gray scale, with all image detail compressed into the dark end. If the gray levels at the dark end can be "stretched" to produce a more evenly distributed histogram, the image becomes much clearer.
Algorithm steps:
(1) Obtaining histogram distribution of the original image;
(2) Calculating the cumulative probability distribution of the original image histogram;
(3) Mapping; the formula can be expressed as:
$$s_k = \frac{L-1}{N}\sum_{j=0}^{k} H_{A}(j)$$
where $A$ is the original picture, $H_{A}$ its histogram, $L$ the number of gray levels, and $N$ the number of pixels.
After the dark-channel-prior defogging and the histogram equalization algorithm, defogging and enhancement of the target video stream are complete.
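The three equalization steps above map directly onto numpy; an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)   # step 1: histogram
    cdf = np.cumsum(hist)                          # step 2: cumulative sum
    # step 3: mapping s_k = (L-1)/N * sum_{j<=k} H(j)
    mapping = np.round((L - 1) * cdf / img.size).astype(np.uint8)
    return mapping[img]

# A dark 2x2 image stretched toward the full gray range.
img = np.array([[10, 10], [20, 30]], dtype=np.uint8)
eq = equalize(img)
```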

Claims (6)

1. A method for improving road video monitoring in an adverse weather environment, characterized by comprising the following steps,
step 1: constructing a rainy day image enhancement model, a foggy day image enhancement model and a snowy day image enhancement model, and combining the three models as a multi-model image enhancement unit of image data;
step 2: acquiring original image data of a road in the existing bad weather environment, and inputting the original image data into a multi-model image enhancement unit to obtain enhanced image data;
step 3: taking the original image data and the enhanced image data as training data to train a weather identification model;
step 4: acquiring real-time image data of the road in the current weather environment and inputting it into the weather recognition model to obtain a measurement set N = [n1, n2, n3, n4], where n1, n2, n3, n4 respectively represent the rainy-day, snowy-day, foggy-day and other-weather probabilities, and n1 + n2 + n3 + n4 = 1;
step 5: obtaining current weather data to obtain an actual set P = [p1, p2, p3, p4], where p1, p2, p3, p4 respectively represent rainy, snowy, foggy and other weather, each represented by 0 or 1;
step 6: introducing a multi-perception strategy, carrying out fusion judgment on the measurement set N and the actual set P, and outputting a final set A = [a1, a2, a3, a4], where a1 = n1×p1, a2 = n2×p2, a3 = n3×p3, a4 = n4×p4, so as to obtain the actual weather type of the photographed road scene;
step 7: according to the actual meteorological type, inputting real-time image data of a road in the current meteorological environment into a corresponding image enhancement model, and outputting enhanced real-time image data;
wherein the snowy-day image enhancement model comprises,
step B1: acquiring the gray values, in all frames, of a pixel coordinate in the video stream of the road in the current weather environment;
step B2: selecting the maximum and minimum of the gray values of the pixel point as the initial cluster centers $c_1$ and $c_2$, and carrying out k-means clustering on the gray values in all frames;
step B3: updating the cluster centers after each clustering pass by $c_i^{(n+1)} = \frac{1}{\lvert S_i^{(n)} \rvert}\sum_{I\in S_i^{(n)}} I$, where $S_i^{(n)}$ is the set of gray values clustered around center $c_i$ after the n-th pass, $\lvert S_i^{(n)} \rvert$ is the number of its elements, and $c_i^{(n+1)}$ is the updated center;
step B4: stopping the clustering once the gray values of the two cluster centers are stable; otherwise, repeating the clustering;
step B5: taking the mean of the gray values of the pixels whose gray values are smaller than the final cluster center as the background gray, and replacing the gray values of the pixels higher than the final cluster center with the background gray;
step B6: repeating the steps B2-B5 for all the pixel points;
step B7: the snow image enhancement processing ends.
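The steps B1-B7 above can be sketched per pixel coordinate as follows (a minimal sketch assuming numpy; step B5's "higher than the final cluster center" is read here as membership in the upper of the two clusters, and all names are illustrative):

```python
import numpy as np

def enhance_snow_pixel(gray_trace):
    """Steps B2-B5 for one pixel coordinate; `gray_trace` holds that
    pixel's gray value in every frame of the video stream (step B1)."""
    g = np.asarray(gray_trace, dtype=float)
    # B2: the minimum and maximum gray values seed the two cluster centers
    centers = np.array([g.min(), g.max()])
    while True:
        # assign every gray value to its nearest center (k = 2)
        labels = np.abs(g[:, None] - centers[None, :]).argmin(axis=1)
        # B3: each center is updated to the mean of its assigned gray values
        new_centers = np.array(
            [g[labels == i].mean() if np.any(labels == i) else centers[i]
             for i in (0, 1)])
        # B4: stop once both centers are stable, otherwise cluster again
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    low, high = (0, 1) if centers[0] <= centers[1] else (1, 0)
    # B5: the mean gray of the lower (background) cluster replaces the
    # gray values of the upper (snow-contaminated) cluster
    background = g[labels == low].mean()
    return np.where(labels == high, background, g)
```

Step B6 would simply apply this function to every pixel coordinate of the frame grid.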
2. The method for improving road video monitoring in a bad weather environment according to claim 1, wherein step 3 specifically comprises,
step 301: constructing a weather identification model based on a convolutional neural network;
step 302: acquiring image data under four weather environments, namely rainy, snowy, foggy and other weather, wherein the rainy day, snowy day and foggy day image data serve as positive samples and the other-weather image data serve as negative samples, with a positive-to-negative sample ratio of 1:3;
step 303: labeling all the image data, and inputting the labeled data into the weather identification model for training;
step 304: and obtaining a trained weather identification model.
3. The method for improving road video monitoring in a bad weather environment according to claim 1, wherein step 4 specifically comprises,
step 401: acquiring real-time image data with a size of 224x224x3 and inputting it into the weather identification model;
step 402: performing two 3x3 convolutions with 64 kernels plus ReLU; the size becomes 224x224x64;
step 403: performing 2x2 max pooling; the size becomes 112x112x64;
step 404: performing two 3x3 convolutions with 128 kernels plus ReLU; the size becomes 112x112x128;
step 405: performing 2x2 max pooling; the size becomes 56x56x128;
step 406: performing three 3x3 convolutions with 256 kernels plus ReLU; the feature map size becomes 56x56x256;
step 407: performing 2x2 max pooling; the size becomes 28x28x256;
step 408: performing three 3x3 convolutions with 512 kernels plus ReLU; the size becomes 28x28x512;
step 409: performing 2x2 max pooling; the size becomes 14x14x512;
step 410: performing three 3x3 convolutions with 512 kernels plus ReLU; the size remains 14x14x512;
step 411: performing 2x2 max pooling; the size becomes 7x7x512;
step 412: performing full connection plus ReLU with two layers of 1x1x4096, one layer of 1x1x50 and one layer of 1x1x4;
step 413: outputting the measurement set N = [N1, N2, N3, N4] through softmax.
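The sizes in steps 402-411 follow a VGG-16-style pattern: each 3x3 convolution (stride 1, padding 1) preserves the spatial size and sets the channel count, while each 2x2 max pooling halves the height and width. A small sketch that traces the shapes (the helper name is illustrative; only the channel counts are taken from the claim):

```python
def trace_shapes(h=224, w=224):
    """Trace (height, width, channels) through the layers of steps 401-413."""
    shapes = []
    # (number of 3x3 conv layers, output channels) per convolutional block
    blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
    for n_convs, out_c in blocks:
        c = out_c                # padded 3x3 conv + ReLU keeps h and w
        shapes.append((h, w, c))
        h, w = h // 2, w // 2    # 2x2 max pooling halves h and w
        shapes.append((h, w, c))
    return shapes

shapes = trace_shapes()
# shapes[0] matches step 402 and shapes[-1] matches step 411
```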
4. The method for improving road video monitoring in a bad weather environment according to claim 1, further comprising step 8: inputting the enhanced real-time image data into a road detection module.
5. The method for improving road video monitoring in bad weather environment according to claim 1, wherein the rainy day image enhancement model comprises,
step A1: acquiring any frame image from the video stream of the road under the current meteorological environment as the first frame image;
step A2: judging whether the first frame image satisfies the background brightness linear constraint; if so, proceeding to step A3; if not, repeating step A1;
step A3: extracting three consecutive frame images starting from the first frame image, and determining the coordinate pixel points of the second frame image that are contaminated by rain noise;
step A4: replacing the gray value of each contaminated pixel in the second frame image with the average of the gray values at the corresponding positions in the preceding and following frame images;
step A5: and (5) ending the image enhancement processing in rainy days.
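Step A4 can be sketched as follows (a minimal numpy sketch; it assumes the rain-noise mask for the second frame has already been obtained in step A3, and all names are illustrative):

```python
import numpy as np

def remove_rain(prev_frame, cur_frame, next_frame, rain_mask):
    """Step A4: at every pixel flagged as rain-contaminated, replace the
    middle frame's gray value with the average of the gray values at the
    same position in the preceding and following frames."""
    avg = (prev_frame.astype(float) + next_frame.astype(float)) / 2.0
    return np.where(rain_mask, avg, cur_frame.astype(float))
```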
6. The method of claim 1, wherein the foggy weather image enhancement model comprises,
step C1: processing the actual foggy-day road video stream by the dark channel prior method;
step C2: performing secondary enhancement on the image by histogram equalization;
step C3: and (5) finishing the foggy day image enhancement processing.
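A compact sketch of steps C1-C2 on a single frame follows. The patch size, the brightest-pixel estimate of atmospheric light, ω = 0.95 and the transmission floor t0 = 0.1 are conventional dark-channel-prior choices, not values given in the claim:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a local minimum filter."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    for i in range(dc.shape[0]):
        for j in range(dc.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Step C1: dark-channel-prior defogging of one float RGB frame in [0, 1]."""
    dc = dark_channel(img, patch)
    # atmospheric light: mean color of the brightest dark-channel pixels
    n = max(1, dc.size // 1000)
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate, floored at t0, then scene radiance recovery
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

def equalize(gray):
    """Step C2: histogram equalization of an 8-bit gray image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[gray].astype(np.uint8)
```

In the method of the claim, each dehazed frame would be converted to gray (or equalized per channel) and passed through `equalize` for the secondary enhancement.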
CN202111080366.0A 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment Active CN113792793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111080366.0A CN113792793B (en) 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment

Publications (2)

Publication Number Publication Date
CN113792793A CN113792793A (en) 2021-12-14
CN113792793B true CN113792793B (en) 2024-01-23

Family

ID=78878426


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346538A (en) * 2014-11-26 2015-02-11 中国测绘科学研究院 Earthquake hazard evaluation method based on control of three disaster factors
KR20150081906A (en) * 2014-01-07 2015-07-15 한국도로공사 Taking a photograph system when bumped by car and method for controlling thereof
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
CN112308799A (en) * 2020-11-05 2021-02-02 山东交通学院 Offshore road complex environment visibility optimization screen display method based on multiple sensors
CN112330558A (en) * 2020-11-05 2021-02-05 山东交通学院 Road image recovery early warning system and method based on foggy weather environment perception


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant