CN109086682B - Intelligent video black smoke vehicle detection method based on multi-feature fusion - Google Patents

Intelligent video black smoke vehicle detection method based on multi-feature fusion

Info

Publication number
CN109086682B
Authority
CN
China
Prior art keywords
vehicle
black smoke
tail
calculating
following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810754422.6A
Other languages
Chinese (zh)
Other versions
CN109086682A (en)
Inventor
路小波
陶焕杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810754422.6A priority Critical patent/CN109086682B/en
Publication of CN109086682A publication Critical patent/CN109086682A/en
Application granted granted Critical
Publication of CN109086682B publication Critical patent/CN109086682B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an intelligent video black smoke vehicle detection method based on multi-feature fusion, which comprises the following steps: (1) extracting moving targets from road surveillance video with a foreground detection algorithm and identifying vehicle targets; (2) detecting the position of the vehicle tail using integral projection and filtering techniques; (3) extracting statistical features, frequency-domain features and several hand-crafted features from the area behind the vehicle tail and fusing them into a feature vector; (4) classifying the extracted feature vectors with a BP network classifier and identifying black smoke frames, so as to further identify black smoke vehicles. The invention improves robustness and detects black smoke vehicles more effectively.

Description

Intelligent video black smoke vehicle detection method based on multi-feature fusion
Technical Field
The invention relates to the technical field of smoke and fire detection, in particular to an intelligent video black smoke vehicle detection method based on multi-feature fusion.
Background
Construction of regional motor-vehicle emission monitoring platforms is accelerating, with heavy-duty diesel vehicles and high-emission vehicles as the main targets. Such vehicles typically emit thick black smoke in their exhaust and are commonly referred to as black smoke vehicles. The black exhaust they discharge not only pollutes the air but also harms human health. It is therefore highly worthwhile to study how to detect black smoke vehicles effectively.
Current methods of detecting black smoke cars can be divided into three major categories:
(1) Conventional methods, such as public reporting, periodic road inspection and manual monitoring of night-patrol video. These methods consume a great deal of manpower, and with the rapid growth of vehicle ownership and increasingly busy traffic their efficiency is very low;
(2) Semi-intelligent methods, such as installing vehicle exhaust analyzers or sensor-based detection. These improve detection efficiency to some extent and reduce black smoke pollution, but purchasing and maintaining the equipment requires substantial financial support, and fitting every vehicle with an exhaust analyzer is difficult to implement;
(3) Intelligent video surveillance methods, which use computer vision to detect black smoke vehicles automatically from massive amounts of road surveillance video. Such methods are a form of remote monitoring, do not obstruct traffic, allow around-the-clock online monitoring, suit various road environments such as two-lane and multi-lane roads, are easy to install and to deploy at large scale across urban roads, and thus make it easier to build an online monitoring network for high-pollution black smoke vehicles and to improve enforcement efficiency. However, such methods are still at an early stage of research.
The invention provides an intelligent video surveillance method that fully considers the practical characteristics of the black smoke vehicle detection problem: it detects the vehicle tail using integral projection and filtering techniques to lock onto the candidate region more accurately, and it fuses the statistical features, frequency-domain features and several hand-crafted features of the area behind the vehicle tail, further improving robustness and detecting black smoke vehicles more effectively.
Disclosure of Invention
The invention aims to solve the technical problem of providing an intelligent video black smoke vehicle detection method based on multi-feature fusion, which can improve robustness and detect black smoke vehicles more effectively.
In order to solve the technical problem, the invention provides an intelligent video black smoke vehicle detection method based on multi-feature fusion, which comprises the following steps:
(1) extracting a moving target from the road monitoring video by using a foreground detection algorithm, and identifying a vehicle target;
(2) detecting the position of the tail of the vehicle by utilizing integral projection and filtering technology;
(3) extracting statistical features, frequency-domain features and several hand-crafted features from the area behind the vehicle tail, and fusing them into a feature vector;
(4) classifying the extracted feature vectors with a BP network classifier and identifying black smoke frames, so as to further identify black smoke vehicles.
Preferably, the foreground detection algorithm in step (1) includes the following steps:
(11) initialize the background I_back(t) as the average of the N initialization frames,
I_back(t) = (1/N) · Σ_{i=1}^{N} I(i)
where I(t) denotes the t-th frame image and N is the number of frames used to initialize the background;
(12) calculate the foreground object I_fore(t) using the following formulas,
β_t = mean(|I(t) - I_back(t)|)
P = threshold(|I(t) - I_back(t)|, β_t + ε)
I_fore(t) = dilate(erode(P))
where threshold(I, β_t + ε) is a binarization operation that uses β_t + ε as the threshold, mean(I) computes the average of all pixels of image I, and erode(I) and dilate(I) are the morphological erosion and dilation operations, respectively;
(13) update the background model using the following equation,
[formula image not reproduced]
where the threshold α is an adjustment coefficient that controls the background accuracy;
(14) go to step (12) to calculate I_fore(t+1).
Preferably, identifying the vehicle target in step (1) means that a moving target is regarded as a vehicle target when the following two criteria are both satisfied:
Rule 1: the area of the moving target is larger than a given threshold;
Rule 2: the aspect ratio of the moving target's circumscribed rectangular frame lies within a given range.
Preferably, the step (2) of detecting the position of the tail of the vehicle by using the integral projection and filtering technology comprises the following steps:
(21) calculate the horizontal integral projection E_1(x) of the vehicle target image I_obj, i.e.
E_1(x) = norm( Σ_{y=1}^{w} I_obj(x, y) )
where I_obj(x, y) is the pixel value of the vehicle target image at point (x, y), w is the width of the vehicle target image, and norm() denotes a normalization operation;
(22) randomly filter the vehicle target image and calculate the horizontal integral projection of the filtered image, i.e.
E_2(x) = norm( Σ_{y=1}^{w} randfilt(I_obj)(x, y) )
where randfilt() denotes the random filtering operation;
(23) perform a weighted fusion of the horizontal integral projection curves E_1(x) and E_2(x), i.e.
F(x) = λ_1·E_1(x) + λ_2·E_2(x), with λ_1 + λ_2 = 1
where λ_1 and λ_2 are the weight coefficients of E_1(x) and E_2(x), respectively;
(24) calculate the position coordinate x_rear of the vehicle tail in one of the following two ways,
[formula images not reproduced]
where Δx is a parameter used in calculating the vehicle tail coordinate.
Preferably, extracting the texture features of the area behind the vehicle tail in step (3) comprises the following steps:
(31) determine the area I_rear behind the detected vehicle tail position; this area takes the vehicle tail as its starting line, extends 60 pixels backwards, and its width is set to the width of the vehicle target;
(32) calculate the gray-level co-occurrence matrix P of region I_rear using the following equations,
[formula images not reproduced]
where P(i, j, d, θ) is the value of the co-occurrence matrix P at position (i, j) for direction θ and pixel distance d, w and h are the width and height of the vehicle target image, respectively, and round() is the rounding function;
(33) normalize the gray-level co-occurrence matrix P to obtain the normalized matrix P̂;
(34) compute a series of statistical features based on the gray-level co-occurrence matrix, namely:
Feature 1, Angular Second Moment (ASM), where ASM(d, θ) denotes the feature for angle θ and distance d,
ASM(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ)²
where L × L is the size of the normalized gray-level co-occurrence matrix P̂;
Feature 2, Entropy (ENT), where ENT(d, θ) denotes the feature for angle θ and distance d,
ENT(d, θ) = -Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) · ln P̂(i, j, d, θ)
Feature 3, Contrast (CON), where CON(d, θ) denotes the feature for angle θ and distance d,
CON(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - j)² · P̂(i, j, d, θ)
Feature 4, Correlation (COR), where COR(d, θ) denotes the feature for angle θ and distance d,
COR(d, θ) = ( Σ_{i=1}^{L} Σ_{j=1}^{L} i·j·P̂(i, j, d, θ) - μ_x·μ_y ) / (σ_x·σ_y)
μ_x = Σ_{i=1}^{L} i · Σ_{j=1}^{L} P̂(i, j, d, θ)
μ_y = Σ_{j=1}^{L} j · Σ_{i=1}^{L} P̂(i, j, d, θ)
σ_x² = Σ_{i=1}^{L} (i - μ_x)² · Σ_{j=1}^{L} P̂(i, j, d, θ)
σ_y² = Σ_{j=1}^{L} (j - μ_y)² · Σ_{i=1}^{L} P̂(i, j, d, θ)
Feature 5, Inverse Difference Moment (IDM), where IDM(d, θ) denotes the feature for angle θ and distance d,
IDM(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) / (1 + (i - j)²)
(35) obtain different normalized gray-level co-occurrence matrices P̂ using the four directions θ = 0°, 45°, 90°, 135° and the two pixel distances d = 2, 3, calculate the five statistics ASM, ENT, CON, COR and IDM for each co-occurrence matrix, and concatenate these five statistics over all directions and distances to obtain the statistical features based on the gray-level co-occurrence matrix.
Preferably, extracting the frequency-domain features of the area behind the vehicle tail in step (3) comprises the following steps:
(36) divide the area I_rear behind the vehicle tail into 1 × 2 blocks, perform a two-level wavelet decomposition on each block, and denote the horizontal, vertical and diagonal wavelet coefficient images of the i-th level (i = 1, 2) by H_i, V_i and D_i;
(37) calculate the wavelet energies of the i-th level (i = 1, 2) of the k-th block (k = 1, 2) as follows,
E_H(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} H_i(x, y)²
E_V(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} V_i(x, y)²
E_D(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} D_i(x, y)²
where w_i and h_i denote the width and height of H_i;
(38) concatenate the frequency-domain features obtained in step (37) for identifying the black smoke vehicle.
Preferably, extracting several hand-crafted features of the area behind the vehicle tail in step (3) includes:
(1) matching degree: calculate the matching degree F_match between the area I_rear behind the vehicle tail and the corresponding background region I_rear^back, i.e.
[formula image not reproduced]
where I_rear(i, j) is the pixel value of image I_rear at position (i, j) and I_rear^back(i, j) is the pixel value of image I_rear^back at position (i, j);
(2) mean: calculate the pixel mean of the area I_rear behind the vehicle tail, i.e.
F_mean = (1 / N_0) · Σ_{(i,j)} I_rear(i, j)
where N_0 is the total number of pixels in region I_rear;
(3) variance: calculate the pixel variance of the area I_rear behind the vehicle tail, i.e.
F_var = (1 / N_0) · Σ_{(i,j)} (I_rear(i, j) - F_mean)²
(4) ratio: calculate the ratio feature F_ratio as follows,
[formula image not reproduced]
where h is the distance from the vehicle tail to the bottom of the circumscribed rectangular frame of the vehicle target, and H is the distance from the vehicle tail to the top of the current frame image.
Preferably, identifying black smoke vehicles in step (4) comprises the following steps:
(41) classify all vehicle target pictures in the current frame image with the trained BP network classifier; if at least one vehicle target picture is identified as a black smoke vehicle picture, mark the current frame as a black smoke frame;
(42) if K frames are identified as black smoke frames within every 100 consecutive frames and K satisfies the following formula, a black smoke vehicle is considered to be present in the current video sequence,
K > α
where α is an adjustment factor that controls recall and accuracy.
The beneficial effects of the invention are: (1) it improves enforcement efficiency and overcomes the low efficiency of traditional manual monitoring of black smoke vehicles; the intelligent video surveillance method provided by the invention uses computer vision to detect black smoke vehicles automatically from massive amounts of road surveillance video, uploads the relevant video data to the environmental protection department automatically, and retains evidence such as the license plate, passing location and passing time of the black smoke vehicle; the method is a form of remote monitoring, does not obstruct traffic, allows around-the-clock online monitoring, suits various road environments such as two-lane and multi-lane roads, is easy to install and to deploy at large scale across urban roads, and therefore makes it easier to build an online monitoring network for high-pollution black smoke vehicles and improves enforcement efficiency; (2) it reduces the false alarm rate; the technical scheme detects the vehicle tail using integral projection and filtering techniques, which narrows the candidate region for black smoke identification, and it also fuses the statistical features, frequency-domain features and several hand-crafted features of the area behind the vehicle tail, which further improves robustness, lowers the false alarm rate, and avoids false detections caused by swaying leaves, moving white clouds and the like.
Drawings
Fig. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a vehicle target detected by the present invention.
Fig. 3(a) is a schematic diagram of a non-black-smoke vehicle detected by the present invention and its fused projection curve F(x).
Fig. 3(b) is a schematic diagram of a black smoke vehicle detected by the present invention and its fused projection curve F(x).
Fig. 4 is a schematic diagram of the area behind the vehicle tail position in the present invention.
Fig. 5 is a schematic diagram of the matching-degree feature among the hand-crafted features of the present invention.
Fig. 6 is a schematic diagram of the ratio feature among the hand-crafted features of the present invention.
Detailed Description
The invention provides an intelligent video black smoke vehicle detection method based on multi-feature fusion, which is shown in a flow chart of fig. 1 and specifically comprises the following steps:
step 1: extracting a moving target from the road monitoring video by using a foreground detection algorithm, and identifying a vehicle target;
step 2: detecting the position of the tail of the vehicle by utilizing integral projection and filtering technology;
step 3: extracting statistical features, frequency-domain features and several hand-crafted features from the area behind the vehicle tail, and fusing them into a feature vector;
step 4: classifying the extracted feature vectors with a BP network classifier and identifying black smoke frames, so as to further identify black smoke vehicles.
The foreground detection algorithm in the step 1 adopts the following process:
step 1.1: initialize the background I_back(t) as the average of the N initialization frames,
I_back(t) = (1/N) · Σ_{i=1}^{N} I(i)
where I(t) denotes the t-th frame image and N is the number of frames used to initialize the background;
step 1.2: calculate the foreground object I_fore(t) using the following formulas,
β_t = mean(|I(t) - I_back(t)|)
P = threshold(|I(t) - I_back(t)|, β_t + ε)
I_fore(t) = dilate(erode(P))
where threshold(I, β_t + ε) is a binarization operation that uses β_t + ε as the threshold, mean(I) computes the average of all pixels of image I, and erode(I) and dilate(I) are the morphological erosion and dilation operations, respectively;
step 1.3: update the background model using the following equation,
[formula image not reproduced]
where the threshold α is an adjustment coefficient that controls the background accuracy;
step 1.4: go to step 1.2 to calculate I_fore(t+1).
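For illustration, a minimal NumPy/OpenCV sketch of this grayscale background-subtraction loop is given below; the values of N, ε and the 3 × 3 structuring element are assumptions, and because the background-update formula above is shown only as an image, a plain running-average update stands in for it.

```python
import cv2
import numpy as np

def init_background(frames):
    """Step 1.1: initialize the background as the mean of the first N grayscale frames."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0)

def foreground_mask(frame, background, eps=10.0, kernel_size=3):
    """Step 1.2: threshold |I(t) - I_back(t)| at (mean difference + eps),
    then clean the mask with morphological erosion followed by dilation."""
    diff = np.abs(frame.astype(np.float32) - background)
    beta_t = float(diff.mean())                        # beta_t = mean(|I(t) - I_back(t)|)
    mask = (diff > beta_t + eps).astype(np.uint8) * 255
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(cv2.erode(mask, kernel), kernel)

def update_background(background, frame, alpha=0.05):
    """Step 1.3 (assumed form): simple running-average update; the patent's exact
    update formula is not reproduced above, so this is only a stand-in."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float32)
```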
Identifying the vehicle target in step 1 means that a moving target is regarded as a vehicle target when the following two criteria are both satisfied:
Rule 1: the area of the moving target is larger than a given threshold;
Rule 2: the aspect ratio of the moving target's circumscribed rectangular frame lies within a given range.
Fig. 2 shows the result of vehicle object detection for a certain frame.
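The two screening rules can be sketched by filtering the connected components of the foreground mask by area and by bounding-box aspect ratio; the concrete thresholds below are illustrative assumptions, not values from the patent.

```python
import cv2

def find_vehicle_targets(mask, min_area=2000.0, ratio_range=(0.5, 3.0)):
    """Keep moving objects whose area exceeds a threshold (rule 1) and whose
    circumscribed-rectangle aspect ratio lies within a given range (rule 2)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue                                   # rule 1: area too small
        x, y, w, h = cv2.boundingRect(contour)
        if ratio_range[0] <= w / float(h) <= ratio_range[1]:
            targets.append((x, y, w, h))               # rule 2: aspect ratio in range
    return targets
```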
The step 2 of detecting the position of the tail of the vehicle by adopting the integral projection and filtering technology comprises the following steps:
step 2.1: calculate the horizontal integral projection E_1(x) of the vehicle target image I_obj, i.e.
E_1(x) = norm( Σ_{y=1}^{w} I_obj(x, y) )
where I_obj(x, y) is the pixel value of the vehicle target image at point (x, y), w is the width of the vehicle target image, and norm() denotes a normalization operation;
step 2.2: randomly filter the vehicle target image and calculate the horizontal integral projection of the filtered image, i.e.
E_2(x) = norm( Σ_{y=1}^{w} randfilt(I_obj)(x, y) )
where randfilt() denotes the random filtering operation;
step 2.3: perform a weighted fusion of the horizontal integral projection curves E_1(x) and E_2(x), i.e.
F(x) = λ_1·E_1(x) + λ_2·E_2(x), with λ_1 + λ_2 = 1
where λ_1 and λ_2 are the weight coefficients of E_1(x) and E_2(x), respectively;
Fig. 3(a) shows a non-black-smoke vehicle and its fused projection curve F(x), and Fig. 3(b) shows a black smoke vehicle and its fused projection curve F(x); it can be seen that the abscissa of the trough on the right-hand side of the curve corresponds exactly to the vertical coordinate of the vehicle tail.
step 2.4: calculate the position coordinate x_rear of the vehicle tail in one of the following two ways,
[formula images not reproduced]
where Δx is a parameter used in calculating the vehicle tail coordinate.
Extracting the texture features of the area behind the vehicle tail in step 3 comprises the following steps:
step 3.1: determine the area I_rear behind the detected vehicle tail position; this area takes the vehicle tail as its starting line, extends 60 pixels backwards, and its width is set to the width of the vehicle target;
step 3.2: calculate the gray-level co-occurrence matrix P of region I_rear using the following equations,
[formula images not reproduced]
where P(i, j, d, θ) is the value of the co-occurrence matrix P at position (i, j) for direction θ and pixel distance d, w and h are the width and height of the vehicle target image, respectively, and round() is the rounding function;
step 3.3: normalize the gray-level co-occurrence matrix P to obtain the normalized matrix P̂;
Step 3.4: computing a series of statistical features based on gray level co-occurrence matrices, i.e.
feature-Angular Second Moment (ASM), which is the feature that ASM (d, theta) represents the angle theta and the distance d,
Figure BDA0001726375120000093
wherein L × L represents a normalized gray level co-occurrence matrix
Figure BDA0001726375120000094
The size of (d);
Feature 2, Entropy (ENT), where ENT(d, θ) denotes the feature for angle θ and distance d,
ENT(d, θ) = -Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) · ln P̂(i, j, d, θ)
Feature 3, Contrast (CON), where CON(d, θ) denotes the feature for angle θ and distance d,
CON(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - j)² · P̂(i, j, d, θ)
Feature 4, Correlation (COR), where COR(d, θ) denotes the feature for angle θ and distance d,
COR(d, θ) = ( Σ_{i=1}^{L} Σ_{j=1}^{L} i·j·P̂(i, j, d, θ) - μ_x·μ_y ) / (σ_x·σ_y)
μ_x = Σ_{i=1}^{L} i · Σ_{j=1}^{L} P̂(i, j, d, θ)
μ_y = Σ_{j=1}^{L} j · Σ_{i=1}^{L} P̂(i, j, d, θ)
σ_x² = Σ_{i=1}^{L} (i - μ_x)² · Σ_{j=1}^{L} P̂(i, j, d, θ)
σ_y² = Σ_{j=1}^{L} (j - μ_y)² · Σ_{i=1}^{L} P̂(i, j, d, θ)
Feature 5, Inverse Difference Moment (IDM), where IDM(d, θ) denotes the feature for angle θ and distance d,
IDM(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) / (1 + (i - j)²)
step 3.5: obtain different normalized gray-level co-occurrence matrices P̂ using the four directions θ = 0°, 45°, 90°, 135° and the two pixel distances d = 2, 3, calculate the five statistics ASM, ENT, CON, COR and IDM for each co-occurrence matrix, and concatenate these five statistics over all directions and distances to obtain the statistical features based on the gray-level co-occurrence matrix.
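Steps 3.2 to 3.5 can be sketched with scikit-image (>= 0.19) as below. Entropy is computed by hand because graycoprops does not expose it, and quantizing the region to a small number of gray levels is an assumption, since the patent's exact co-occurrence construction is shown only as an image above.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_statistical_features(rear_region, levels=16):
    """Compute ASM, ENT, CON, COR and IDM for 4 directions (0, 45, 90, 135 degrees)
    and 2 pixel distances (2, 3), then concatenate everything into one vector."""
    # Quantize the 8-bit region to `levels` gray levels (assumed preprocessing).
    img = np.round(rear_region.astype(np.float32) * (levels - 1) / 255.0).astype(np.uint8)
    glcm = graycomatrix(img, distances=[2, 3],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, normed=True)
    parts = [graycoprops(glcm, prop).ravel()            # ASM, CON, COR, IDM (homogeneity)
             for prop in ("ASM", "contrast", "correlation", "homogeneity")]
    entropy = -np.sum(glcm * np.log(glcm + 1e-12), axis=(0, 1)).ravel()  # ENT per (d, theta)
    parts.append(entropy)
    return np.concatenate(parts)                         # 5 statistics x 2 distances x 4 angles
```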
Extracting the frequency-domain features of the area behind the vehicle tail in step 3 comprises the following steps:
step 3.6: divide the area I_rear behind the vehicle tail into 1 × 2 blocks, perform a two-level wavelet decomposition on each block, and denote the horizontal, vertical and diagonal wavelet coefficient images of the i-th level (i = 1, 2) by H_i, V_i and D_i;
step 3.7: calculate the wavelet energies of the i-th level (i = 1, 2) of the k-th block (k = 1, 2) as follows,
E_H(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} H_i(x, y)²
E_V(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} V_i(x, y)²
E_D(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} D_i(x, y)²
where w_i and h_i denote the width and height of H_i;
step 3.8: concatenate the frequency-domain features obtained in step 3.7 for identifying the black smoke vehicle.
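Steps 3.6 to 3.8 can be sketched with PyWavelets as below; treating the 1 × 2 split as left/right halves, the choice of the 'haar' wavelet, and taking each band's energy as its mean squared coefficient (matching the 1/(w_i·h_i) normalization above) are assumptions.

```python
import numpy as np
import pywt

def wavelet_energy_features(rear_region):
    """Split the rear region into a 1 x 2 grid, run a two-level wavelet decomposition
    on each block, and collect the energy of every H/V/D detail band."""
    width = rear_region.shape[1]
    blocks = [rear_region[:, : width // 2], rear_region[:, width // 2:]]
    feats = []
    for block in blocks:
        # wavedec2 returns [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)] for level=2.
        coeffs = pywt.wavedec2(block.astype(np.float32), "haar", level=2)
        for c_h, c_v, c_d in coeffs[1:]:
            for band in (c_h, c_v, c_d):
                feats.append(float(np.mean(band ** 2)))   # energy of one detail band
    return np.asarray(feats)                               # 2 blocks x 2 levels x 3 bands = 12 values
```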
Extracting several hand-crafted features of the area behind the vehicle tail in step 3 includes:
(1) matching degree: calculate the matching degree F_match between the area I_rear behind the vehicle tail and the corresponding background region I_rear^back, i.e.
[formula image not reproduced]
where I_rear(i, j) is the pixel value of image I_rear at position (i, j) and I_rear^back(i, j) is the pixel value of image I_rear^back at position (i, j);
Fig. 5 shows a schematic diagram of the matching-degree feature among the hand-crafted features.
(2) mean: calculate the pixel mean of the area I_rear behind the vehicle tail, i.e.
F_mean = (1 / N_0) · Σ_{(i,j)} I_rear(i, j)
where N_0 is the total number of pixels in region I_rear;
(3) variance: calculate the pixel variance of the area I_rear behind the vehicle tail, i.e.
F_var = (1 / N_0) · Σ_{(i,j)} (I_rear(i, j) - F_mean)²
(4) ratio: calculate the ratio feature F_ratio as follows,
[formula image not reproduced]
where h is the distance from the vehicle tail to the bottom of the circumscribed rectangular frame of the vehicle target, and H is the distance from the vehicle tail to the top of the current frame image;
Fig. 6 shows a schematic diagram of the ratio feature among the hand-crafted features.
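The four hand-crafted features can be sketched as below; because the F_match and F_ratio formulas above are shown only as images, the normalized mean absolute difference used for the matching degree and the h/H orientation of the ratio are assumptions.

```python
import numpy as np

def handcrafted_features(rear_region, background_rear, tail_row, box_bottom_row, frame_top_row=0):
    """Matching degree against the corresponding background patch, pixel mean and
    variance of I_rear, and a ratio feature built from the tail position."""
    rear = rear_region.astype(np.float32)
    back = background_rear.astype(np.float32)
    f_mean = float(rear.mean())                                   # F_mean
    f_var = float(rear.var())                                     # F_var
    f_match = float(np.mean(np.abs(rear - back)) / 255.0)         # assumed F_match form
    h_dist = abs(box_bottom_row - tail_row)                       # tail to bottom of vehicle box
    H_dist = abs(tail_row - frame_top_row) + 1e-9                 # tail to top of the frame
    f_ratio = h_dist / H_dist                                     # assumed F_ratio orientation
    return np.array([f_match, f_mean, f_var, f_ratio], dtype=np.float32)
```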
Identifying the black smoke vehicle in step 4 comprises the following steps:
step 4.1: classify all vehicle target pictures in the current frame image with the trained BP network classifier; if at least one vehicle target picture is identified as a black smoke vehicle picture, mark the current frame as a black smoke frame;
step 4.2: if K frames are identified as black smoke frames within every 100 consecutive frames and K satisfies the following formula, a black smoke vehicle is considered to be present in the current video sequence,
K > α
where α is an adjustment factor that controls recall and accuracy.
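A BP network is a multilayer perceptron trained by back-propagation, so steps 4.1 and 4.2 can be sketched with scikit-learn's MLPClassifier as below; the hidden-layer size, the 100-frame window and the value of α are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_bp_classifier(features, labels):
    """Train a BP-style network (an MLP fitted by back-propagation) on fused feature
    vectors; label 1 means a black smoke picture, 0 means a normal picture."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(features, labels)
    return clf

def video_contains_black_smoke_vehicle(clf, per_frame_features, alpha=5, window=100):
    """Step 4.1: a frame is a black smoke frame if any vehicle picture in it is classified
    as black smoke. Step 4.2: report a black smoke vehicle if more than alpha such frames
    fall inside any window of consecutive frames."""
    smoke_frames = []
    for vehicle_feats in per_frame_features:           # one list of feature vectors per frame
        if len(vehicle_feats) == 0:
            smoke_frames.append(0)
        else:
            preds = clf.predict(np.asarray(vehicle_feats))
            smoke_frames.append(int((preds == 1).any()))
    for start in range(max(1, len(smoke_frames) - window + 1)):
        if sum(smoke_frames[start:start + window]) > alpha:
            return True
    return False
```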

Claims (5)

1. An intelligent video black smoke vehicle detection method based on multi-feature fusion is characterized by comprising the following steps:
(1) extracting a moving target from the road monitoring video by using a foreground detection algorithm, and identifying a vehicle target;
(2) detecting the position of the tail of the vehicle by utilizing integral projection and filtering technology;
(3) extracting statistical features, frequency-domain features and several hand-crafted features from the area behind the vehicle tail, and fusing them into a feature vector; extracting the statistical features of the area behind the vehicle tail comprises the following steps:
(31) determine the area I_rear behind the detected vehicle tail position; this area takes the vehicle tail as its starting line, extends 60 pixels backwards, and its width is set to the width of the vehicle target;
(32) calculate the gray-level co-occurrence matrix P of region I_rear using the following equations,
[formula images not reproduced]
where P(i, j, d, θ) is the value of the co-occurrence matrix P at position (i, j) for direction θ and pixel distance d, w and h are the width and height of the vehicle target image, respectively, and round() is the rounding function;
(33) normalize the gray-level co-occurrence matrix P to obtain the normalized matrix P̂;
(34) compute a series of statistical features based on the gray-level co-occurrence matrix, namely:
Feature 1, ASM, where ASM(d, θ) denotes the feature for angle θ and distance d,
ASM(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ)²
where L × L is the size of the normalized gray-level co-occurrence matrix P̂;
Feature 2, ENT, where ENT(d, θ) denotes the feature for angle θ and distance d,
ENT(d, θ) = -Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) · ln P̂(i, j, d, θ)
Feature 3, CON, where CON(d, θ) denotes the feature for angle θ and distance d,
CON(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} (i - j)² · P̂(i, j, d, θ)
Feature 4, COR, where COR(d, θ) denotes the feature for angle θ and distance d,
COR(d, θ) = ( Σ_{i=1}^{L} Σ_{j=1}^{L} i·j·P̂(i, j, d, θ) - μ_x·μ_y ) / (σ_x·σ_y)
μ_x = Σ_{i=1}^{L} i · Σ_{j=1}^{L} P̂(i, j, d, θ)
μ_y = Σ_{j=1}^{L} j · Σ_{i=1}^{L} P̂(i, j, d, θ)
σ_x² = Σ_{i=1}^{L} (i - μ_x)² · Σ_{j=1}^{L} P̂(i, j, d, θ)
σ_y² = Σ_{j=1}^{L} (j - μ_y)² · Σ_{i=1}^{L} P̂(i, j, d, θ)
Feature 5, IDM, where IDM(d, θ) denotes the feature for angle θ and distance d,
IDM(d, θ) = Σ_{i=1}^{L} Σ_{j=1}^{L} P̂(i, j, d, θ) / (1 + (i - j)²)
(35) obtain different normalized gray-level co-occurrence matrices P̂ using the four directions θ = 0°, 45°, 90°, 135° and the two pixel distances d = 2, 3, calculate the five statistics ASM, ENT, CON, COR and IDM for each co-occurrence matrix, and concatenate these five statistics over all directions and distances to obtain the statistical features based on the gray-level co-occurrence matrix;
extracting the frequency-domain features of the area behind the vehicle tail comprises the following steps:
(36) divide the area I_rear behind the vehicle tail into 1 × 2 blocks, perform a two-level wavelet decomposition on each block, and denote the horizontal, vertical and diagonal wavelet coefficient images of the i-th level by H_i, V_i and D_i, where i = 1, 2;
(37) calculate the wavelet energies of the i-th level of the k-th block as follows, where i = 1, 2 and k = 1, 2,
E_H(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} H_i(x, y)²
E_V(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} V_i(x, y)²
E_D(i, k) = (1 / (w_i·h_i)) · Σ_{x,y} D_i(x, y)²
where w_i and h_i denote the width and height of H_i;
(38) concatenate the frequency-domain features obtained in step (37) for identifying the black smoke vehicle;
extracting several hand-crafted features of the area behind the vehicle tail includes:
(1) matching degree: calculate the matching degree F_match between the area I_rear behind the vehicle tail and the corresponding background region I_rear^back, i.e.
[formula image not reproduced]
where I_rear(i, j) is the pixel value of image I_rear at position (i, j) and I_rear^back(i, j) is the pixel value of image I_rear^back at position (i, j);
(2) mean: calculate the pixel mean of the area I_rear behind the vehicle tail, i.e.
F_mean = (1 / N_0) · Σ_{(i,j)} I_rear(i, j)
where N_0 is the total number of pixels in region I_rear;
(3) variance: calculate the pixel variance of the area I_rear behind the vehicle tail, i.e.
F_var = (1 / N_0) · Σ_{(i,j)} (I_rear(i, j) - F_mean)²
(4) ratio: calculate the ratio feature F_ratio as follows,
[formula image not reproduced]
where h is the distance from the vehicle tail to the bottom of the circumscribed rectangular frame of the vehicle target, and H is the distance from the vehicle tail to the top of the current frame image;
(4) classifying the extracted feature vectors with a BP network classifier and identifying black smoke frames, so as to further identify black smoke vehicles.
2. The intelligent video black smoke vehicle detection method based on multi-feature fusion as claimed in claim 1, wherein the foreground detection algorithm in step (1) comprises the following steps:
(11) initialize the background I_back(t) as the average of the N initialization frames,
I_back(t) = (1/N) · Σ_{i=1}^{N} I(i)
where I(t) denotes the t-th frame image and N is the number of frames used to initialize the background;
(12) calculate the foreground object I_fore(t) using the following formulas,
β_t = mean(|I(t) - I_back(t)|)
P = threshold(|I(t) - I_back(t)|, β_t + ε)
I_fore(t) = dilate(erode(P))
where threshold(I, β_t + ε) is a binarization operation that uses β_t + ε as the threshold, mean(I) computes the average of all pixels of image I, and erode(I) and dilate(I) are the morphological erosion and dilation operations, respectively;
(13) update the background model using the following equation,
[formula image not reproduced]
where the threshold α is an adjustment coefficient that controls the background accuracy;
(14) go to step (12) to calculate I_fore(t+1).
3. The intelligent video black smoke vehicle detection method based on multi-feature fusion as claimed in claim 1, wherein identifying the vehicle target in step (1) means that a moving target is regarded as a vehicle target when the following two criteria are both satisfied:
Rule 1: the area of the moving target is larger than a given threshold;
Rule 2: the aspect ratio of the moving target's circumscribed rectangular frame lies within a given range.
4. The intelligent video black smoke vehicle detection method based on multi-feature fusion as claimed in claim 1, wherein the step (2) of detecting the position of the tail of the vehicle by adopting integral projection and filtering technology comprises the following steps:
(21) calculate the horizontal integral projection E_1(x) of the vehicle target image I_obj, i.e.
E_1(x) = norm( Σ_{y=1}^{w} I_obj(x, y) )
where I_obj(x, y) is the pixel value of the vehicle target image at point (x, y), w is the width of the vehicle target image, and norm() denotes a normalization operation;
(22) randomly filter the vehicle target image and calculate the horizontal integral projection of the filtered image, i.e.
E_2(x) = norm( Σ_{y=1}^{w} randfilt(I_obj)(x, y) )
where randfilt() denotes the random filtering operation;
(23) perform a weighted fusion of the horizontal integral projection curves E_1(x) and E_2(x), i.e.
F(x) = λ_1·E_1(x) + λ_2·E_2(x), with λ_1 + λ_2 = 1
where λ_1 and λ_2 are the weight coefficients of E_1(x) and E_2(x), respectively;
(24) calculate the position coordinate x_rear of the vehicle tail in one of the following two ways,
[formula images not reproduced]
where Δx is a parameter used in calculating the vehicle tail coordinate.
5. The intelligent video black smoke vehicle detection method based on multi-feature fusion as claimed in claim 1, wherein identifying black smoke vehicles in step (4) comprises the following steps:
(41) classify all vehicle target pictures in the current frame image with the trained BP network classifier; if at least one vehicle target picture is identified as a black smoke vehicle picture, mark the current frame as a black smoke frame;
(42) if K frames are identified as black smoke frames within every 100 consecutive frames and K satisfies the following formula, a black smoke vehicle is considered to be present in the current video sequence,
K > α
where α is an adjustment factor that controls recall and accuracy.
CN201810754422.6A 2018-07-11 2018-07-11 Intelligent video black smoke vehicle detection method based on multi-feature fusion Active CN109086682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810754422.6A CN109086682B (en) 2018-07-11 2018-07-11 Intelligent video black smoke vehicle detection method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810754422.6A CN109086682B (en) 2018-07-11 2018-07-11 Intelligent video black smoke vehicle detection method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN109086682A (en) 2018-12-25
CN109086682B (en) 2021-07-27

Family

ID=64837559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810754422.6A Active CN109086682B (en) 2018-07-11 2018-07-11 Intelligent video black smoke vehicle detection method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN109086682B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580401A (en) * 2019-09-29 2021-03-30 杭州海康威视数字技术股份有限公司 Vehicle detection method and device
CN112699801B (en) * 2020-12-30 2022-11-11 上海船舶电子设备研究所(中国船舶重工集团公司第七二六研究所) Fire identification method and system based on video image
CN112784709B (en) * 2021-01-06 2023-06-20 华南理工大学 Efficient detection and identification method for remote multiple targets
CN113378629A (en) * 2021-04-27 2021-09-10 阿里云计算有限公司 Method and device for detecting abnormal vehicle in smoke discharge
CN113657305B (en) * 2021-08-20 2023-08-04 深圳技术大学 Video-based intelligent detection method for black smoke vehicle and ringeman blackness level

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2312475A1 (en) * 2004-07-09 2011-04-20 Nippon Telegraph and Telephone Corporation Sound signal detection and image signal detection
CN103116746A (en) * 2013-03-08 2013-05-22 中国科学技术大学 Video flame detecting method based on multi-feature fusion technology
CN103871077A (en) * 2014-03-06 2014-06-18 中国人民解放军国防科学技术大学 Extraction method for key frame in road vehicle monitoring video
CN106022231A (en) * 2016-05-11 2016-10-12 浙江理工大学 Multi-feature-fusion-based technical method for rapid detection of pedestrian

Also Published As

Publication number Publication date
CN109086682A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109086682B (en) Intelligent video black smoke vehicle detection method based on multi-feature fusion
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN109670404B (en) Road ponding image detection early warning method based on hybrid model
CN109829403B (en) Vehicle anti-collision early warning method and system based on deep learning
CN108596129B (en) Vehicle line-crossing detection method based on intelligent video analysis technology
CN103093249B (en) A kind of taxi identification method based on HD video and system
CN103077423B (en) To run condition detection method based on crowd's quantity survey of video flowing, local crowd massing situation and crowd
CN109190455B (en) Black smoke vehicle identification method based on Gaussian mixture and autoregressive moving average model
CN106447674B (en) Background removing method for video
CN113850123A (en) Video-based road monitoring method and device, storage medium and monitoring system
CN109191495B (en) Black smoke vehicle detection method based on self-organizing background difference model and multi-feature fusion
CN109191492B (en) Intelligent video black smoke vehicle detection method based on contour analysis
CN113191339B (en) Track foreign matter intrusion monitoring method and system based on video analysis
CN111597905A (en) Highway tunnel parking detection method based on video technology
CN110852164A (en) YOLOv 3-based method and system for automatically detecting illegal building
CN109271904B (en) Black smoke vehicle detection method based on pixel adaptive segmentation and Bayesian model
CN108520528B (en) Mobile vehicle tracking method based on improved difference threshold and displacement matching model
CN112435276A (en) Vehicle tracking method and device, intelligent terminal and storage medium
CN114463684A (en) Urban highway network-oriented blockage detection method
CN111783700A (en) Automatic recognition early warning method and system for road foreign matters
CN115909223A (en) Method and system for matching WIM system information with monitoring video data
CN109919068B (en) Real-time monitoring method for adapting to crowd flow in dense scene based on video analysis
CN109325426B (en) Black smoke vehicle detection method based on three orthogonal planes time-space characteristics
CN115223106A (en) Sprinkler detection method fusing differential video sequence and convolutional neural network
CN108960181B (en) Black smoke vehicle detection method based on multi-scale block LBP and hidden Markov model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant