CN109448397B - Group fog monitoring method based on big data - Google Patents

Group fog monitoring method based on big data

Info

Publication number
CN109448397B
CN109448397B (application CN201811380273.8A)
Authority
CN
China
Prior art keywords
fog
image
background
model
monitoring
Prior art date
Legal status
Active
Application number
CN201811380273.8A
Other languages
Chinese (zh)
Other versions
CN109448397A (en)
Inventor
冯海霞 (Feng Haixia)
李炜 (Li Wei)
张萌萌 (Zhang Mengmeng)
张萌 (Zhang Meng)
张立东 (Zhang Lidong)
杨作林 (Yang Zuolin)
沈松峰 (Shen Songfeng)
Current Assignee
Shandong Jiaotong University
Original Assignee
Shandong Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shandong Jiaotong University
Priority to CN201811380273.8A
Publication of CN109448397A
Application granted
Publication of CN109448397B

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/048 Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Emergency Alarm Devices (AREA)

Abstract

The invention discloses a group fog monitoring method based on big data, which uses big data technology to process and analyze massive video data (data from multiple cameras over continuous time periods) and establishes a time-series-based group fog monitoring method.

Description

Group fog monitoring method based on big data
Technical Field
The invention relates to a group fog monitoring method based on big data, and belongs to the technical field of traffic monitoring.
Background
Group fog is known as a "killer" of traffic safety; on expressways in particular, it readily causes serious accidents. On 15 November 2017, group fog on the Yingshang section of the Chuxin Expressway caused a multi-vehicle rear-end pileup in which 18 people died and 21 were injured. On 8 November 2017, 18 traffic accidents occurred on the Zhoukou-Taikang expressway section because of group fog. Data published on the website of the Traffic Management Bureau of the Ministry of Public Security in 2016 show that group fog occurs more than 3 times per year on 2,567 expressway sections, among which it occurs more than 10 times per year on 920 sections, such as certain sections of the Shenhai Expressway and of the Beijing-Hong Kong-Macau Expressway.
Group fog monitoring and early warning systems at home and abroad fall mainly into two types: one based on visibility-meter data and the other based on image data, with the first type predominating at present. Most U.S. states have established highway fog monitoring systems based essentially on visibility-meter data: for example, California deployed a fog warning system on a 13-mile section of Highway 99, with a variable message sign and a visibility meter every 800 meters along the road, and a fog early warning system built in Tennessee placed 9 forward-scattering visibility meters and 14 microwave radar vehicle detectors in a 5-kilometer fog detection area. Various regions in China have also attempted to establish expressway weather monitoring systems; 196 weather monitoring stations are distributed along the expressways of Anhui Province, each equipped with a visibility meter, at a spacing of 15 kilometers. The spacing of meteorological visibility equipment is too large (even on the Shanghai-Nanjing Expressway, currently the most densely instrumented, it is about 10 km), while group fog forms over a small range, so visibility-meter data cannot support group fog detection. The other type of system is built on image data, and its core technology is a fast fog detection algorithm based on image processing; current image (video) based fog detection algorithms rely mainly on a single image, mostly on the contrast extracted from it. Chen Chong et al. studied a video visibility detection method using expressway cameras and obtained good results in expressway tests; Li Bo et al. measured visibility without artificial markers using the contrast of the four neighborhoods of the image; another method monitors visibility by extracting a road-surface ROI and using the brightness variation of the road surface; Lu Bo et al. studied a visibility detection method based on the color-space characteristics of images in foggy weather. Patents such as "a road meteorological detection system based on video" applied for by Zhui et al., "a highway group fog real-time monitoring system and method based on a GIS system" applied for by Feng Haixia et al., and "a digital-camera-based group fog real-time early warning system and method" applied for by Zhang et al. are likewise based on single-image data processing.
At present, most of these monitoring systems remain at the research stage: the massive video data provided by cameras are not fully exploited, and operational use in production is rare.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a group fog monitoring method based on big data, which monitors sudden group fog through the video images of multiple cameras along a road section over continuous time periods and offers greater engineering practicability.
In order to solve the above technical problem, the invention adopts the following technical scheme: a group fog monitoring method based on big data, comprising the following steps: S01), acquiring video data from each camera of the monitored road section and preprocessing the video data; S02), establishing a dynamic time series from the image data provided by each camera, the captured images forming dynamic time-series images over time; S03), removing moving targets with a Gaussian mixture model for moving-target detection, extracting the background information of the images, and establishing a time-series model of the background image; S04), analyzing the change rule of the background image when group fog bursts, establishing a fog monitoring model based on the background-pixel time series, and interpreting the real-time data transmitted by the cameras with the fog monitoring model; S05), once one camera is interpreted as foggy, immediately starting group fog interpretation; if N1 adjacent cameras are all interpreted as foggy, starting a fog early warning; if more than N2 adjacent cameras monitor non-fog conditions, judging the event to be group fog, analyzing and predicting the flow direction of the group fog, and starting a group fog early warning; N1 and N2 are both positive integers, with N1 > N2.
Further, step S03 comprises: S31), taking the first frame f_1 of the time-series images as the initial background and representing each pixel f_1(x, y) in the image with K Gaussian models, where K ranges from 3 to 5; the mean μ_j and standard deviation σ_j of each Gaussian model, j ∈ [1, K], are determined from the distribution of the image gray-level histogram, and the gray value of each pixel can be expressed as a superposition of K Gaussian distribution functions, i.e.

P(f_i(x, y)) = Σ_{j=1}^{K} ω_j · η(μ_j, σ_j)   (formula 1)

where η(μ_j, σ_j) is the j-th Gaussian distribution and ω_j is its weight; S32), from the second frame image f_i(x, y), i > 1, estimating whether each pixel belongs to the background, i.e. judging whether formula 2 holds:

|f_i(x, y) − μ_j| ≤ 2.5σ_j   (formula 2)

if formula 2 holds, pixel f_i(x, y) is a background point, otherwise it is a foreground point, and a new Gaussian model is generated from the foreground point; S33), updating the weight of each model in the current image according to formula 3,

ω_{K,i} = (1 − α)ω_{K,i−1} + αM_{K,i}   (formula 3)

where α is the learning rate, M_{K,i} = 1 if the current point is background and M_{K,i} = 0 otherwise, ω_{K,i−1} is the model weight before the update, and ω_{K,i} is the updated model weight; S34), keeping the means and standard deviations of the Gaussian models of foreground points unchanged and updating the means and standard deviations of the Gaussian models of background points from the current image; S35), sorting all Gaussian models so that models with large weight and small standard deviation come first, and discarding models ranked after K, thereby obtaining an updated background image; S36), repeating steps S32-S35 to obtain the background image of each frame and establishing the dynamic time-series images of the background; S37), dividing the background image into 4 regions based on the time-series background images and establishing time-variation curves of the contrast of the 4 regions.
Further, the fog monitoring model based on the background-pixel time series comprehensively considers the contrast X1 of the current image, the blur degree X2 of the current image, the contrast X3 of the first frame image in the time series, and the blur degree X4 of the first frame image in the time series: when group fog bursts, the contrast X1 of the pixel values in the background image drops sharply, more than 75% of the whole current image is a blurred region, the drop from X3 to X1 exceeds 300% in 3 of the 4 regions, and less than 25% of the whole first frame image in the time series is a blurred region.
Further, the preprocessing of the video data includes: rapidly checking and verifying the image data, deleting repeated information, correcting existing errors, and unifying the image formats of all cameras.
Further, the cameras in the monitored road section are arranged in a specific direction and at specific spacings, and the flow direction and speed of the group fog are predicted and analyzed from the order in which the cameras detect fog.
Further, after one camera is interpreted as foggy, group fog interpretation starts immediately and comprehensively judges the 5 adjacent cameras in front and behind; if more than 4 adjacent cameras are interpreted as foggy, a fog early warning is started; if more than 2 adjacent cameras monitor non-fog conditions, the event is judged to be group fog.
The beneficial effects of the invention are as follows: the invention uses big data technology to process and analyze massive video data (data from multiple cameras over continuous time periods) and establishes a time-series-based group fog monitoring method. The method makes full use of the massive image information and of the change rule of background pixels when group fog bursts, so the monitoring result is more accurate and of greater engineering practicability.
Drawings
FIG. 1 is a flow chart of the process described in example 1.
Detailed Description
The invention is further described below with reference to the figures and specific embodiments.
Example 1
This embodiment discloses a group fog monitoring method based on big data, as shown in fig. 1, comprising the following steps:
S01), acquiring video data from each camera of the monitored road section and preprocessing the video data;
In this embodiment, the preprocessing of the video data includes: rapidly checking and verifying the image data, deleting repeated information, eliminating image errors caused by camera defects such as spots, and unifying the image format of all cameras to the common jpg format.
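As a minimal sketch of this preprocessing, assuming the sampled frames arrive as image files and that a byte-level hash is an acceptable stand-in for the repeated-information check (the directory layout and function name are illustrative only):

```python
import hashlib
import os

import cv2

def preprocess_frames(in_dir: str, out_dir: str) -> None:
    """Drop unreadable and exact-duplicate frames; re-encode the rest as jpg."""
    os.makedirs(out_dir, exist_ok=True)
    seen = set()
    for name in sorted(os.listdir(in_dir)):
        img = cv2.imread(os.path.join(in_dir, name))
        if img is None:                      # unreadable file: an "image error"
            continue
        digest = hashlib.md5(img.tobytes()).hexdigest()
        if digest in seen:                   # repeated information
            continue
        seen.add(digest)
        base = os.path.splitext(name)[0]
        cv2.imwrite(os.path.join(out_dir, base + ".jpg"), img)  # unified format
```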
In this embodiment, the real-time data of each camera are stored in a distributed manner and computed in parallel; a rough sketch of this per-camera parallelism follows.
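The sketch below is an assumption about deployment, since the embodiment does not fix an architecture; multiprocessing stands in for the distributed storage/compute layer that a production cluster framework would provide:

```python
from multiprocessing import Pool

def process_camera(video_path: str):
    """Placeholder per-camera pipeline: sample, model background, interpret fog."""
    return video_path, False          # (camera, fog flag); real logic goes here

if __name__ == "__main__":
    videos = [f"cam{i:02d}.mp4" for i in range(10)]   # hypothetical file names
    with Pool() as pool:
        results = dict(pool.map(process_camera, videos))
```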
S02), extracting one image every 5 seconds from each camera's video and forming a time series from 5 consecutive images; over time, the sampled images form dynamic time-series images;
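A sketch of this sampling step, assuming OpenCV access to each camera's video; the 5-second interval and 5-image window follow the text, while the generator shape is illustrative:

```python
from collections import deque

import cv2

def sample_time_series(video_path: str, interval_s: int = 5, series_len: int = 5):
    """Yield rolling windows of `series_len` grayscale frames, sampled every `interval_s` seconds."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0        # fall back if FPS is unknown
    step = max(1, int(round(fps * interval_s)))    # frames between samples
    series = deque(maxlen=series_len)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            series.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            if len(series) == series_len:
                yield list(series)                 # one 5-image time series
        idx += 1
    cap.release()
```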
S03), removing moving targets with a Gaussian mixture model for moving-target detection, extracting the background information of the images, and establishing a time-series model of the background image, with the following specific steps:
First, the first frame f_1 of the time-series images is taken as the initial background, and each pixel f_1(x, y) in the image is represented by K Gaussian models, where K ranges from 3 to 5. The mean μ_j and standard deviation σ_j of each Gaussian model, j ∈ [1, K], are determined from the distribution of the image gray-level histogram, so the gray value of each pixel can be expressed as a superposition of K Gaussian distribution functions, i.e.

P(f_i(x, y)) = Σ_{j=1}^{K} ω_j · η(μ_j, σ_j)   (formula 1)

where η(μ_j, σ_j) is the j-th Gaussian distribution and ω_j is its weight.

From the second frame image f_i(x, y), i > 1, whether each pixel belongs to the background is estimated by judging whether formula 2 holds:

|f_i(x, y) − μ_j| ≤ 2.5σ_j   (formula 2)

If formula 2 holds, pixel f_i(x, y) is a background point; otherwise it is a foreground point, and a new Gaussian model is generated from the foreground point.

The weight of each model in the current image is updated according to formula 3:

ω_{K,i} = (1 − α)ω_{K,i−1} + αM_{K,i}   (formula 3)

where α is the learning rate; M_{K,i} = 1 if the current point is background, otherwise M_{K,i} = 0; ω_{K,i−1} is the model weight before the update and ω_{K,i} is the updated model weight.
the mean value and the standard deviation of the Gaussian models of the foreground points are kept unchanged, and the mean value and the standard deviation of the Gaussian models of the background points are updated according to the current image.
All Gaussian models are then sorted so that models with large weight and small standard deviation come first, and models ranked after K are discarded, thereby obtaining an updated background image.
The above operations are repeated to obtain the background image of each frame, and the dynamic time-series images of the background are established.
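The following NumPy sketch mirrors the background-modelling steps above for grayscale frames (formulas 1-3); the learning rate, initial standard deviation, and the ω/σ ranking key are illustrative assumptions, and in practice cv2.createBackgroundSubtractorMOG2 implements a refined variant of the same mixture model:

```python
import numpy as np

K, ALPHA, INIT_SIGMA = 3, 0.05, 15.0   # assumed constants; K is in the 3-5 range

def init_model(first_frame):
    """Seed K Gaussians per pixel from the first frame (initial background)."""
    h, w = first_frame.shape
    mu = np.repeat(first_frame[None].astype(np.float32), K, axis=0)
    sigma = np.full((K, h, w), INIT_SIGMA, np.float32)
    omega = np.full((K, h, w), 1.0 / K, np.float32)
    return mu, sigma, omega

def update(frame, mu, sigma, omega):
    """One frame of mixture-of-Gaussians background maintenance."""
    f = frame.astype(np.float32)[None]                   # shape (1, h, w)
    match = np.abs(f - mu) <= 2.5 * sigma                # formula 2
    m = match & (np.cumsum(match, axis=0) == 1)          # first matching model only
    matched_any = m.any(axis=0)
    omega = (1 - ALPHA) * omega + ALPHA * m              # formula 3
    mu = np.where(m, (1 - ALPHA) * mu + ALPHA * f, mu)   # update matched (background) models
    sigma = np.sqrt(np.where(m, (1 - ALPHA) * sigma**2 + ALPHA * (f - mu)**2,
                             sigma**2))
    # a foreground pixel replaces its weakest model with a fresh Gaussian
    ys, xs = np.nonzero(~matched_any)
    weakest = (omega / sigma).argmin(axis=0)[ys, xs]
    mu[weakest, ys, xs] = f[0, ys, xs]
    sigma[weakest, ys, xs] = INIT_SIGMA
    omega[weakest, ys, xs] = ALPHA
    omega /= omega.sum(axis=0, keepdims=True)            # keep weights normalised
    best = (omega / sigma).argmax(axis=0)                # rank by weight / std dev
    background = np.take_along_axis(mu, best[None], axis=0)[0]
    return mu, sigma, omega, background.astype(np.uint8)
```

Feeding each sampled frame through update() yields the background image of each frame and hence the background time series used below.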
Based on the time-series background images, each background image is divided into 4 regions, and time-variation curves of the contrast of the 4 regions are established.
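A short sketch of these curves; the 2x2 split and the RMS-contrast definition are assumptions, since the text fixes only the number of regions:

```python
import numpy as np

def region_contrasts(background: np.ndarray):
    """Return one RMS-contrast value per quadrant of a grayscale background image."""
    h2, w2 = background.shape[0] // 2, background.shape[1] // 2
    quads = [background[:h2, :w2], background[:h2, w2:],
             background[h2:, :w2], background[h2:, w2:]]
    return [float(q.std() / (q.mean() + 1e-6)) for q in quads]

# Collecting one 4-tuple per background frame gives the four
# time-variation curves of regional contrast.
```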
S04), analyzing the change rule of the background pixel when the cluster fog is sudden, and establishing a fog monitoring model based on the background pixel time sequence; the fog monitoring model based on the background pixel time sequence is established, and 4 variables are comprehensively considered:
A. the contrast X1 of the current image: when group fog bursts, the contrast X1 of the pixel values in the background image drops sharply;
B. the blur degree X2 of the current image, identifying the current blur: when group fog bursts, more than 75% of the area of the whole image is a blurred region;
C. the contrast X3 of the first frame image in the time series: when group fog bursts, the drop from X3 to X1 exceeds 300% in 3 of the 4 regions;
D. the blur degree X4 of the first frame image in the time series: when group fog bursts, less than 25% of the whole image is a blurred region.
Comprehensively considering these 4 variables, the fog monitoring model based on the background-pixel time series is established and used to interpret the real-time images transmitted by the cameras, as sketched below.
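A sketch of this interpretation rule, with the thresholds read literally from variables A-D above; how contrast and the blurred-area ratio are computed, and the exact reading of "exceeds 300%", are assumptions:

```python
def is_group_fog_burst(x1, x2, x3, x4, regions_dropped):
    """x1/x3: contrast of the current / first image in the series;
    x2/x4: blurred-area ratio of the current / first image;
    regions_dropped: how many of the 4 regions show the steep contrast fall."""
    contrast_collapse = x1 > 0 and (x3 - x1) / x1 > 3.0   # drop "exceeds 300%"
    return (x2 > 0.75                 # >75% of the current image is blurred
            and x4 < 0.25             # <25% of the first image was blurred
            and regions_dropped >= 3  # 3 of the 4 regions collapsed
            and contrast_collapse)
```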
S05), after one camera is interpreted as foggy, group fog interpretation starts immediately and comprehensively judges the 5 adjacent cameras in front and behind: if more than 4 adjacent cameras are interpreted as foggy, a fog early warning is started; if more than 2 adjacent cameras monitor non-fog conditions, the event is judged to be group fog, the flow direction of the group fog is analyzed and predicted, and a group fog early warning is started.
In this embodiment, the flow direction and speed of the group fog are predicted and analyzed from the order in which the cameras detect fog. Because the cameras are arranged in a specific direction and at specific spacings, if one camera detects fog and the cameras south of it then detect fog in sequence, the flow direction of the group fog can be determined to be from north to south, and the flow speed can be calculated from the spacing between cameras and the detection times.
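As a sketch of that calculation (camera indexing, spacing, and the onset-time bookkeeping are illustrative assumptions):

```python
def fog_flow(onsets, spacing_m=1000.0):
    """onsets: list of (camera_index, onset_time_s) in detection order;
    camera indices are assumed to increase from north to south."""
    (i0, t0), (i1, t1) = onsets[0], onsets[-1]
    if i0 == i1 or t1 <= t0:
        return None                               # direction not yet resolvable
    direction = "north to south" if i1 > i0 else "south to north"
    speed_mps = abs(i1 - i0) * spacing_m / (t1 - t0)
    return direction, speed_mps

# Example: cameras 3, 4, 5 detect fog at t = 0 s, 120 s, 250 s
# -> fog_flow([(3, 0), (4, 120), (5, 250)]) == ("north to south", 8.0)
```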
The foregoing describes only the basic principle and preferred embodiments of the present invention; modifications and substitutions made by those skilled in the art fall within the scope of the present invention.

Claims (5)

1. A group fog monitoring method based on big data, characterized by comprising the following steps: S01), acquiring video data from each camera of the monitored road section and preprocessing the video data; S02), establishing a dynamic time series from the image data provided by each camera, the captured images forming dynamic time-series images over time; S03), removing moving targets with a Gaussian mixture model for moving-target detection, extracting the background information of the images, and establishing a time-series model of the background image; S04), analyzing the change rule of the background image when group fog bursts, establishing a fog monitoring model based on the time series of the background image, and interpreting the real-time data transmitted by the cameras with the fog monitoring model; S05), once one camera is interpreted as foggy, immediately starting group fog interpretation; if N1 adjacent cameras are all interpreted as foggy, starting a fog early warning; if more than N2 adjacent cameras monitor non-fog conditions, judging the event to be group fog, analyzing and predicting the flow direction of the group fog, and starting a group fog early warning; N1 and N2 are positive integers, and N1 is greater than N2;
step S03 specifically comprises: S31), taking the first frame f_1 of the time-series images as the initial background and representing each pixel f_1(x, y) in the image with K Gaussian models, where K ranges from 3 to 5; the mean μ_j and standard deviation σ_j of each Gaussian model, j ∈ [1, K], are determined from the distribution of the image gray-level histogram, and the gray value of each pixel can be expressed as a superposition of K Gaussian distribution functions, i.e.

P(f_i(x, y)) = Σ_{j=1}^{K} ω_j · η(μ_j, σ_j)   (formula 1)

where η(μ_j, σ_j) is the j-th Gaussian distribution and ω_j is its weight; S32), from the second frame image f_i(x, y), i > 1, estimating whether each pixel belongs to the background, i.e. judging whether formula 2 holds:

|f_i(x, y) − μ_j| ≤ 2.5σ_j   (formula 2)

if formula 2 holds, pixel f_i(x, y) is a background point, otherwise it is a foreground point, and a new Gaussian model is generated from the foreground point; S33), updating the weight of each model in the current image according to formula 3,

ω_{K,i} = (1 − α)ω_{K,i−1} + αM_{K,i}   (formula 3)

where α is the learning rate, M_{K,i} = 1 if the current point is background and M_{K,i} = 0 otherwise, ω_{K,i−1} is the model weight before the update, and ω_{K,i} is the updated model weight; S34), keeping the means and standard deviations of the Gaussian models of foreground points unchanged and updating the means and standard deviations of the Gaussian models of background points from the current image; S35), sorting all Gaussian models so that models with large weight and small standard deviation come first, and discarding models ranked after K, thereby obtaining an updated background image; S36), repeating steps S32-S35 to obtain the background image of each frame and establishing the dynamic time-series images of the background; S37), dividing the background image into 4 regions based on the time-series background images and establishing time-variation curves of the contrast of the 4 regions.
2. The group fog monitoring method based on big data of claim 1, wherein: the fog monitoring model based on the background-pixel time series comprehensively considers the contrast X1 of the current image, the blur degree X2 of the current image, the contrast X3 of the first frame image in the time series, and the blur degree X4 of the first frame image in the time series; when group fog bursts, the contrast X1 of the pixel values in the background image drops sharply, more than 75% of the whole current image is a blurred region, the drop from X3 to X1 exceeds 300% in 3 of the 4 regions, and less than 25% of the whole first frame image in the time series is a blurred region.
3. The group fog monitoring method based on big data of claim 1, wherein: the preprocessing of the video data includes: rapidly checking and verifying the image data, deleting repeated information, correcting existing errors, and unifying the image formats of all cameras; the video data of each camera are rapidly monitored, interpreted, and processed for group fog.
4. The group fog monitoring method based on big data of claim 1, wherein: the cameras in the monitored road section are arranged in a specific direction and at specific spacings, and the flow direction and speed of the group fog are predicted and analyzed from the order in which the cameras detect fog.
5. The group fog monitoring method based on big data of claim 1, wherein: after one camera is interpreted as foggy, group fog interpretation starts immediately and comprehensively judges the 5 adjacent cameras in front and behind; if more than 4 adjacent cameras are interpreted as foggy, a fog early warning is started; if more than 2 adjacent cameras monitor non-fog conditions, the event is judged to be group fog.
CN201811380273.8A 2018-11-20 2018-11-20 Group fog monitoring method based on big data Active CN109448397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811380273.8A CN109448397B (en) 2018-11-20 2018-11-20 Group fog monitoring method based on big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811380273.8A CN109448397B (en) 2018-11-20 2018-11-20 Group fog monitoring method based on big data

Publications (2)

Publication Number Publication Date
CN109448397A CN109448397A (en) 2019-03-08
CN109448397B true CN109448397B (en) 2020-11-13

Family

ID=65552657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811380273.8A Active CN109448397B (en) 2018-11-20 2018-11-20 Group fog monitoring method based on big data

Country Status (1)

Country Link
CN (1) CN109448397B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110309704B (en) * 2019-04-30 2022-01-25 泸州市气象局 Method, system and terminal for detecting extreme weather in real time
CN111341118B (en) * 2020-02-28 2021-07-30 长安大学 System and method for early warning of mist on grand bridge
CN112419745A (en) * 2020-10-20 2021-02-26 中电鸿信信息科技有限公司 Highway group fog early warning system based on degree of depth fusion network
CN112668503B (en) * 2020-12-30 2022-06-28 日照市气象局 Method for monitoring visibility of luminous target object video group fog
CN113129408A (en) * 2021-04-08 2021-07-16 重庆电子工程职业学院 Group fog monitoring method based on big data
CN113706889B * 2021-08-02 2022-09-13 Inspur Communication Information System Co., Ltd. Highway agglomerate fog measuring system and method based on target detection and analysis

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060101990A * 2005-03-22 2006-09-27 Hwaheung Road Safety System Co., Ltd. Fog preventing system
JP2007057331A (en) * 2005-08-23 2007-03-08 Denso Corp In-vehicle system for determining fog
CN103871251A (en) * 2014-03-13 2014-06-18 山东交通学院 Digital photography-based agglomerate fog real time early warning system and method
CN106097744A (en) * 2016-08-15 2016-11-09 山东交通学院 A kind of expressway fog real-time monitoring system and method based on generalized information system
CN106254827A (en) * 2016-08-05 2016-12-21 安徽金赛弗信息技术有限公司 A kind of group's mist Intelligent Recognition method for early warning and device thereof
CN107895379A (en) * 2017-10-24 2018-04-10 天津大学 The innovatory algorithm of foreground extraction in a kind of video monitoring
CN108765453A (en) * 2018-05-18 2018-11-06 武汉倍特威视***有限公司 Expressway fog recognition methods based on video stream data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI423166B (en) * 2009-12-04 2014-01-11 Huper Lab Co Ltd Method for determining if an input image is a foggy image, method for determining a foggy level of an input image and cleaning method for foggy images

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060101990A * 2005-03-22 2006-09-27 Hwaheung Road Safety System Co., Ltd. Fog preventing system
JP2007057331A (en) * 2005-08-23 2007-03-08 Denso Corp In-vehicle system for determining fog
CN103871251A (en) * 2014-03-13 2014-06-18 山东交通学院 Digital photography-based agglomerate fog real time early warning system and method
CN106254827A (en) * 2016-08-05 2016-12-21 安徽金赛弗信息技术有限公司 A kind of group's mist Intelligent Recognition method for early warning and device thereof
CN106097744A (en) * 2016-08-15 2016-11-09 山东交通学院 A kind of expressway fog real-time monitoring system and method based on generalized information system
CN107895379A (en) * 2017-10-24 2018-04-10 天津大学 The innovatory algorithm of foreground extraction in a kind of video monitoring
CN108765453A (en) * 2018-05-18 2018-11-06 武汉倍特威视***有限公司 Expressway fog recognition methods based on video stream data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Group fog detection algorithm based on video image contrast; Liu Jianlei et al.; Optoelectronic Technology; 2015-06-30; Vol. 35, No. 2; pp. 91-95 *

Also Published As

Publication number Publication date
CN109448397A (en) 2019-03-08

Similar Documents

Publication Publication Date Title
CN109448397B (en) Group fog monitoring method based on big data
US11380105B2 (en) Identification and classification of traffic conflicts
DE112013001858B4 (en) Multiple-hint object recognition and analysis
CN112215306B (en) Target detection method based on fusion of monocular vision and millimeter wave radar
KR20210080459A (en) Lane detection method, apparatus, electronic device and readable storage medium
CN105788269A (en) Unmanned aerial vehicle-based abnormal traffic identification method
CN108765453B (en) Expressway agglomerate fog identification method based on video stream data
CN111626170A (en) Image identification method for railway slope rockfall invasion limit detection
CN107341810A (en) A kind of automatic vehicle identification method, apparatus and electronic equipment
CN111144301A (en) Road pavement defect quick early warning device based on degree of depth learning
CN109191492A (en) A kind of intelligent video black smoke vehicle detection method based on edge analysis
CN111539360A (en) Safety belt wearing identification method and device and electronic equipment
CN107705330B (en) Visibility intelligent estimation method based on road camera
DE102009048739B3 (en) Automatic forest fire detection method involves triggering alarm, if fractal dimensions of grey values of current image and cluster surface of binarized image, and respective axis intercept lie within preset value
US11748664B1 (en) Systems for creating training data for determining vehicle following distance
CN116229396B (en) High-speed pavement disease identification and warning method
US11328586B2 (en) V2X message processing for machine learning applications
CN103927523A (en) Fog level detection method based on longitudinal gray features
Habib et al. Lane departure detection and transmission using Hough transform method
CN113139488B (en) Method and device for training segmented neural network
CN112816483A (en) Group fog recognition early warning method and system based on fog value analysis and electronic equipment
CN108648461A (en) Bend early warning implementation method based on video frequency speed-measuring
CN110874598B (en) Highway water mark detection method based on deep learning
CN112560790A (en) Method for intelligently identifying visibility based on camera video image
CN111275027A (en) Method for realizing detection and early warning processing of expressway in foggy days

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant