CN106650600A - Forest smoke and fire detection method based on video image analysis - Google Patents

Forest smoke and fire detection method based on video image analysis

Info

Publication number
CN106650600A
Authority
CN
China
Prior art keywords
pixel
image
rect
value
moving region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610902197.7A
Other languages
Chinese (zh)
Inventor
路小波 (Lu Xiaobo)
蔡敏 (Cai Min)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201610902197.7A
Publication of CN106650600A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a forest smoke and fire detection method based on video image analysis, which detects smoke in a video through eight steps: preprocessing the video images, background modeling, morphological processing, motion segmentation, region tracking, region continuity judgment, feature extraction, and smoke judgment. Targeting the characteristics of forest smoke video images, the method improves the background modeling stage of existing video image analysis methods: the initial model is acquired by random sampling from the neighborhood of each pixel, and background points are then updated through samples drawn from that neighborhood. This overcomes the misjudgment caused by over-concentration of the selected samples and further accelerates smoke detection. Requiring little computation, the method can determine the presence of smoke directly at the video acquisition end, increasing computational efficiency while maintaining detection accuracy. Detection cost is therefore reduced, timeliness is improved, and the method can find wider application.

Description

A forest smoke and fire detection method based on video image analysis
Technical field
The present invention relates to the fields of image processing and forest fire detection, and more particularly to a forest smoke and fire detection method based on video image analysis.
Background art
Fire is the most dangerous enemy of the forest and the most destructive forestry disaster. It causes serious harm to the ecological environment and biological activity of forest areas and severely threatens people's property and lives; the earlier a fire is discovered, the more these losses can be minimized. Smoke, as a product of the early stage of open flame, plays an important role in fire alarming.
Currently, widely used smoke detection systems rely on sensors such as heat-sensitive elements, smoke-sensing elements, and infrared photosensitive elements. However, forest areas have dense trees, vast coverage, and complex terrain, so such sensors are difficult to install and maintain, and detection reliability cannot be guaranteed.
Forest fire smoke detection methods based on intelligent video recognition use target detection techniques from computer vision. Basic data are acquired through network cameras, giving wide monitoring coverage at low cost.
At present, forest smoke and fire detection systems based on video analysis mainly perform motion detection by the optical flow method or by Gaussian mixture models, and then realize intelligent smoke detection by analyzing color statistics in different color spaces and the shape features of the smoke. In a complex forest environment, however, the optical flow method must solve the optical flow constraint equation for the motion vector of every pixel, and Gaussian mixture models must solve their model parameters iteratively; when these methods perform background modeling of a forest scene, their slow computation severely affects detection speed. Existing detection methods also include algorithms that perform motion detection with the ViBe technique, but the ViBe algorithm must randomly draw the 20 sample values of its sample set from the eight-neighborhood of a pixel, so repeated selection within the sample set is unavoidable; repeatedly selected samples increase the probability of pixel misclassification and thus reduce the accuracy of smoke recognition.
Although background update techniques that improve on the ViBe method do exist, they all combine ViBe with other background modeling methods, which is a rather large change and increases the amount of computation; they cannot simultaneously resolve the trade-off between accuracy and computational load.
Content of the invention
Technical problem: The present invention addresses the technical problem that existing smoke recognition methods based on video image analysis cannot reduce computation while guaranteeing high accuracy.
Technical scheme: To remedy the deficiencies of the prior art, the object of the present invention is to provide a forest smoke and fire detection method based on video image analysis, which combines EViBe background modeling with smoke recognition based on joint dynamic and static features, characterized by the following steps:
Step 1: Video image preprocessing: according to the frame count of the video, intercept one frame image from the video at a time in chronological order, convert it to a grayscale image, and process it in the order of step 2 to step 8;
Step 2: Background modeling: using the EViBe method, improved from the ViBe algorithm, perform background modeling for each pixel in the order of step 201 to step 204:
Step 201: Judge whether the intercepted image is the first frame of the video; if not, jump to step 202; if it is the first frame, initialize the background model as follows:
Let x denote the abscissa and y the ordinate of a pixel in the image, and let P(x, y) denote the pixel at coordinates (x, y) in the image. For each pixel P(x, y), establish a sample set M(P(x, y)) = {v1, v2, …, vi, …, vN}, whose initial values are the gray values of N pixels randomly selected from the neighborhood U(P(x, y)) of pixel P(x, y); the value of N is smaller than the number of pixels in the neighborhood U(P(x, y)), N denotes the number of elements in the sample set M(P(x, y)), and vi denotes the i-th element of M(P(x, y)), 1 ≤ i ≤ N. After a sample set has been established for every pixel P(x, y) in the image, jump to step 202;
Step 202: Background point judgment: let V(P(x, y)) denote the gray value of pixel P(x, y) and R a preset gray value difference. The gray value range centered on V(P(x, y)) with the preset difference R as its radius is denoted the background gray interval SR(V(P(x, y))) of pixel P(x, y). If at least #min elements of the sample set M(P(x, y)) fall within SR(V(P(x, y))), mark P(x, y) as a background point; otherwise mark P(x, y) as a foreground point and increment the foreground counter of P(x, y) by 1, where #min denotes the preset number of matching points. After all pixels in the image have been marked, jump to step 203. (The term "radius" is used here for uniform wording across color spaces: since the present invention operates in gray space, R denotes a simple pixel value difference; when the method of the present invention is applied in color spaces such as RGB or Lab, R is likewise the Euclidean distance between color values.)
Step 203: Update the background model: let β be the time sampling factor. For each pixel P(x, y) marked as a background point, replace an arbitrary element vi of the sample set M(P(x, y)) with the gray value V(P(x, y)) with probability 1/β, and at the same time, with probability 1/β, replace an arbitrary element of the sample set M(P(x', y')) of an arbitrary pixel P(x', y') in the neighborhood U(P(x, y)) with the gray value V(P(x, y));
Step 204: Repeat the judgment of steps 202 to 203 for every subsequent frame. Let δ be the preset frame count for moving point judgment: a pixel that is judged a foreground point in δ consecutive frames is updated to a moving point, while other pixels are left unchanged. Remove the moving points from each frame to establish the background image of each frame, then jump to step 3;
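The initialization, classification, and update rules of steps 201 to 204 can be sketched as follows. This is a minimal illustration, not the patented implementation: plain Python lists stand in for image buffers, the helper names are invented, and the parameter values are the embodiment's N = 20, R = 20, #min = 2, β = 16.

```python
import random

N, R, MIN_MATCHES, BETA = 20, 20, 2, 16

def init_samples(gray, x, y, w, h, n=N):
    """Step 201: draw n samples at random from the 24-neighbourhood
    (5x5 window minus the centre) of pixel (x, y); coordinates are
    clamped at the image border."""
    nbrs = [gray[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            for dy in range(-2, 3) for dx in range(-2, 3)
            if (dx, dy) != (0, 0)]
    # drawing WITHOUT replacement is the improvement over classic ViBe,
    # whose 8-neighbourhood forces repeated picks when n = 20
    return random.sample(nbrs, n)

def is_background(value, samples, r=R, min_matches=MIN_MATCHES):
    """Step 202: background if at least #min samples lie within +-R."""
    return sum(abs(value - s) <= r for s in samples) >= min_matches

def update_samples(value, samples, beta=BETA):
    """Step 203: with probability 1/beta overwrite one random sample of
    this pixel (the neighbour-set update is analogous)."""
    if random.random() < 1.0 / beta:
        samples[random.randrange(len(samples))] = value
```

Because the 24-neighborhood holds more positions (24) than samples drawn (20), `random.sample` never reuses a position, which is exactly the over-concentration fix the text describes.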
Step 3: Morphological processing: first perform an opening operation on the binary foreground mask of each frame with a 4 × 4 rectangular structuring element to filter out salt-and-pepper noise, then perform a closing operation on the image with a 10 × 10 rectangular structuring element to fill breaks in the mask, then jump to step 4;
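A pure-Python sketch of the opening and closing operations of step 3, under stated assumptions: the mask is a list of 0/1 rows, the structuring element is a kh × kw rectangle anchored near its centre, and windows are truncated at the image border (the patent does not specify border handling). In practice OpenCV's morphologyEx would be used instead.

```python
def _window(img, y, x, kh, kw):
    """Pixels of img covered by a kh x kw element centred near (y, x)."""
    h, w = len(img), len(img[0])
    return [img[yy][xx]
            for yy in range(max(0, y - kh // 2), min(h, y + (kh + 1) // 2))
            for xx in range(max(0, x - kw // 2), min(w, x + (kw + 1) // 2))]

def erode(img, kh, kw):
    """1 only where the whole structuring element lies on mask pixels."""
    return [[1 if all(_window(img, y, x, kh, kw)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img, kh, kw):
    """1 wherever the structuring element touches any mask pixel."""
    return [[1 if any(_window(img, y, x, kh, kw)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def opening(img, kh=4, kw=4):
    """Erosion then dilation: removes salt-and-pepper specks (step 3)."""
    return dilate(erode(img, kh, kw), kh, kw)

def closing(img, kh=10, kw=10):
    """Dilation then erosion: fills small breaks in the mask (step 3)."""
    return erode(dilate(img, kh, kw), kh, kw)
```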
Step 4: Motion segmentation: detect the contours of the moving point regions and compute the minimum bounding rectangle rect of each contour. When the height rect_height of a minimum bounding rectangle is not less than the height threshold hei_th, and its width rect_width is not less than the width threshold wid_th, record the parameters of the currently detected rectangle. Number all computed rectangles from rect_1_1 to rect_k_n, where rect_1_1 denotes the 1st minimum bounding rectangle of the 1st frame and rect_k_n the n-th minimum bounding rectangle of the k-th frame, then jump to step 5;
Step 5: Region tracking: for each minimum bounding rectangle rect_(j+1)_n, compute the area coincidence ratio ratio = inter_rect / max_rect against each minimum bounding rectangle rect_j_1, …, rect_j_m of the previous frame, where inter_rect denotes the overlapping area of the two rectangles and max_rect the area of the larger of the two. If the area coincidence ratio between two rectangles satisfies ratio ≥ 0.5, the two rectangles are considered to match; record the two rectangles and the matching relation between them, then jump to step 6;
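The area coincidence ratio of step 5 can be written as a short function. Rectangles here are (x, y, width, height) tuples and the function names are illustrative, not from the patent.

```python
def overlap_ratio(a, b):
    """Area coincidence ratio of step 5: intersection area divided by
    the area of the LARGER of the two boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    larger = max(aw * ah, bw * bh)
    return inter / larger if larger else 0.0

def rect_match(a, b, thresh=0.5):
    """Two rectangles match when the ratio reaches the 0.5 threshold."""
    return overlap_ratio(a, b) >= thresh
```

Dividing by the larger area (rather than the union) makes the ratio conservative: a tiny box inside a huge one scores low, so spurious small detections do not chain into tracks.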
Step 6: Region continuity judgment: according to the matching relations obtained in step 5, a minimum bounding rectangle region for which a matching relation exists in five consecutive frames is taken as a continuous moving region; jump to step 7, otherwise jump to step 2;
Step 7: Feature extraction: extract the high-frequency energy of the image portion corresponding to the moving region by a two-dimensional wavelet transform. Also compute the circularity feature of the moving region from its contour perimeter Per and contour area Squ as compactness = Per² / (4π · Squ). Meanwhile, treat the moving region as a window and translate the window by one pixel in each of nine directions: upper left, up, upper right, left, original position, right, lower left, down, and lower right. For each direction, compute the difference between the gray value of each pixel in the shifted window of the current image and the gray value of the pixel at the corresponding position in the previous frame, then compute the sum of squared gray value differences over all pixels for each of the nine directions; the direction whose shift minimizes this sum of squares is taken as the primary motion direction of the smoke;
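Two of the step 7 features can be sketched directly. The window-shift search below is a minimal reading of the nine-direction test, with invented function names and pure-Python frames; it returns the negation of the best-matching offset into the previous frame as the motion direction in image coordinates.

```python
import math

def compactness(perimeter, area):
    """Circularity feature of step 7: Per^2 / (4 * pi * Squ);
    equals 1.0 for a circle and grows for ragged smoke contours."""
    return perimeter ** 2 / (4 * math.pi * area)

def dominant_direction(prev, cur, box):
    """Nine-direction search of step 7: try the 3x3 one-pixel offsets,
    keep the one minimising the sum of squared grey differences against
    the previous frame, and return its negation as (dx, dy)."""
    x, y, w, h = box
    best, best_err = (0, 0), float("inf")
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            err = sum((cur[y + j][x + i] - prev[y + dy + j][x + dx + i]) ** 2
                      for j in range(h) for i in range(w))
            if err < best_err:
                best, best_err = (dx, dy), err
    return (-best[0], -best[1])
```

For rising smoke the expected output is a negative dy (upward in image coordinates), which is one of the static/dynamic cues fed to the classifier in step 8.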
Step 8: Smoke judgment: invoke the trained smoke recognition model to judge, from the high-frequency energy, the circularity feature, and the motion direction, whether the corresponding moving region is a smoke region; if so, mark it with a red box, otherwise return to step 2.
The smoke recognition model of step 8 is obtained by training on test videos with a support vector machine classifier (Support Vector Machine, SVM). In the SVM classifier, the SVM type is set to the C-class support vector classifier (C_SVC), the kernel function is the radial basis function kernel, and the maximum number of iterations is 100.
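As a stand-in for this training setup, the sketch below uses scikit-learn's SVC (an assumption; the patent does not name a library) with the stated C_SVC behaviour, RBF kernel, and 100-iteration cap. Each sample is the three-element feature vector of a region: high-frequency energy, compactness, and a code for the motion direction.

```python
from sklearn.svm import SVC

def train_smoke_model(features, labels):
    """Train the step 8 smoke recognition model on labelled regions.
    features: list of [high_freq_energy, compactness, direction_code]
    labels: 1 for smoke regions, 0 for non-smoke regions."""
    model = SVC(C=1.0, kernel="rbf", max_iter=100)  # RBF kernel, 100 iters
    model.fit(features, labels)
    return model

def is_smoke(model, feature_vector):
    """Step 8 decision for one candidate moving region."""
    return bool(model.predict([feature_vector])[0])
```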
In step 7, the specific steps for extracting the high-frequency energy of the image portion corresponding to the moving region by the two-dimensional wavelet transform are as follows:
Step 7.1.1: Adjust the image size: if the height and width of the image corresponding to the moving region are both even numbers of pixels, the moving region image is not adjusted; otherwise, by adding one column or one row of pixels, the height and width of the moving region image are adjusted to even values;
Step 7.1.2: Lifting wavelet transform: in the vertical direction of the adjusted moving region, compute in the order of odd-even splitting, prediction, and update, as follows:
Odd-even splitting: divide the pixels of the adjusted moving region image into even columns E = {Ep | p = 1, 2, 3, …, r/2} and odd columns O = {Oq | q = 1, 2, 3, …, r/2}, where r is the number of pixel columns of the adjusted moving region image, Ep denotes the p-th even column, and Oq the q-th odd column;
Prediction: compute the predicted value P(E) of the odd columns from the neighboring even columns (at the boundary p = r/2 the prediction uses the last even column alone), and compute the difference between the actual odd columns and their predicted values, O' = O − P(E).
Update: choose an update operator U built from the prediction differences (with U1 = U2 at q = 1), giving the low-frequency part of the image E' = E + U. With fRadius denoting the preset frequency band coefficient, finally obtain the high-frequency component O'' in the vertical direction (scaled by the band coefficient) and the low-frequency component in the vertical direction E'' = E' × fRadius;
Splice the obtained vertical high-frequency component O'' and vertical low-frequency component E'' in order to form the result of the vertical transform. Then perform the corresponding computation on this result in the horizontal direction, likewise in the order of odd-even splitting, prediction, and update; after the horizontal computation is completed, splice the horizontal low- and high-frequency components in the manner corresponding to the vertical direction to form the moving region image Pic after the wavelet transform. The computation in the horizontal direction is as follows:
Odd-even splitting: divide the pixels of the vertically transformed image into even rows EE = {EEp | p = 1, 2, 3, …, rr/2} and odd rows OO = {OOq | q = 1, 2, 3, …, rr/2}, where rr is the number of pixel rows of the adjusted moving region image, EEp denotes the p-th even row, and OOq the q-th odd row.
Prediction: compute the predicted value P(EE) of the odd rows from the neighboring even rows, and compute the difference between the actual odd rows and their predicted values, OO' = OO − P(EE) = {OO'q | q = 1, 2, 3, …, rr/2}.
Update: choose an update operator UU built from the prediction differences, giving the low-frequency part of the image EE' = EE + UU. With fRadius denoting the preset frequency band coefficient, finally obtain the high-frequency component OO'' in the horizontal direction and the low-frequency component in the horizontal direction EE'' = EE' × fRadius.
After the horizontal computation is completed, splice the final OO'' and EE'' in order to form the moving region image Pic after the wavelet transform;
Step 7.1.3: Let Pic_u,v denote the pixel value at coordinates (u, v) of the moving region image after the wavelet transform, W the width of the adjusted moving region image, and H its height; the high-frequency energy value of the transformed image Pic is then the sum of the squared pixel values Pic_u,v normalized by W × H.
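The per-step operators above are only partly legible in this text, so the sketch below substitutes a plain Haar-style lifting step (split, predict, update) and then sums the squared detail coefficients normalised by area, as in step 7.1.3. It illustrates the structure of the computation under those assumptions, not the patent's exact predict/update operators or fRadius value; image dimensions are assumed even per step 7.1.1.

```python
def lift_1d(seq):
    """One Haar-style lifting step on an even-length sequence:
    split into even/odd samples, predict the odd samples from the even
    ones (detail = odd - even), update the evens (smooth); returns
    (low, high)."""
    even, odd = seq[0::2], seq[1::2]
    high = [o - e for o, e in zip(odd, even)]      # predict: detail band
    low = [e + h // 2 for e, h in zip(even, high)]  # update: smooth band
    return low, high

def high_freq_energy(img):
    """Lift down the columns, then along the rows of the column-low
    band, and return the sum of squared detail coefficients divided by
    the image area (the normalisation of step 7.1.3)."""
    h, w = len(img), len(img[0])
    energy = 0
    lows = []
    for x in range(w):                      # vertical pass, per column
        lo, hi = lift_1d([img[y][x] for y in range(h)])
        lows.append(lo)
        energy += sum(v * v for v in hi)
    for y in range(len(lows[0])):           # horizontal pass on low band
        _, hi = lift_1d([lows[x][y] for x in range(w)])
        energy += sum(v * v for v in hi)
    return energy / (w * h)
```

A flat region yields zero energy while a textured one does not, which is the cue the patent uses: smoke occluding the textured forest background makes this energy drop sharply.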
In step 201, the neighborhood is specifically the 24-neighborhood, and N is taken as 20; in step 202, the preset gray value difference R is taken as 20 and the preset number of matching points #min as 2; in step 203, the time sampling factor β is 16; in step 204, the preset threshold δ of the foreground point counter is 50; in step 4, the height threshold hei_th and the width threshold wid_th are both 8.
In step 5, a structure FRect containing the minimum bounding rectangle rect and a number Flag is used to represent the corresponding rectangle rect and the number of its corresponding rectangle in the previous frame; the initial value of Flag is -1. If step 5 judges that the two minimum bounding rectangles rect_(j+1)_n and rect_j_m match, the Flag value of rect_(j+1)_n is changed to the label m of the matched rectangle rect_j_m; otherwise the value of Flag is not modified.
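The FRect record and the Flag assignment can be sketched as a small data structure. The dataclass and the `rect_match` predicate parameter are illustrative choices, and indices here are 0-based where the patent's labels are 1-based.

```python
from dataclasses import dataclass

@dataclass
class FRect:
    """Tracking record of step 5: bounding box plus the index ('Flag')
    of the matched rectangle in the previous frame; -1 means unmatched."""
    x: int
    y: int
    w: int
    h: int
    flag: int = -1

def link_frames(prev_rects, cur_rects, rect_match):
    """For each current rectangle, store the index of the first previous
    rectangle accepted by the rect_match predicate, else leave -1."""
    for cur in cur_rects:
        cur.flag = next(
            (m for m, p in enumerate(prev_rects) if rect_match(p, cur)), -1)
```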
Beneficial effects: compared with the prior art, the present invention has the following advantages:
1) For illumination that changes over time in the forest environment: forest smoke and fire detection is outdoor smoke detection; changeable lighting conditions make the background image changeable, and only by constantly updating the background model can the accuracy of motion detection be guaranteed, so higher demands are placed on the speed of background modeling. The commonly used motion detection methods at present are the optical flow method and the Gaussian mixture model, but the optical flow method must solve the optical flow constraint equation for the motion vector of every pixel, and the Gaussian mixture model must solve its parameters iteratively; the computational load of these methods is relatively large, making background updating slow. Because the forest environment places higher demands on both background update speed and smoke recognition accuracy, this patent processes each video frame individually and applies the improved ViBe algorithm to the smoke recognition scene in the forest environment, so that the background can be updated rapidly with only simple difference and comparison operations on the image.
2) The ViBe algorithm establishes its initialization model in the first frame of the video by randomly gathering 20 samples from the eight-neighborhood of each pixel, and judges background points by comparing new sample values against the sample set, which permits fast motion detection. However, because the existing ViBe algorithm must randomly draw 20 samples from the eight-neighborhood when initializing a pixel's sample set, on average each neighborhood pixel is chosen more than twice, and the selection is overly concentrated, which increases the probability of pixel misclassification. The present method enlarges the number of pixels in the neighborhood and guarantees that sample selection involves no repetition; in particular, drawing 20 samples from the 24-neighborhood of a pixel is better suited to smoke detection in the forest environment. At the same time, the method retains a random selection strategy, avoiding repeated sample selection while keeping detection fast, and improves the accuracy of motion detection. (Eight-neighborhood and 24-neighborhood are standard terms in this field.)
3) For misjudgments caused by leaves shaking in the wind, this method uses the moving point judgment frame count of step 2: in consecutive frames, only when the same pixel is detected as a foreground point in δ consecutive frames is it updated to a moving point; short-lived foreground points are judged to be leaf jitter and remain background points. This further reduces misjudgment of smoke.
4) For flying birds, automobiles, and other moving objects that easily cause misjudgment: such interferers mostly translate in a fixed attitude in a single direction, whereas smoke moves by slow diffusion around the ignition point with the surrounding air flow. Moreover, the diffusion of smoke occludes the forest background: after a wavelet transform of the image, the high-frequency components of the occluded portion drop sharply, smoke edge lines are unsmooth, and the smoke as a whole tends to rise under the push of hot air. Therefore, to reduce misjudgments caused by rigid moving objects, this patent tracks the moving target region and establishes a smoke recognition model from dynamic and static features, namely the continuity condition of the moving target, the high-frequency components of the wavelet-transformed image over consecutive frames, the circularity feature compactness, and the motion direction, as the basis for smoke judgment. To improve judgment accuracy, the method further trains a support vector machine on a large number of known videos to build a more reliable smoke recognition model. When judging smoke, the nonlinear relation among the three features of high-frequency energy, circularity, and motion direction further improves the generalization ability of the model and the accuracy of judgment.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present invention.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention they serve to explain the invention, and are not to be construed as limiting it. In the drawings:
Fig. 1 is the flowchart of the whole system;
Fig. 2 is the motion detection flowchart;
Fig. 3 is the detailed flowchart of region tracking;
Fig. 4 is the high-frequency energy feature extraction flowchart;
Fig. 5 shows smoke detection results obtained with the present invention;
Fig. 6 is a schematic diagram of the 4 × 4 rectangular structuring element used for the opening operation in the morphological processing of the present invention;
Fig. 7 is a schematic diagram of the 10 × 10 rectangular structuring element used for the closing operation in the morphological processing of the present invention.
Specific embodiments
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention, not to limit it.
Fig. 1 shows the overall flow of the forest smoke and fire detection method based on video image analysis of the present invention. The method performs smoke detection on video images according to the following steps:
Step 1: Video image preprocessing: according to the frame count of the video, intercept one frame image from the video at a time in chronological order, convert it to a grayscale image, and process it in the order of step 2 to step 8. In this embodiment, the forest area is monitored by network cameras, and the video captured by the cameras is transmitted over a wireless network to a video processing server for smoke detection. To further reduce computation, the forest region in the video can be cropped by manual means so as to exclude interference of clouds with later recognition.
Step 2: Background modeling, with reference to Fig. 2: using the EViBe method, improved from the ViBe algorithm, perform background modeling for each pixel in the order of step 201 to step 204:
Step 201: Judge whether the intercepted image is the first frame of the video; if not, jump to step 202; if it is the first frame, initialize the background model as follows:
Let x denote the abscissa and y the ordinate of a pixel in the image, and let P(x, y) denote the pixel at coordinates (x, y). For each pixel P(x, y) in the image, establish a sample set M(P(x, y)) = {v1, v2, …, vi, …, vN}, whose initial values are the gray values of N pixels randomly selected from the neighborhood U(P(x, y)) of pixel P(x, y); the value of N is smaller than the number of pixels in the neighborhood U(P(x, y)), N denotes the number of elements in M(P(x, y)), and vi denotes its i-th element, 1 ≤ i ≤ N. In this embodiment the initial values are the gray values of N = 20 pixels randomly selected from the 24-neighborhood of pixel P(x, y). After a sample set has been established for every pixel P(x, y) in the image, jump to step 202;
Step 202: Background point judgment: let V(P(x, y)) denote the gray value of pixel P(x, y) and R a preset gray value difference. The gray value range centered on V(P(x, y)) with the preset difference R as its radius is denoted the background gray interval SR(V(P(x, y))) of pixel P(x, y). If at least #min elements of the sample set M(P(x, y)) fall within SR(V(P(x, y))), mark P(x, y) as a background point; otherwise mark P(x, y) as a foreground point and increment the foreground counter of P(x, y) by 1, where #min denotes the preset number of matching points. After all pixels in the image have been marked, jump to step 203.
The term "radius" here is a uniform wording across color spaces. In this embodiment images are processed in gray space, so the radius R actually refers to a pixel value difference in one-dimensional gray space. When the method of the present invention is applied in color spaces such as RGB or Lab, the pixel value is in fact represented by a three-dimensional array, so in that setting it is more appropriate to express the background color range by the Euclidean distance between color values.
Step 203: Update the background model: let β be the time sampling factor. For each pixel P(x, y) marked as a background point, replace an arbitrary element vi of the sample set M(P(x, y)) with the gray value V(P(x, y)) with probability 1/β, and at the same time, with probability 1/β, replace an arbitrary element of the sample set M(P(x', y')) of an arbitrary pixel P(x', y') in the spatial neighborhood U(P(x, y)) with the gray value V(P(x, y));
Step 204: Repeat the judgment of steps 202 to 203 for every subsequent frame. Let δ be the preset frame count for moving point judgment: a pixel judged a foreground point in δ consecutive frames is updated to a moving point, while other pixels are left unchanged. Remove the moving points from each frame to establish the background image of each frame, and jump to step 3.
In the background modeling method of the present invention, the training and modeling of an explicit model for each pixel in the existing background modeling technology is replaced by establishing a sample set for each background point; the pixel value of the current pixel is compared with the points in its sample set, and the pixel is judged a background point when the two values are close. This improvement avoids the huge computational load produced in the prior art by solving the optical flow constraint equation for the motion vector of every pixel or by solving model parameters iteratively. Because the computational load of this method is greatly reduced, the background update speed is accelerated, which in turn improves detection accuracy. Here, in step 202 the preset gray value difference R is taken as 20 and the preset number of matching points #min as 2; in step 203 the time sampling factor β is 16; in step 204, for misjudgments caused by leaves shaking in the wind, a counter with threshold δ is established: in consecutive frames, only when the same pixel is detected as a foreground point in δ consecutive frames is it updated to a moving point, so that short-lived leaf jitter remains background, further reducing misjudgment of smoke. The preset threshold δ of the foreground point counter is 50.
Step 3: Morphological processing: first perform an opening operation on the binary foreground mask of each frame with a 4 × 4 rectangular structuring element to filter out salt-and-pepper noise, then perform a closing operation on the image with a 10 × 10 rectangular structuring element to fill breaks in the foreground image, then jump to step 4. The 4 × 4 and 10 × 10 rectangular structuring elements are shown in Fig. 6 and Fig. 7, respectively.
Step 4: Motion segmentation: The contours of the foreground-point regions are detected and the minimum enclosing rectangle rect of each contour is computed. When the height rect_height of a minimum enclosing rectangle rect is not less than the height threshold hei_th, and the width rect_width of the minimum enclosing rectangle rect is not less than the width threshold wid_th, the parameters of the currently detected minimum enclosing rectangle are recorded. All computed minimum enclosing rectangles are numbered rect_{1_1} through rect_{k_n}, where rect_{1_1} denotes the 1st minimum enclosing rectangle in the 1st frame and rect_{k_n} denotes the n-th minimum enclosing rectangle in the k-th frame; the method then jumps to step 5. Here the height threshold hei_th and the width threshold wid_th are both taken as 8.
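Step 4 can be sketched as follows. A contour detector would normally be used; here a simple 4-connected flood fill stands in for contour extraction, and hei_th = wid_th = 8 follows the text. The function name and box tuple layout are illustrative:

```python
from collections import deque
import numpy as np

def bounding_boxes(mask, min_h=8, min_w=8):
    """Step 4 sketch: find each connected foreground region and keep
    its bounding rectangle (x, y, w, h) only if both sides reach the
    thresholds hei_th and wid_th."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Flood-fill one connected region, tracking its extent.
            q = deque([(sy, sx)])
            seen[sy, sx] = True
            y0 = y1 = sy
            x0 = x1 = sx
            while q:
                y, x = q.popleft()
                y0, y1 = min(y0, y), max(y1, y)
                x0, x1 = min(x0, x), max(x1, x)
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if y1 - y0 + 1 >= min_h and x1 - x0 + 1 >= min_w:
                boxes.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))
    return boxes
```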
Step 5: Area tracking: For each minimum enclosing rectangle rect_{j+1_n}, the area overlap ratio ratio = interrect / maxrect with each minimum enclosing rectangle rect_{j_1}, …, rect_{j_m} in the previous frame is computed, where interrect denotes the area of overlap between the two minimum enclosing rectangles and maxrect denotes the area of the larger of the two minimum enclosing rectangles. If the area overlap ratio between two minimum enclosing rectangles satisfies ratio ≥ 0.5, the two minimum enclosing rectangles are considered to match, the matching relationship between the two minimum enclosing rectangles is recorded, and the method then jumps to step 6.
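The overlap test of step 5 can be sketched as follows. Rectangles are (x, y, w, h) tuples and the helper names are illustrative; the 0.5 threshold and the "intersection over the larger rectangle" ratio follow the text:

```python
def overlap_ratio(a, b):
    """Step 5 sketch: intersection area divided by the area of the
    larger rectangle. Rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (iw * ih) / max(aw * ah, bw * bh)

def match_rects(prev_rects, cur_rects, thresh=0.5):
    """For each rectangle of the current frame, return the index of
    the matching rectangle in the previous frame, or -1 (the initial
    Flag value) when no rectangle overlaps enough."""
    flags = []
    for c in cur_rects:
        best = -1
        for m, p in enumerate(prev_rects):
            if overlap_ratio(c, p) >= thresh:
                best = m
                break
        flags.append(best)
    return flags
```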
Further, with reference to Fig. 3, a structure FRect containing the minimum enclosing rectangle rect and a number Flag may be used here to represent the corresponding minimum enclosing rectangle rect and the number of its corresponding rectangle in the previous frame. The initial value of Flag is -1; if step 5 judges that two minimum enclosing rectangles rect_{j+1_n} and rect_{j_m} match, the Flag value corresponding to rect_{j+1_n} is changed to the corresponding number m; otherwise the value of Flag is not processed.
Step 6: Region continuity judgment: Based on the matching relationships obtained in step 5, a minimum-enclosing-rectangle region that has a matching relationship in five consecutive frames is judged to be a moving region, and the method jumps to step 7; otherwise it jumps back to step 2.
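The continuity condition of step 6 (a matching relationship maintained across five consecutive frames, i.e. four consecutive frame-to-frame match links) can be sketched as follows; representing a tracked region by the list of its recorded Flag values is an assumption of this sketch:

```python
def persists(flag_history):
    """Step 6 sketch: a region counts as a moving region only if its
    last four Flag values (the links back across five consecutive
    frames) are all valid matches, i.e. none is -1."""
    return len(flag_history) >= 4 and all(f != -1 for f in flag_history[-4:])
```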
Step 7: Feature extraction: The high-frequency energy of the image portion corresponding to the moving region is extracted by a two-dimensional wavelet transform. In addition, the circularity feature of the moving region is computed from its contour perimeter Per and contour area Squ as Compactness = Per² / (4π·Squ). Meanwhile, the moving region is taken as a window, and the window is translated by one pixel in each of nine directions: upper-left, up, upper-right, left, original position, right, lower-left, down, and lower-right. The pixels in the window replace the pixels of the original image; for each of the nine shift directions, the difference between the gray value of each pixel in the shifted image and the gray value of the corresponding pixel in the previous frame is computed, and then the sum of the squared gray-value differences over all pixels of the shifted image is computed. The window shift direction that minimizes the sum of squared gray-value differences is determined, and this direction is taken as the main motion direction of the smoke; the nine directions upper-left, up, upper-right, left, still, right, lower-left, down, and lower-right are numbered 1 through 9, respectively;
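The circularity feature and the nine-direction search of step 7 can be sketched as follows. The direction numbering 1 to 9 follows the text; using `np.roll` (which wraps around at the borders) in place of a true translation with replacement is a simplification of this sketch, as are the function names:

```python
import numpy as np

def circularity(perimeter, area):
    """Compactness = Per^2 / (4 * pi * Squ): 1 for a circle, larger
    for ragged outlines such as smoke plumes."""
    return perimeter ** 2 / (4 * np.pi * area)

# The nine candidate one-pixel shifts, numbered 1-9 as in the text
# (upper-left, up, upper-right, left, still, right, lower-left, down,
# lower-right), as (dy, dx) offsets in array coordinates.
SHIFTS = {1: (-1, -1), 2: (-1, 0), 3: (-1, 1),
          4: (0, -1), 5: (0, 0), 6: (0, 1),
          7: (1, -1), 8: (1, 0), 9: (1, 1)}

def main_direction(prev_win, cur_win):
    """Shift the current window one pixel in each of the nine
    directions and pick the shift minimising the sum of squared gray
    differences against the previous frame's window."""
    best_dir, best_ssd = 5, np.inf
    for d, (dy, dx) in SHIFTS.items():
        shifted = np.roll(cur_win, (dy, dx), axis=(0, 1))
        ssd = float(((shifted.astype(np.float64) - prev_win) ** 2).sum())
        if ssd < best_ssd:
            best_dir, best_ssd = d, ssd
    return best_dir
```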
Step 8: Smoke judgment: The trained smoke identification model is invoked with the high-frequency energy, the circularity feature, and the motion direction to judge whether the corresponding moving region is a smoke region; if so, the region is marked with a red box, otherwise the method returns to step 2.
To reduce the false judgments caused by rigid moving objects such as flying birds and automobiles, the smoke judgment of step 8 in this method builds the smoke identification model from both dynamic and static features: the area tracking performed on the moving targets, the continuity condition of the moving targets over N consecutive frames, the high-frequency component of the wavelet-transformed image, the compactness, and the motion direction, which together serve as the basis for judging smoke. To improve the accuracy of the judgment, this method further uses a support vector machine trained on a large number of known videos to establish a more reliable smoke identification model. When the smoke judgment is carried out, the nonlinear relationship among the three features, high-frequency energy, circularity, and motion direction, further improves the generalization ability of the model and the accuracy of the judgment.
Thus, the smoke identification model in step 8 can further be obtained by training a support vector machine classifier (Support Vector Machine, SVM) on test videos. In the SVM classifier, the SVM type is set to C-class support vector classification (C_SVC), the kernel function is the radial basis function kernel, and the maximum number of iterations is 100.
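The text specifies a C_SVC classifier with an RBF kernel and at most 100 iterations (an OpenCV-style configuration). A minimal sketch of assembling the three-dimensional feature vector and training such a classifier, here using scikit-learn's `SVC` as a stand-in; the training samples below are illustrative placeholders, not data from the patent:

```python
import numpy as np
from sklearn.svm import SVC  # stand-in for the C_SVC / RBF setup named in the text

def make_feature(high_freq_energy, compactness, direction):
    """The three-dimensional feature vector of step 8: wavelet
    high-frequency energy, circularity, main motion direction (1-9)."""
    return [high_freq_energy, compactness, float(direction)]

# Illustrative labelled samples (real training would use features
# extracted from a large number of known smoke / non-smoke videos).
X = np.array([make_feature(0.9, 3.0, 2), make_feature(0.8, 2.5, 2),
              make_feature(0.1, 1.1, 6), make_feature(0.2, 1.0, 8)])
y = np.array([1, 1, 0, 0])  # 1 = smoke, 0 = non-smoke

# C-class SVC with a radial basis function kernel, capped at 100
# iterations as specified in the text.
model = SVC(C=1.0, kernel='rbf', max_iter=100).fit(X, y)
```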
In step 7, the specific steps of extracting the high-frequency energy of the image portion corresponding to the moving region by the two-dimensional wavelet transform are as follows:

Step 7.1.1: Adjust the image size: If the height and the width of the image corresponding to the moving region are both even numbers of pixels, the moving-region image is not adjusted; otherwise, a column or a row of pixels is added so that the height and the width of the moving-region image are both even;

Step 7.1.2: Lifting wavelet transform: The computation is first carried out in the vertical direction of the adjusted moving region, in the order of odd-even split, prediction, and update, as follows:
Odd-even split: The pixels of the adjusted moving-region image are divided into even columns E = {E_p | p = 1, 2, 3, …, r/2} and odd columns O = {O_q | q = 1, 2, 3, …, r/2}, where r is the number of pixel columns of the adjusted moving-region image, E_p denotes the p-th even column, and O_q denotes the q-th odd column;

Prediction: Compute the predicted values of the odd columns, OP = {OP_p | p = 1, 2, 3, …, r/2}, where OP_p = (E_p + E_{p+1})/2 for p = 1, 2, …, r/2 − 1 and OP_{r/2} = E_{r/2} for p = r/2, and compute the difference between the actual values of the odd columns and their predicted values, O′ = O − OP = {O′_q | q = 1, 2, 3, …, r/2};

Update: Choose the update operator U = {U_q | q = 1, 2, 3, …, r/2}, with U_1 = U_2 for q = 1; the low-frequency part of the image is then E′ = E + U; fRadius denotes the band coefficient; finally E″ = E′ × fRadius and O″ = O′ × fRadius;

The resulting O″ and E″ are spliced to form the result of the transform in the vertical direction. The result of the vertical-direction transform is then processed in the horizontal direction, likewise in the order of odd-even split, prediction, and update, with the computation corresponding to that in the vertical direction; after the computation in the horizontal direction is complete, the low-frequency and high-frequency components in the horizontal direction are spliced in the manner corresponding to the vertical direction to form the moving-region image Pic after the wavelet transform. The computation in the horizontal direction is as follows:
Odd-even split: The pixels of the image after the vertical-direction transform are divided into even rows EE = {EE_p | p = 1, 2, 3, …, rr/2} and odd rows OO = {OO_q | q = 1, 2, 3, …, rr/2}, where rr is the number of pixel rows of the adjusted moving-region image, EE_p denotes the p-th even row, and OO_q denotes the q-th odd row;

Prediction: Compute the predicted values of the odd rows, OOP = {OOP_p | p = 1, 2, 3, …, rr/2}, where OOP_p = (EE_p + EE_{p+1})/2 for p = 1, 2, …, rr/2 − 1 and OOP_{rr/2} = EE_{rr/2} for p = rr/2, and compute the difference between the actual values of the odd rows and their predicted values, OO′ = OO − OOP = {OO′_q | q = 1, 2, 3, …, rr/2};

Update: Choose the update operator UU = {UU_q | q = 1, 2, 3, …, rr/2}, with UU_1 = UU_2 for q = 1; the low-frequency part of the image is then EE′ = EE + UU; fRadius denotes the band coefficient; finally EE″ = EE′ × fRadius and OO″ = OO′ × fRadius;

After the computation in the horizontal direction is complete, the resulting OO″ and EE″ are spliced to form the moving-region image Pic after the wavelet transform;

Step 7.1.3: Let Pic_{u,v} denote the pixel value of the pixel at coordinate (u, v) in the moving-region image after the wavelet transform, W the width of the adjusted moving-region image, and H the height of the adjusted moving-region image; the high-frequency energy value of the image Pic after the wavelet transform is then the sum of the squared pixel values Pic²_{u,v} over the image, normalized by W × H.
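Steps 7.1.1 through 7.1.3 can be sketched as follows. The even/odd split, the prediction OP_p = (E_p + E_{p+1})/2, the difference O′ = O − OP, the update E′ = E + U, and the fRadius scaling follow the text; the concrete update operator (an average of neighbouring detail values divided by 4), the boundary handling, the value fRadius = √2/2, and the mean-square energy normalization are assumptions of this sketch, since the original formulas appear only as images in the patent:

```python
import numpy as np

def lift_1d(sig):
    """One lifting step along the last axis: even/odd split, predict
    odd samples from neighbouring even ones, then update the even
    part from the detail (prediction-error) part."""
    even = sig[..., 0::2].astype(np.float64)
    odd = sig[..., 1::2].astype(np.float64)
    pred = np.empty_like(odd)
    pred[..., :-1] = (even[..., :-1] + even[..., 1:]) / 2  # OP_p = (E_p + E_{p+1}) / 2
    pred[..., -1] = even[..., -1]                          # boundary (assumed)
    detail = odd - pred                                    # O' = O - OP
    update = np.zeros_like(even)
    if even.shape[-1] > 1:
        update[..., 1:] = (detail[..., :-1] + detail[..., 1:]) / 4  # assumed operator
        update[..., 0] = update[..., 1]                             # U_1 = U_2
    approx = even + update                                 # E' = E + U
    f = np.sqrt(2) / 2                                     # band coefficient (assumed)
    return approx * f, detail * f

def high_freq_energy(region):
    """Steps 7.1.1-7.1.3 sketch: pad to even size, lift along columns
    then along rows, and return the mean squared value of the
    high-frequency (non-LL) coefficients."""
    h, w = region.shape
    img = np.pad(region.astype(np.float64),
                 ((0, h % 2), (0, w % 2)), mode='edge')
    lo_v, hi_v = lift_1d(img.T)                  # vertical pass
    lo_v, hi_v = lo_v.T, hi_v.T
    ll, lh = lift_1d(lo_v)                       # horizontal pass, low band
    hl, hh = lift_1d(hi_v)                       # horizontal pass, high band
    hf = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
    return float((hf ** 2).sum() / (img.shape[0] * img.shape[1]))
```

A constant image has no detail and therefore zero high-frequency energy, while a textured region (such as smoke) yields a positive value.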
In application tests, a large number of smoke videos were tested. Fig. 5 shows the recognition result for one smoke-containing frame from a smoke video with frame size 320 × 240; it can be seen that the present invention can accurately identify the smoke.
Although existing detection methods also include ones that perform background modeling and motion detection by the ViBe technique, the ViBe algorithm they adopt randomly selects its 20 sample values from within the 8-neighborhood of each pixel, so repeated selections in the sample set are unavoidable, and repeatedly chosen samples increase the probability of misclassifying a pixel, which in turn affects the accuracy of smoke recognition. By contrast, the ViBe algorithm designed in the present invention establishes its initialization model in the first frame of the video by randomly gathering 20 samples from within the 24-neighborhood of each pixel, compares new sample values with the sample set to judge background points, and adopts a random selection strategy that avoids repeated sample selection while keeping detection fast, thereby further improving the accuracy of motion detection.
Those of ordinary skill in the art will appreciate that the foregoing is only preferred embodiments of the present invention and is not intended to limit the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some of their technical features. Any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present invention shall all be included within the protection scope of the present invention.

Claims (5)

1. A forest smoke and fire detection method based on video image analysis, characterized by being carried out according to the following steps:
Step 1: Video image preprocessing: According to the frame numbers in the video, one frame image is intercepted from the video at a time in chronological order and converted into a gray-level image, and the images are successively processed in the order of steps 2 to 8;
Step 2: Background modeling: Using the EViBe method improved from the ViBe algorithm, background modeling is carried out for each pixel in the order of steps 201 to 204:
Step 201: Judge whether the intercepted image is the first frame of the video; if it is not the first frame, jump to step 202; if it is the first frame, initialize the background model, the detailed process being as follows:
With x denoting the abscissa of a pixel in the image, y denoting the ordinate of a pixel in the image, and P(x, y) denoting the pixel at coordinate (x, y) in the image, a sample set M(P(x, y)) = {v_1, v_2, …, v_i, …, v_N} is established for each pixel P(x, y) in the image; the initial values of the sample set M(P(x, y)) are the gray values of N pixels randomly selected within the neighborhood U(P(x, y)) of pixel P(x, y), the value of N being less than the number of pixels within the neighborhood U(P(x, y)), where N denotes the number of elements in the sample set M(P(x, y)), v_i denotes the i-th element in the sample set M(P(x, y)), and 1 ≤ i ≤ N; after a sample set has been established for each pixel P(x, y) in the image, jump to step 202;
Step 202: Background point judgment: Let V(P(x, y)) denote the gray value of pixel P(x, y) and R the preset gray-value difference; the gray-value range centered on the gray value V(P(x, y)) of pixel P(x, y), with the preset gray-value difference R as its radius, is denoted as the background gray interval S_R(V(P(x, y))) corresponding to pixel P(x, y). If at least #_min elements of the sample set M(P(x, y)) of pixel P(x, y) fall within the background gray interval S_R(V(P(x, y))) corresponding to pixel P(x, y), pixel P(x, y) is marked as a background point; otherwise, pixel P(x, y) is marked as a foreground point and the foreground-point counter of pixel P(x, y) is incremented by 1, where #_min denotes the preset match count; after all pixels in the image have been marked, jump to step 203;
Step 203: Update the background model: Let β be the time sampling factor. For a pixel P(x, y) marked as a background point, with probability 1/β its gray value V(P(x, y)) replaces an arbitrary element v_i of the sample set M(P(x, y)); meanwhile, with probability 1/β, the gray value V(P(x, y)) replaces an arbitrary element of the sample set M(P(x′, y′)) of an arbitrary pixel P(x′, y′) within the neighborhood U(P(x, y)) of pixel P(x, y);
Step 204: Each subsequent frame image is repeatedly judged in the order of steps 202 to 203. Let δ be the preset moving-point judgment frame count; a pixel that is judged to be a foreground point in δ consecutive frames is updated to a moving point, and the other pixels are left unchanged; the moving points in each frame are removed to establish the background image of each frame, and the method jumps to step 3;
Step 3: Morphological processing: An opening operation with a 4 × 4 rectangular structuring element is first applied to the background image of each frame to filter out salt-and-pepper noise, and a closing operation with a 10 × 10 rectangular structuring element is then applied to the image to fill gaps in the background image, after which jump to step 4;
Step 4: Motion segmentation: Detect the contours of the moving-point regions and compute the minimum enclosing rectangle rect of each contour. When the height rect_height of a minimum enclosing rectangle rect is not less than the height threshold hei_th, and the width rect_width of the minimum enclosing rectangle rect is not less than the width threshold wid_th, record the parameters of the currently detected minimum enclosing rectangle; all computed minimum enclosing rectangles are numbered rect_{1_1} through rect_{k_n}, where rect_{1_1} denotes the 1st minimum enclosing rectangle in the 1st frame and rect_{k_n} denotes the n-th minimum enclosing rectangle in the k-th frame; then jump to step 5;
Step 5: Area tracking: For each minimum enclosing rectangle rect_{j+1_n}, compute the area overlap ratio ratio = interrect / maxrect with each minimum enclosing rectangle rect_{j_1}, …, rect_{j_m} in the previous frame, where interrect denotes the area of overlap between the two minimum enclosing rectangles and maxrect denotes the area of the larger of the two minimum enclosing rectangles. If the area overlap ratio between two minimum enclosing rectangles satisfies ratio ≥ 0.5, the two minimum enclosing rectangles are considered to match and the matching relationship between the two minimum enclosing rectangles is recorded; then jump to step 6;
Step 6: Region continuity judgment: Based on the matching relationships obtained in step 5, a minimum-enclosing-rectangle region that has a matching relationship in five consecutive frames is judged to be a moving region; jump to step 7, otherwise jump to step 2;
Step 7: Feature extraction: The high-frequency energy of the image portion corresponding to the moving region is extracted by a two-dimensional wavelet transform. In addition, the circularity feature of the moving region is computed from its contour perimeter Per and contour area Squ as Compactness = Per² / (4π·Squ). Meanwhile, the moving region is taken as a window, and the window is translated by one pixel in each of nine directions: upper-left, up, upper-right, left, original position, right, lower-left, down, and lower-right. The pixels in the window replace the pixels of the original image; for each of the nine shift directions, the difference between the gray value of each pixel in the shifted image and the gray value of the corresponding pixel in the previous frame is computed, and then the sum of the squared gray-value differences over all pixels of the shifted image is computed. The window shift direction that minimizes the sum of squared gray-value differences is determined, and this direction is taken as the main motion direction of the smoke;
Step 8: Smoke judgment: The trained smoke identification model is invoked with the high-frequency energy, the circularity feature, and the motion direction to judge whether the corresponding moving region is a smoke region; if so, the region is marked with a red box, otherwise return to step 2.
2. The forest smoke and fire detection method based on video image analysis according to claim 1, characterized in that the smoke identification model in step 8 is obtained by training an SVM classifier on test videos; in the SVM classifier, the SVM type is set to C-class support vector classification, the kernel function is the radial basis function kernel, and the maximum number of iterations is 100.
3. The forest smoke and fire detection method based on video image analysis according to claim 2, characterized in that in step 7, the specific steps of extracting the high-frequency energy of the image portion corresponding to the moving region by the two-dimensional wavelet transform are as follows:
Step 7.1.1: Adjust the image size: If the height and the width of the image corresponding to the moving region are both even numbers of pixels, the moving-region image is not adjusted; otherwise, a column or a row of pixels is added so that the height and the width of the moving-region image are both even;
Step 7.1.2: Lifting wavelet transform: The computation is first carried out in the vertical direction of the adjusted moving region, in the order of odd-even split, prediction, and update, as follows:
Odd-even split: The pixels of the adjusted moving-region image are divided into even columns E = {E_p | p = 1, 2, 3, …, r/2} and odd columns O = {O_q | q = 1, 2, 3, …, r/2}, where r is the number of pixel columns of the adjusted moving-region image, E_p denotes the p-th even column, and O_q denotes the q-th odd column;
Prediction: Compute the predicted values of the odd columns OP = {(E_p + E_{p+1})/2 | p = 1, 2, 3, …, r/2 − 1}, with OP_{r/2} = E_{r/2} for p = r/2, and compute the difference between the actual values of the odd columns and their predicted values, O′ = O − OP = {O′_q | q = 1, 2, 3, …, r/2};
Update: Choose the update operator U = {U_q | q = 1, 2, 3, …, r/2}, with U_1 = U_2 for q = 1; the low-frequency part of the image is then E′ = E + U; fRadius denotes the band coefficient; finally the high-frequency component in the vertical direction O″ = O′ × fRadius and the low-frequency component in the vertical direction E″ = E′ × fRadius are obtained;
The high-frequency component O″ and the low-frequency component E″ in the vertical direction are spliced to form the result of the transform in the vertical direction; the result of the vertical-direction transform is then processed in the horizontal direction, likewise in the order of odd-even split, prediction, and update, with the computation corresponding to that in the vertical direction; after the computation in the horizontal direction is complete, the low-frequency and high-frequency components in the horizontal direction are spliced in the manner corresponding to the vertical direction to form the moving-region image Pic after the wavelet transform;
Step 7.1.3: Let Pic_{u,v} denote the pixel value of the pixel at coordinate (u, v) in the moving-region image after the wavelet transform, W the width of the adjusted moving-region image, and H the height of the adjusted moving-region image; the high-frequency energy value of the image Pic after the wavelet transform is then the sum of the squared pixel values Pic²_{u,v} over the image, normalized by W × H.
4. The forest smoke and fire detection method based on video image analysis according to claim 3, characterized in that in step 201, the neighborhood is specifically the 24-neighborhood and the value of N is 20; in step 202, the preset gray-value difference R is 20 and the preset match count #_min is 2; in step 203, the time sampling factor β is 16; in step 204, the preset threshold δ of the foreground-point counter is 50; and in step 4, the height threshold hei_th and the width threshold wid_th are both 8.
5. The forest smoke and fire detection method based on video image analysis according to claim 3, characterized in that in step 5, a structure FRect containing the minimum enclosing rectangle rect and a number Flag is used to represent the corresponding minimum enclosing rectangle rect and the number of its corresponding rectangle in the previous frame; the initial value of Flag is −1; if step 5 judges that two minimum enclosing rectangles rect_{j+1_n} and rect_{j_m} match, the Flag value of rect_{j+1_n} is changed to the number m of the matched minimum enclosing rectangle rect_{j_m}; otherwise the value of Flag is not processed.
CN201610902197.7A 2016-10-17 2016-10-17 Forest smoke and fire detection method based on video image analysis Pending CN106650600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610902197.7A CN106650600A (en) 2016-10-17 2016-10-17 Forest smoke and fire detection method based on video image analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610902197.7A CN106650600A (en) 2016-10-17 2016-10-17 Forest smoke and fire detection method based on video image analysis

Publications (1)

Publication Number Publication Date
CN106650600A true CN106650600A (en) 2017-05-10

Family

ID=58856840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610902197.7A Pending CN106650600A (en) 2016-10-17 2016-10-17 Forest smoke and fire detection method based on video image analysis

Country Status (1)

Country Link
CN (1) CN106650600A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN101770644A (en) * 2010-01-19 2010-07-07 浙江林学院 Forest-fire remote video monitoring firework identification method
US20110122245A1 (en) * 2009-11-23 2011-05-26 Ashok Kumar Sinha FOREST FIRE CONTROL SYSTEMS (FFiCS) WITH SCANNER AND OPTICAL /INFRARED RADIATION DETECTOR (SOIRD) AND OPTIONALLY ALSO INCLUDING A SCANNER WITH ACCURATE LOCATION CALCULATOR (SALC) AND A SUPER-EFFICIENT SATELLITE/WIRELESS ANTENNA SYSTEM (SSWAS)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIN CAI et al.: "Intelligent Video Analysis-based Forest Smoke Detection Algorithms", 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery *
YU Ye et al.: "EViBe: An improved ViBe moving object detection algorithm", Chinese Journal of Scientific Instrument (仪器仪表学报) *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274374A (en) * 2017-07-03 2017-10-20 辽宁科技大学 A kind of smoke monitoring method based on computer vision technique
CN108009556A (en) * 2017-12-23 2018-05-08 浙江大学 A kind of floater in river detection method based on fixed point graphical analysis
CN108009556B (en) * 2017-12-23 2021-08-24 浙江大学 River floating object detection method based on fixed-point image analysis
CN108596267B (en) * 2018-05-03 2020-08-28 Oppo广东移动通信有限公司 Image reconstruction method, terminal equipment and computer readable storage medium
CN108596267A (en) * 2018-05-03 2018-09-28 Oppo广东移动通信有限公司 A kind of image rebuilding method, terminal device and computer readable storage medium
CN108921215A (en) * 2018-06-29 2018-11-30 重庆邮电大学 A kind of Smoke Detection based on local extremum Symbiotic Model and energy spectrometer
CN108985192A (en) * 2018-06-29 2018-12-11 东南大学 A kind of video smoke recognition methods based on multitask depth convolutional neural networks
CN110070106A (en) * 2019-03-26 2019-07-30 罗克佳华科技集团股份有限公司 Smog detection method, device and electronic equipment
CN110059613A (en) * 2019-04-16 2019-07-26 东南大学 A kind of separation of video image pyrotechnics and detection method based on rarefaction representation
CN110059613B (en) * 2019-04-16 2021-08-10 东南大学 Video image smoke and fire separation and detection method based on sparse representation
CN110207783A (en) * 2019-06-28 2019-09-06 湖南江河机电自动化设备股份有限公司 A kind of sensed water level method based on video identification
CN110309808A (en) * 2019-07-09 2019-10-08 北京林业大学 A kind of adaptive smog root node detection method under a wide range of scale space
CN110852174A (en) * 2019-10-16 2020-02-28 天津大学 Early smoke detection method based on video monitoring
CN111462451A (en) * 2019-11-01 2020-07-28 武汉纺织大学 Straw burning detection alarm system based on video information
CN111462451B (en) * 2019-11-01 2022-04-26 武汉纺织大学 Straw burning detection alarm system based on video information
CN113051970A (en) * 2019-12-26 2021-06-29 佛山市云米电器科技有限公司 Oil smoke concentration identification method, range hood and storage medium
CN111144312A (en) * 2019-12-27 2020-05-12 ***通信集团江苏有限公司 Image processing method, apparatus, device and medium
CN111144312B (en) * 2019-12-27 2024-03-22 ***通信集团江苏有限公司 Image processing method, device, equipment and medium
CN111402346A (en) * 2020-02-27 2020-07-10 辽宁百思特达半导体科技有限公司 Intelligent building system based on BIM and management method thereof
CN111639620A (en) * 2020-06-08 2020-09-08 深圳航天智慧城市***技术研究院有限公司 Fire disaster analysis method and system based on visible light image recognition
CN111639620B (en) * 2020-06-08 2023-11-10 深圳航天智慧城市***技术研究院有限公司 Fire analysis method and system based on visible light image recognition
CN112115875A (en) * 2020-09-21 2020-12-22 北京林业大学 Forest fire smoke root detection method based on dynamic and static combination area stacking strategy
CN112115875B (en) * 2020-09-21 2024-05-24 北京林业大学 Forest fire smoke root detection method based on dynamic and static combined region lamination strategy
CN112669361A (en) * 2020-12-11 2021-04-16 山东省科学院海洋仪器仪表研究所 Method for rapidly decomposing underwater image of seawater
CN114972111A (en) * 2022-06-16 2022-08-30 慧之安信息技术股份有限公司 Dense crowd counting method based on GAN image restoration
CN114972111B (en) * 2022-06-16 2023-01-10 慧之安信息技术股份有限公司 Dense crowd counting method based on GAN image restoration
CN116563743A (en) * 2022-12-09 2023-08-08 南京图格医疗科技有限公司 Detection method based on deep learning and smoke removal system
CN116563743B (en) * 2022-12-09 2023-12-01 南京图格医疗科技有限公司 Detection method based on deep learning and smoke removal system

Similar Documents

Publication Publication Date Title
CN106650600A (en) Forest smoke and fire detection method based on video image analysis
CN107862705A (en) A kind of unmanned plane small target detecting method based on motion feature and deep learning feature
CN107016357B (en) Video pedestrian detection method based on time domain convolutional neural network
CN104408482B (en) A kind of High Resolution SAR Images object detection method
CN103971114B (en) Forest fire detection method based on air remote sensing
CN107665498B (en) Full convolution network aircraft detection method based on typical example mining
CN106355188A (en) Image detection method and device
CN110689021A (en) Real-time target detection method in low-visibility environment based on deep learning
CN107092890A (en) Naval vessel detection and tracking based on infrared video
CN109949229A (en) A kind of target cooperative detection method under multi-platform multi-angle of view
CN109858547A (en) A kind of object detection method and device based on BSSD
CN107067412A (en) A kind of video flame smog detection method of Multi-information acquisition
Zhang et al. Isolation forest for anomaly detection in hyperspectral images
CN106815567B (en) Flame detection method and device based on video
CN113962900A (en) Method, device, equipment and medium for detecting infrared dim target under complex background
CN114463624A (en) Method and device for detecting illegal buildings applied to city management supervision
CN109766775A (en) A kind of vehicle detecting system based on depth convolutional neural networks
CN109409285A (en) Remote sensing video object detection method based on overlapping slice
CN113378912A (en) Forest area illegal reclamation land block detection method based on deep learning target detection
CN117475353A (en) Video-based abnormal smoke identification method and system
CN110334703B (en) Ship detection and identification method in day and night image
CN117011614A (en) Wild ginseng reed body detection and quality grade classification method and system based on deep learning
CN107729903A (en) SAR image object detection method based on area probability statistics and significance analysis
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
CN113706815B (en) Vehicle fire identification method combining YOLOv3 and optical flow method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170510
