GB2428473A - Fire detection by processing video images - Google Patents

Fire detection by processing video images

Info

Publication number
GB2428473A
Authority
GB
United Kingdom
Prior art keywords
input image
current input
fire
background estimation
area
Prior art date
Legal status
Withdrawn
Application number
GB0514706A
Other versions
GB0514706D0 (en)
Inventor
Simon Dominic Haynes
Current Assignee
Sony Europe Ltd
Original Assignee
Sony United Kingdom Ltd
Priority date
Filing date
Publication date
Application filed by Sony United Kingdom Ltd filed Critical Sony United Kingdom Ltd
Priority to GB0514706A
Publication of GB0514706D0
Publication of GB2428473A

Links

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00 - Fire alarms; Alarms responsive to explosion
    • G08B 17/12 - Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125 - Actuation by presence of radiation or particles by using a video camera to detect fire or smoke


Abstract

A fire detection method for identifying, in a current input image, an area indicative of the presence of fire, there being a sequence of two or more input images. The method comprises the steps of: forming a background estimation for the current input image from a plurality of the input images; and comparing the current input image 200 with the background estimation 202 to identify an area of the current input image indicative of the presence of fire in the current input image, the identified area being equivalent to the corresponding area in the background estimation with the addition of a characteristic of a flickering flame of a fire. The input images may be visible images from a video camera.

Description

Fire Detection

This invention relates to fire detection.
It is known to perform fire detection in a variety of ways. In particular, it is known to use video processing to identify regions of video images representing fire (or flames). Most of these video processing techniques rely on the use of infrared video cameras to detect hot areas in a scene being captured by the video camera, with subsequent spectral matching being used to identify regions of fire and ignore non-fire heat emitting bodies. Another video processing technique for detecting regions of fire/flames in captured video images uses an algorithm that looks for possible fire characteristics (namely a bright static area associated with a bright dynamic area) and tries to match any such detected fire characteristics with stored known fire characteristics.
According to an aspect of the invention, there is provided a fire detection method for identifying, in a current input image, an area indicative of the presence of fire, there being a sequence of two or more input images, the method comprising the steps of: forming a background estimation for the current input image from a plurality of the input images; and comparing the current input image with the background estimation to identify an area of the current input image indicative of the presence of fire in the current input image, the identified area being equivalent to the corresponding area in the background estimation with the addition of a characteristic of a flickering flame of a fire.
Embodiments of the invention have an advantage that they do not rely on the use of special (e.g. infrared) video cameras - instead, standard video cameras (such as those of a pre-existing closed circuit television (CCTV) system) may be used.
Furthermore, embodiments of the invention form an estimate of what constitutes the background of the scene being captured by a video camera (i.e. what would be behind a fire/flame). A comparison of this background estimate with a current input image provides a more sophisticated fire/flame detection system, as it utilises more of the available data than, say, looking for possible fire characteristics. Furthermore, embodiments of the invention do not rely on the detection of a bright static area associated with a bright dynamic area, which may not always be easily detectable as separate entities or may not always actually be present in the captured video images.
Further respective features and aspects of the invention are defined in the appended claims.
Embodiments of the invention will be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 schematically illustrates a fire (or flame) detection system according to an embodiment of the invention; Figure 2 schematically illustrates an overview of the video processing performed to detect fire/flames; Figure 3 is a schematic flow chart of the video processing performed to detect fire/flames; and Figure 4 schematically illustrates a method for maintaining and updating a background estimate.
Figure 1 schematically illustrates a fire (or flame) detection system according to an embodiment of the invention. Three video cameras 100A, 100B, 100C are connected to a video processing unit 102 which analyses the video captured by the video cameras 100 to determine whether the scene 103 that one of the video cameras is arranged to capture contains fire (or flames) 105. If the video processing unit 102 determines that the scene 103 contains fire 105, then the video processing unit 102 triggers an alarm 104. The alarm 104 may be an audible alarm, a visual alarm or an audible and visual alarm. The fire detection system shown in Figure 1 may be arranged such that a human operator is alerted to the possibility of fire 105 being present in the scene 103 being captured by one of the video cameras 100, in response to which the human operator performs a visual verification himself prior to setting off another (main) alarm, for example to call the emergency services. Additionally, or alternatively, the video processing unit 102 may be connected to a fire extinguisher system 106. The fire extinguisher system 106 may be a fully automatic fire extinguisher system or may be under the control of a human user. The fire extinguisher system 106 may use information provided to it by the video processing unit 102 concerning the location of the fire 105 within the scene 103.
The video cameras 100 shown in Figure 1 may be any ordinary video cameras and need not necessarily be special video cameras such as ultraviolet video cameras or infrared video cameras, i.e. the video cameras 100 may be video cameras that capture light in the visible spectrum. As such, the video cameras 100 may be video cameras of a closed circuit television (CCTV) system that already exists for surveillance purposes, the video outputs of the video cameras 100 being routed to the video processing unit 102 as well as to a pre-existing video surveillance unit (not shown in Figure 1).
It will be appreciated that the fire detection system shown in Figure 1 may make use of any number of video cameras 100.
Figure 2 schematically illustrates an overview of the video processing performed by the video processing unit 102 to detect fire. A current input image 200 from one of the video cameras 100 is received by the video processing unit 102. The video processing unit 102 maintains an estimate 202 of the background of the current input image 200. The background estimate 202 is updated on a regular basis, for example for every input image 200 received by the video processing unit 102. The background estimate 202 is an estimation of the scene 103 as viewed by the video camera 100 when the fire 105 is not present. Therefore, when the fire 105 is not present in the scene 103 being captured by the video camera 100, the current input image 200 should be approximately the same as the background estimate 202.
When fire 105 is present in the scene 103 being captured by the video camera 100, the current input image 200 will be approximately the same as the background estimate 202 except that some areas of the background estimate 202 will be replaced by an area representing the fire 105. The video processing unit 102 therefore compares the current input image 200 with the background estimate 202 to try to detect areas of the background estimate 202 that have been covered by an area representing fire 105. This results in a prediction 204 of where fire 105 may be present in the current input image 200.
Figure 3 is a schematic flow chart of the processing performed by the video processing unit 102. The video processing unit 102 makes use of a current input image im (corresponding to the current input image 200 of Figure 2), a background estimate bg (corresponding to the background estimate 202 of Figure 2) and the last input image im′ (the input image immediately preceding the current input image 200).
Processing begins at a step S300 at which the background estimate is updated. A method of updating the background estimate will be described in more detail later with reference to Figure 4.
At a step S302, the current input image and the background estimate are correlated with each other to form a correlation map c. Preferably, the correlation is calculated using the high frequency components of the current input image im and the background estimate bg. The correlation map c is formed according to Equation 1 below.
Equation 1: $c = \dfrac{\sum_{col=1}^{3} f(im_{hp} \cdot bg_{hp})}{\sqrt{\sum_{col=1}^{3} f(im_{hp} \cdot im_{hp}) \cdot \sum_{col=1}^{3} f(bg_{hp} \cdot bg_{hp})}}$, where $x_{hp} = x - f(x)$ and $f$ is a low pass filter.
Here, it is assumed that the current input image im and the background estimate bg are colour images with three colour planes (such as RGB or YCbCr), the summation being across the three colour planes. However, it will be appreciated that a different number of colour planes could be used. For example, black-and-white luminance images (i.e. one colour plane) could be used instead, in which case Equation 1 would be replaced by Equation 1′ below:
Equation 1′: $c = \dfrac{f(im_{hp} \cdot bg_{hp})}{\sqrt{f(im_{hp} \cdot im_{hp}) \cdot f(bg_{hp} \cdot bg_{hp})}}$
For clarity, it will be assumed, for the rest of the description, that three colour planes are being used, although it will be appreciated that embodiments of the invention are not restricted to using three colour planes.
In Equation 1, the correlation map c is calculated on a pixel-by-pixel basis (using corresponding pixel values in the current input image im and the background estimate bg). The correlation map c is used by the video processing unit 102 for subsequent processing stages to be described shortly. Additionally, the correlation map c will be required for performing fire/flame detection for the next input image and is therefore stored by the video processing unit 102. Consequently, for the current input image im, a correlation map c′, generated at the step S302 when performed on the last input image im′, will be available for processing the current input image.
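A minimal sketch of Equation 1 in Python with NumPy; the box filter stands in for the unspecified low-pass filter f, and all names (`correlation_map`, `box_filter`, the kernel size) are illustrative assumptions rather than identifiers from the patent.

```python
import numpy as np

def box_filter(img, k=5):
    """Simple box low-pass filter over a k x k neighbourhood.
    The patent only says 'low pass filter'; a box filter is an assumed choice."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def correlation_map(im, bg, eps=1e-8):
    """Equation 1: per-pixel normalised correlation of the high-frequency
    components of the current image and the background estimate, locally
    smoothed by f and summed over the colour planes."""
    num = np.zeros(im.shape[:2])
    d_im = np.zeros(im.shape[:2])
    d_bg = np.zeros(im.shape[:2])
    for col in range(im.shape[2]):
        im_hp = im[:, :, col] - box_filter(im[:, :, col])  # im_hp = im - f(im)
        bg_hp = bg[:, :, col] - box_filter(bg[:, :, col])  # bg_hp = bg - f(bg)
        num += box_filter(im_hp * bg_hp)
        d_im += box_filter(im_hp * im_hp)
        d_bg += box_filter(bg_hp * bg_hp)
    return num / (np.sqrt(d_im * d_bg) + eps)  # values roughly in [-1, 1]
```

An image compared against itself correlates near +1 wherever there is texture; compared against its inverse, near -1.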
At a step S304, a first stage of flickering flame detection is performed. This is achieved by looking for regions in the current input image im that (i) correlate well with the background estimate bg and (ii) for which the corresponding region in the last input image im′ did not correlate well with its corresponding background estimate.
This is performed on a pixel-by-pixel basis to produce a flame-flicker-map r using Equation 2 below.
Equation 2: $r = \max(c - c', 0)$
At a step S306, a further step of flame flicker detection is performed. Areas of the current input image that (i) were bright in the corresponding area of the last input image, but (ii) are no longer bright in the current input image, are determined. A brightness-drop-map b is generated on a pixel-by-pixel basis using Equation 3 below.
Equation 3: $b = \left[\tfrac{1}{3}\sum_{col=1}^{3} im'_{col} > 0.8\right] \;\&\; \left[\tfrac{1}{3}\sum_{col=1}^{3} im_{col} < 0.5\right]$
Equation 3 represents a binary decision for each of the pixels of the current input image im, and as such, values of the brightness-drop-map b are either 0 or 1. In Equation 3, the values 0.8 and 0.5 are empirically determined values and it will be appreciated that other values may be used as appropriate according to the requirements of the fire/flame detection system being employed.
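Equations 2 and 3 are per-pixel maps that can be sketched directly; the mean over colour planes in the brightness test is a reconstruction, since the exact per-plane form in the published text is garbled.

```python
import numpy as np

def flame_flicker_map(c, c_prev):
    """Equation 2: positive where the current frame correlates with the
    background better than the previous frame did."""
    return np.maximum(c - c_prev, 0.0)

def brightness_drop_map(im, im_prev, bright=0.8, dark=0.5):
    """Equation 3 (reconstructed): 1 where a pixel was bright in the last
    frame but is no longer bright now, 0 elsewhere. Averaging over the
    colour planes is an assumption."""
    was_bright = im_prev.mean(axis=2) > bright
    now_dark = im.mean(axis=2) < dark
    return (was_bright & now_dark).astype(float)
```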
At a step S308, the flame-flicker-map r and the brightness-drop-map b are combined, on a pixel-by-pixel basis, to generate an initial flame-probability-map f according to Equation 4 below.
Equation 4: $f = \dfrac{r \cdot b}{10\,w + 1}$
Again, it will be appreciated that the values 10 and 1 in Equation 4 are empirically determined and that other values may be used as appropriate according to the requirements of the fire/flame detection system being employed.
In Equation 4, w represents a weighting for each pixel. The purpose of the weighting w is to cancel out repeated regular covering and uncovering of the background estimate, for example by trees and fans. This is in contrast to flames of a fire 105, which tend to flicker in a non-deterministic way, uncovering and covering different areas of the background estimate each time. Therefore, at a step S310, the weighting w is updated on a pixel-by-pixel basis according to Equation 5 below.
Equation 5: $w_{updated} = 0.7\,w + \tfrac{1}{3}\sum_{col=1}^{3}\left|im'_{col} - im_{col}\right|$
As will be appreciated, Equation 5 sets a weighting value in $w_{updated}$ to be higher when a bright object reveals the background estimate bg. After this, that weighting value in w slowly decays away so that a history of changes is recorded. It will be appreciated that the value 0.7 in Equation 5 is empirically determined and other values may be used instead of 0.7 according to particular requirements of the fire/flame detection system being employed.
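Equations 4 and 5 combine the two flicker cues and maintain the per-pixel weighting; the exact numerator of Equation 5 is reconstructed (the published text is garbled), so treat the change term below as an assumption.

```python
import numpy as np

def initial_flame_probability(r, b, w):
    """Equation 4: f = r*b / (10*w + 1). A large weighting w (repeated,
    deterministic change at this pixel) suppresses the probability."""
    return (r * b) / (10.0 * w + 1.0)

def update_weighting(w, im, im_prev, decay=0.7):
    """Equation 5 (reconstructed): decay the old weighting and add the
    mean absolute per-pixel change between consecutive frames."""
    change = np.abs(im - im_prev).mean(axis=2)
    return decay * w + change
```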
Preferred embodiments constructively sum the initial flame-probability-map f over a local area, i.e. spatially spread the values of the initial flame-probability-map f. This is performed at a step S312. In one embodiment, this is performed by a simple two dimensional filtering. However, in preferred embodiments, a morphological dilation operation is applied. This is performed by applying a circular disk to each point within the initial flame-probability-map f, the effect of which is to grow each point within the initial flame-probability-map f outwards according to the radius of the circle. In this way, each point of the initial flame-probability-map f is spread over a circle centred at that point. These spreading circles simply add to each other to form a morphologically dilated flame-probability-map m.
This spatial spreading combined with the application of the weighting w provides a mechanism for suppressing deterministic changes, whilst allowing non- deterministic changes to be detected. Changes in exactly the same location would increase the weighting w for that location, thereby reducing the contribution by any future changes in this exact location. However, changes that occur within a neighbourhood will not be reduced by the weighting w since they are not in precisely the same location, but the morphological dilation operation will spread these changes, allowing them to be constructively summed.
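The spatial spreading of step S312, with each point of f spread over a disc and overlapping discs adding, amounts to convolving f with a circular kernel (the patent calls it a morphological dilation but describes additive spreading). A sketch, with an assumed radius:

```python
import numpy as np

def spread_over_disk(f, radius=2):
    """Spread each value of the map over a disc of the given radius;
    overlapping discs add constructively (step S312)."""
    out = np.zeros_like(f)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy * dy + dx * dx > radius * radius:
                continue  # offset lies outside the disc
            # np.roll wraps at the borders, acceptable for a sketch
            out += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return out
```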
As well as being spatially correlated, flames are temporally correlated. Therefore, at a step S314, a final flame-probability-map p is generated by using a modified infinite impulse response filter approach as given in Equation 6 below.
Equation 6: $p_{new} = \begin{cases} 0.98\,p + 0.02\,m & \text{if } m > p \\ (1 - 0.98\,\mathrm{dilate}(r))\,p + 0.98\,\mathrm{dilate}(r)\,m & \text{if } m \le p \end{cases}$
where $\mathrm{dilate}(r)$ represents the morphological dilation operation being performed on the flame-flicker-map r.
Equation 6 allows a value of the flame-probability-map p to build up slowly, but fall away very quickly. If a potential flame is detected, the probability p builds up slowly. However, if the corresponding region of the background estimate is revealed, but this is not bright enough to be a flame, then the probability p can be reduced very quickly. Again, it will be appreciated that the multiplication constants used in Equation 6 are empirically determined and may be varied according to the requirements of the particular fire/flame detection system being employed.
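The modified IIR filter of Equation 6, sketched per pixel; `r_dilated` stands for the morphologically dilated flame-flicker-map, computed elsewhere.

```python
import numpy as np

def update_flame_probability(p, m, r_dilated):
    """Equation 6: where m exceeds p the probability rises slowly
    (a 0.98/0.02 blend); otherwise it falls quickly, gated by the
    dilated flicker map."""
    rise = 0.98 * p + 0.02 * m
    fall = (1.0 - 0.98 * r_dilated) * p + 0.98 * r_dilated * m
    return np.where(m > p, rise, fall)
```

Running this once per frame lets p build up gradually for a persistent flicker, yet collapse almost immediately when the revealed region is not bright enough to be a flame.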
The values of the flame-probability-map p may then be compared to a threshold probability so that if one or more (or at least a sufficient number) of these values exceeds a threshold probability, then the video processing unit 102 activates the alarm 104 and/or the fire extinguishing system 106.
Figure 4 schematically illustrates a method for maintaining and updating the background estimate bg. The fire/flame detection described with reference to Figure 3 basically looks for an area in the current input image im representing a flickering flame. Regions of interest in the current input image im are therefore those that flicker at a certain rate which is neither "too fast" nor "too slow". Therefore, a short memory of representations of previous input images is maintained, from which the current background estimate bg is generated. As shown in Figure 4, four representations of previous input images are stored, History[1..4], although it will be appreciated that any number of such representations may be stored. Initially, at time t0, History[1..4] are initialised to a current input image. Subsequently, for each newly received current input image im: History[4] is replaced by History[3]; History[3] is replaced by History[2]; History[2] is replaced by History[1]; and History[1] is replaced by the current input image im. Following this, each pixel in each of History[2..4] is set to the smaller of that pixel value and the corresponding pixel in the current input image im. The current background estimate bg is then set to History[4]. In this way, the current background estimate ignores quickly changing features whilst still accurately estimating the true background.
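The Figure 4 history scheme can be sketched as a small class; the buffer depth of four follows the figure, and the class name is an assumption.

```python
import numpy as np

class BackgroundEstimator:
    """History-of-minima background model of Figure 4: buffers shift
    down one place per frame, buffers 2..4 take the per-pixel minimum
    against the new frame, and the oldest buffer is the estimate."""
    def __init__(self, first_frame, depth=4):
        # Initially all history entries equal the first input image
        self.history = [first_frame.astype(float).copy() for _ in range(depth)]

    def update(self, frame):
        frame = frame.astype(float)
        # History[4] <- History[3] <- ... <- History[1] <- current frame
        self.history = [frame.copy()] + self.history[:-1]
        # History[2..4]: per-pixel minimum against the current frame
        for i in range(1, len(self.history)):
            self.history[i] = np.minimum(self.history[i], frame)
        return self.history[-1]  # the background estimate
```

A bright transient (a flame) never survives the chain of minima, so the estimate keeps showing what lies behind it.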
Preferred embodiments perform one or more extra stages of processing in order to help improve the fire/flame detection results. One of these stages includes masking (or excluding) certain pixels from the fire/flame detection calculations. For example, in order to remove the adverse effects that saturated pixel values can have on the fire/flame detection calculations, pixel values taking a maximum or a minimum possible value are excluded from the fire/flame detection calculation. It will be appreciated that pixel values at or near the maximum or the minimum possible pixel value could also be excluded. Other pixels could also be excluded for other reasons.
For example, the background estimate could be analysed to determine areas of low detail, these areas being excluded from the fire/flame detection calculation. It will be appreciated that the masking could be performed based on pixel values either in the current input image im or the background estimate bg.
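Masking saturated pixels, as described above, can be sketched as a boolean mask over the image; the 0..1 value range is an assumption.

```python
import numpy as np

def saturation_mask(im, lo=0.0, hi=1.0):
    """True where a pixel may take part in the fire/flame calculation;
    False where any colour plane sits at the minimum or maximum
    representable value (saturated pixels distort the correlation)."""
    return ~np.any((im <= lo) | (im >= hi), axis=2)
```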
Another extra processing stage which preferred embodiments apply is gamma correction. This is performed to remove all gamma effects from the current input image im so that the processing is performed in the linear light domain. Gamma correction is performed according to Equation 7 below.
Equation 7: $im_{corrected} = im^{\gamma}$, where $\gamma$ is the gamma value of the video camera.
Another processing stage which preferred embodiments apply is contrast correction. It is often the case that the video camera 100 performs automatic contrast adjustment, for example when the sun moves behind a cloud. The general form of the equation for correcting contrast is given in Equation 8 below.
Equation 8: $im_{corrected} = k_{contrast} \cdot im$
An estimate for the contrast adjustment parameter $k_{contrast}$ is generated from the current input image im and the background estimate bg according to Equation 9 below.
Equation 9: $k_{contrast} = \dfrac{\sum \left(c \cdot \sum_{col=1}^{3} bg_{col}\right)}{\sum \left(c \cdot \sum_{col=1}^{3} im_{col}\right)}$
In Equation 9, the summation where col ranges from 1 to 3 is across the colour planes; the other summations are across all pixels in the correlation map c. Preferred embodiments also reject pixels where the ratio between the background estimate and the current input image is not approximately equal across all 3 colour planes.
The reason for including the correlation map c in Equation 9 is that this weights areas of the current input image more heavily where it correlates with the background estimate bg. This prevents $k_{contrast}$ from becoming overly affected by new objects appearing in the scene 103.
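Equations 7 to 9 sketched together; the gamma exponent 2.2 and the direction of the Equation 9 ratio (correcting im towards bg) are assumptions, since the published equations are illegible.

```python
import numpy as np

def gamma_correct(im, gamma=2.2):
    """Equation 7: undo the camera transfer curve so that processing is
    done in the linear light domain (2.2 is an assumed exponent)."""
    return im ** gamma

def estimate_contrast(im, bg, c):
    """Equation 9 (reconstructed): correlation-weighted ratio of the
    background estimate to the current image. Weighting by c stops new
    objects in the scene from skewing the estimate."""
    num = (c * bg.sum(axis=2)).sum()
    den = (c * im.sum(axis=2)).sum()
    return num / den

def contrast_correct(im, bg, c):
    """Equation 8: scale the current image by the estimated k_contrast."""
    return estimate_contrast(im, bg, c) * im
```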
Finally, the fire detection results produced by embodiments of the invention may be combined with smoke detection probabilities output by a smoke detection system. An example of a suitable smoke detection system is provided in co-pending application number * * * * * * * (agent's reference P022739GB). This smoke detection system outputs a probability map for whether a current input image im represents smoke. This probability map may be combined with the flame-probability-map p to provide an overall smoke-and-flame-probability-map (for example by simple multiplication of the two probability maps).
The fire/flame detection performed by the video processing unit 102 may be undertaken in software, hardware or a combination of hardware and software.
Insofar as the embodiments of the invention described above are implemented, at least in part, using software controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a storage medium by which such a computer program is stored, are envisaged as aspects of the invention.

Claims (24)

1. A fire detection method for identifying, in a current input image, an area indicative of the presence of fire, there being a sequence of two or more input images, the method comprising the steps of: forming a background estimation for the current input image from a plurality of the input images; and comparing the current input image with the background estimation to identify an area of the current input image indicative of the presence of fire in the current input image, the identified area being equivalent to the corresponding area in the background estimation with the addition of a characteristic of a flickering flame of a fire.
2. A method according to claim 1, in which the step of comparing the current input image with the background estimation comprises the step of: identifying an area of the current input image that matches the corresponding area of the background estimation to a greater degree than the corresponding area of a preceding input image matches the corresponding area of the respective preceding background estimation.
3. A method according to claim 2, in which the step of comparing the current input image with the background estimation further comprises the step of: determining whether the identified area has a reduced brightness level relative to the corresponding area in the preceding input image.
4. A method according to any one of the preceding claims, comprising the step of: forming a probability map in dependence upon the comparison of the current input image and the background estimation, each value of the probability map indicating a probability that a corresponding location in the current input image is indicative of the presence of fire.
5. A method according to claim 4, in which the probability map is dependent upon one or more weighting values, the weighting values being updated for each input image.
6. A method according to claim 4 or 5, in which the step of forming a probability map comprises the step of: combining a value at a location in the probability map with one or more other probability map values in a region of the probability map containing that location.
7. A method according to claim 6, in which the region is a substantially circular region centred on the location in the probability map.
8. A method according to any one of claims 4 to 7, in which the step of forming a probability map comprises the step of: combining the probability map with a probability map corresponding to a preceding input image.
9. A method according to claim 8, in which the step of combining comprises using an infinite impulse response filter on the probability map values.
10. A method according to any one of the preceding claims, in which the step of forming the background estimation comprises the steps of: storing, for the current input image and at least one preceding input image, a corresponding historical background estimation; setting a value in an historical background estimation to a corresponding value in the current input image if the corresponding value in the current input image represents a darker value than that value in the historical background estimation; and forming the background estimation as one of the historical background estimations.
11. A method according to claim 10, in which the background estimation is formed from the historical background estimation corresponding to the input image that precedes the input images corresponding to the other historical background estimations.
12. A method according to any one of the preceding claims, comprising the step of: removing, from the current input image, non-linear response effects introduced into the current input image when the current input image was generated.
13. A method according to any one of the preceding claims, comprising the step of: balancing the contrast of the current input image and the background estimation.
14. A method according to any one of the preceding claims, in which the input images represent light in the visible spectrum.
15. A method according to any one of the preceding claims, comprising the step of: receiving the sequence of two or more input images from a video camera.
16. A method according to any one of claims 4 to 15, comprising the step of: triggering an alarm if one or more of the probability map values exceeds a threshold value.
17. A method substantially as hereinbefore described with reference to Figures 2 to 4 of the accompanying drawings.
18. A fire detector operable to identify, in a current input image, an area indicative of the presence of fire, there being a sequence of two or more input images, the detector comprising: an estimator operable to form a background estimation for the current input image from a plurality of the input images; and a comparator operable to compare the current input image with the background estimation to identify an area of the current input image indicative of the presence of fire in the current input image, the identified area being equivalent to the corresponding area in the background estimation with the addition of a characteristic of a flickering flame of a fire.
19. A fire detector substantially as hereinbefore described with reference to Figures 2 to 4 of the accompanying drawings.
20. A fire detection system comprising: a video camera; and a fire detector according to claim 18 or 19 operable to receive the sequence of two or more input images from the video camera.
21. Computer software comprising program code for carrying out a method according to any one of claims 1 to 17.
22. A providing medium for providing computer software according to claim 21.
23. A medium according to claim 22, wherein the medium is a storage medium.
24. A medium according to claim 22, wherein the medium is a transmission medium.
GB0514706A 2005-07-18 2005-07-18 Fire detection by processing video images Withdrawn GB2428473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0514706A GB2428473A (en) 2005-07-18 2005-07-18 Fire detection by processing video images


Publications (2)

Publication Number Publication Date
GB0514706D0 GB0514706D0 (en) 2005-08-24
GB2428473A true GB2428473A (en) 2007-01-31

Family

ID=34897391


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7804522B2 (en) 2005-07-18 2010-09-28 Sony United Kingdom Limited Image analysis for smoke detection
GB2472646A (en) * 2009-08-14 2011-02-16 Alan Frederick Boyd CCTV system arranged to detect the characteristics of a fire
WO2017190882A1 (en) * 2016-05-04 2017-11-09 Robert Bosch Gmbh Detection device, method for detection of an event, and computer program
CN107577997A (en) * 2017-08-21 2018-01-12 国家电网公司 The discrimination method that mountain fire is invaded in a kind of electric transmission line channel

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030141980A1 (en) * 2000-02-07 2003-07-31 Moore Ian Frederick Smoke and flame detection
US20040175040A1 (en) * 2001-02-26 2004-09-09 Didier Rizzotti Process and device for detecting fires bases on image analysis





Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused (after publication under section 16(1))