WO2017084094A1 - Smoke detection device, method, and image processing apparatus

Smoke detection device, method, and image processing apparatus

Info

Publication number
WO2017084094A1
WO2017084094A1 PCT/CN2015/095178 CN2015095178W WO2017084094A1 WO 2017084094 A1 WO2017084094 A1 WO 2017084094A1 CN 2015095178 W CN2015095178 W CN 2015095178W WO 2017084094 A1 WO2017084094 A1 WO 2017084094A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
smoke
candidate region
current
attribute information
Prior art date
Application number
PCT/CN2015/095178
Other languages
English (en)
French (fr)
Inventor
白向晖
Original Assignee
富士通株式会社
白向晖
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社, 白向晖 filed Critical 富士通株式会社
Priority to CN201580084015.3A priority Critical patent/CN108140291A/zh
Priority to PCT/CN2015/095178 priority patent/WO2017084094A1/zh
Priority to EP15908590.1A priority patent/EP3379509A4/en
Priority to JP2018525692A priority patent/JP6620888B2/ja
Publication of WO2017084094A1 publication Critical patent/WO2017084094A1/zh
Priority to US15/978,817 priority patent/US10846867B2/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251: Analysis of motion using feature-based methods involving models
    • G06T 7/90: Determination of colour characteristics
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/435: Computation of moments
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00: Fire alarms; Alarms responsive to explosion
    • G08B 17/10: Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means
    • G08B 17/12: Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125: Actuation by presence of radiation or particles by using a video camera to detect fire or smoke

Definitions

  • the present invention relates to the field of graphic image technology, and in particular, to a smoke detecting device, a method, and an image processing device.
  • smoke detection is required in video surveillance. For example, when a fire occurs in a certain part of the building, if the smoke is automatically detected in the area through the video image, the fire alarm can be performed as soon as possible to reduce the damage caused by the fire.
  • Embodiments of the present invention provide a smoke detecting apparatus, method, and image processing apparatus capable of quickly and accurately detecting smoke through a video image, thereby improving detection accuracy of video-based smoke detection in illumination changes and complex environments.
  • a smoke detecting device comprising:
  • a background image modeling unit that performs background image modeling on the current image to acquire a foreground image and a background image of the current image
  • a candidate region acquiring unit configured to acquire one or more candidate regions in the current image for detecting a moving object based on the foreground image
  • an attribute information calculation unit that calculates attribute information of a certain candidate region corresponding to the current image and/or the background image; and
  • a smoke determining unit that determines whether or not smoke is present in the certain candidate region based on the attribute information.
  • a smoke detecting method comprising:
  • an image processing apparatus wherein the image processing apparatus comprises the smoke detecting apparatus as described above.
  • a computer readable program, wherein when the program is executed in an image processing apparatus, the program causes a computer to perform the smoke detecting method as described above in the image processing apparatus.
  • a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute a smoke detecting method as described above in an image processing apparatus.
  • An advantageous effect of the embodiments of the present invention is that one or more candidate regions are obtained based on the foreground image, attribute information of a certain candidate region corresponding to the current image and/or the background image is calculated, and whether smoke is present in the candidate region is determined according to the attribute information. Thereby, not only can smoke be detected quickly and accurately from the video image, but the detection accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.
  • FIG. 1 is a schematic view of a smoke detecting method according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of extracting a connected domain according to Embodiment 1 of the present invention.
  • Figure 3 is another schematic view of the smoke detecting method of Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of acquiring a continuous motion area according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of performing smoke detection on a candidate area according to Embodiment 1 of the present invention.
  • Figure 6 is a schematic view showing the direction of Embodiment 1 of the present invention.
  • FIG. 7 is another schematic diagram of performing smoke detection on a candidate area according to Embodiment 1 of the present invention.
  • FIG. 8 is another schematic diagram of performing smoke detection on a candidate area according to Embodiment 1 of the present invention.
  • FIG. 9 is another schematic diagram of performing smoke detection on a candidate area according to Embodiment 1 of the present invention.
  • FIG. 10 is another schematic diagram of performing smoke detection on a candidate area according to Embodiment 1 of the present invention.
  • FIG. 11 is a schematic diagram of acquiring a remaining motion area according to Embodiment 1 of the present invention.
  • FIG. 12 is another schematic diagram of performing smoke detection on a certain candidate area according to Embodiment 1 of the present invention.
  • Figure 13 is a schematic view of a smoke detecting device according to a second embodiment of the present invention.
  • FIG. 14 is a schematic diagram of a candidate area acquiring unit according to Embodiment 2 of the present invention.
  • Figure 15 is another schematic view of the smoke detecting device of Embodiment 2 of the present invention.
  • FIG. 16 is a schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • FIG. 17 is another schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • FIG. 18 is another schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • FIG. 19 is another schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • FIG. 20 is another schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • FIG. 21 is another schematic diagram of an attribute information calculation unit according to Embodiment 2 of the present invention.
  • Figure 22 is a diagram showing an image processing apparatus according to a third embodiment of the present invention.
  • Embodiments of the present invention provide a smoke detecting method.
  • 1 is a schematic diagram of a smoke detecting method according to an embodiment of the present invention. As shown in FIG. 1, the smoke detecting method includes:
  • Step 101 Perform background image modeling on the current image to obtain a foreground image and a background image of the current image;
  • Step 102 Acquire one or more candidate regions in the current image for detecting a moving object based on the foreground image
  • Step 103 Calculate attribute information of a certain candidate region corresponding to the current image and/or the background image;
  • Step 104 Determine whether there is smoke in the candidate area according to the attribute information.
  • a video including a plurality of frames can be obtained using a device such as a camera.
  • a background image modeling method based on a Gaussian Mixture Model (GMM) may be used to perform background modeling on a color current image (or a current frame) of the input video to obtain a foreground image and a background image.
  • the present invention is not limited thereto, and any method of background image modeling may be employed.
  • one or more candidate regions may be acquired based on the foreground image.
  • the foreground image may be represented as a binary image to obtain a binarized image of the foreground image; for example, the pixel value of the foreground portion pixel is “1”, and the pixel value of the background portion pixel is “0”.
  • the binarized image may be subjected to median filtering to remove small noise points. Then, a plurality of pixels having the same pixel value (for example, "1") and communicating with each other in the binarized image are used as one connected domain to acquire one or more connected domains representing the moving object in the foreground image. For example, a plurality of connected domains of different sizes can be extracted from a binarized image.
  • FIG. 2 is a schematic diagram of extracting a connected domain according to an embodiment of the present invention.
  • a plurality of pixels having a pixel value of, for example, “1” and connected may form a connected domain.
  • a total of five connected domains can be extracted from the binarized image, denoted connected domain 201, connected domain 202, ..., connected domain 205.
  • one or more connected domains may be selected to obtain one or more candidate regions.
  • the connected domain whose area is less than or equal to the preset threshold (the first threshold) may be removed, and/or the connected domain whose average color depth is outside the preset range may be removed.
  • the specific value of the first threshold may be preset according to an empirical value, for example, and the present invention does not limit the first threshold.
  • for example, the areas of the connected domains 203 and 204 are both smaller than the first threshold, while the areas of the connected domains 201, 202, and 205 are all greater than the first threshold; the connected domains 201, 202, and 205 may then be taken as candidate regions.
  • the attribute information may include one or more of the following: saturation information, gray-scale variance information, gradient direction information, gray-scale average information, and motion direction information.
  • the present invention is not limited thereto, and other attribute information corresponding to the current image and/or the background image may be used.
  • the present invention is described by taking the above attribute information as an example.
  • FIG. 3 is another schematic diagram of a smoke detecting method according to an embodiment of the present invention, further detecting smoke using a continuous motion region based on a candidate region.
  • the smoke detecting method includes:
  • Step 301 Perform background image modeling on the current image to obtain a foreground image and a background image of the current image.
  • Step 302 Acquire one or more candidate regions in the current image for detecting the moving object based on the foreground image.
  • step 303 a certain candidate area is selected.
  • Step 304 Acquire a continuous motion region corresponding to the candidate region according to the position of the candidate region in each of the plurality of image frames.
  • a plurality of (for example, N) image frames consecutive to the current frame may be acquired, and then the corresponding candidate regions in the N+1 frames are combined to construct a continuous motion region corresponding to the candidate region. That is, the continuous motion region is the "motion trajectory" of the candidate region in the N+1 image frames.
  • FIG. 4 is a schematic diagram of acquiring a continuous motion region according to an embodiment of the present invention.
  • the current frame is recorded as the Nth frame, and a candidate region 401 exists in each of the N+1 image frames from the 0th frame to the Nth frame.
  • the position and shape of the candidate region 401 in the 0th frame to the Nth frame may be different.
  • by merging these candidate regions 401, the continuous motion region 402 can be obtained.
  • Step 305 Calculate attribute information of the candidate region corresponding to the current image and/or the background image based on the continuous motion region.
  • Step 306 Determine whether there is smoke in the candidate area according to the attribute information.
  • step 307 it is determined whether there are other candidate regions; if yes, step 303 is performed to select another candidate region, and the other candidate region is further determined.
  • the flow of the smoke detecting method of the embodiment of the present invention has been schematically described above.
  • the smoke detection for a certain candidate region is further illustrated below, taking the motion direction information, the saturation average value, the grayscale variance value, the gradient direction information average value, and the grayscale average value as examples. For how to obtain the candidate regions and how to obtain the continuous motion region, reference may be made to the description above.
  • whether or not smoke is present in the candidate region may be determined according to whether a main motion direction of a certain candidate region in a plurality of image frames is downward.
  • FIG. 5 is a schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 5, the method includes:
  • Step 501 Calculate a motion direction of the candidate region relative to the current image in the plurality of image frames based on the centroid position and the center of gravity position of the candidate region.
  • for example, the centroid Mc(Xc, Yc) of the "candidate region" in the current frame can be calculated by the following formula (1):
  • $X_c = \frac{1}{N}\sum_{p \in component} p.x$, $Y_c = \frac{1}{N}\sum_{p \in component} p.y$  (1)
  • where N is the number of pixels included in the "candidate region", p ∈ component means that the "candidate region" includes the pixel p, p.x refers to the x coordinate of the pixel p, and p.y refers to the y coordinate of the pixel p.
  • Fig. 6 shows a schematic view of the direction of the embodiment of the present invention. As shown in Fig. 6, eight directions can be defined. However, the present invention is not limited thereto, and for example, more or less directions may be defined, and may be specifically defined according to actual conditions.
  • Step 502 counting the frequency of occurrence of each motion direction in a plurality of image frames.
  • Step 503 the direction of motion with the highest frequency of occurrence is taken as the main direction of motion of the candidate region.
  • the motion direction of the "candidate region" in successive frames can be recorded, together with the frequency of occurrence of each motion direction, and the motion direction with the highest frequency of occurrence is then regarded as the main motion direction of the "candidate region".
  • Step 504 it is determined whether the main motion direction is downward; in the case that the main motion direction is downward, step 505 is performed;
  • Step 505 determining that there is no smoke in the candidate area.
  • for example, if the main motion direction of the "candidate region" is downward (for example, directions 6, 7 or 8 as shown in FIG. 6), this "candidate region" of the current frame can be removed from the "candidate region" list; that is, it is determined that there is no smoke in the candidate region.
  • the main motion direction is not downward, it may be determined that there is smoke in the candidate region, or in order to make the detection result more accurate, the detection of other items of the candidate region may continue.
  • whether or not smoke is present in the candidate region may be determined according to whether the saturation information of the candidate region in the continuous motion region is less than a preset threshold.
  • FIG. 7 is another schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 7, the method includes:
  • Step 701 Perform color space conversion on the current image, and calculate a saturation color component according to the color component to obtain a saturation map of the current image;
  • Step 702 Calculate a current saturation average value of the candidate region in the continuous motion region based on a saturation map of the current image.
  • the calculation formula of the current saturation average value can be expressed by the following formula (3):
  • $S_{avg} = \frac{1}{N}\sum_{i \in \Omega} S_i$  (3)
  • where S_avg is the current saturation average value, Ω is the continuous motion region, N is the number of pixels of the continuous motion region, i is a certain pixel of the continuous motion region, and S_i is the saturation value of the pixel i.
  • Step 703 determining whether the current saturation average is greater than or equal to a preset threshold (second threshold); performing step 704 if the current saturation average is greater than or equal to the preset threshold;
  • the specific value of the second threshold may be preset according to an empirical value, for example, and the present invention does not limit the second threshold.
  • Step 704 determining that there is no smoke in the candidate area.
  • the current saturation average value is greater than or equal to the second threshold, it indicates that the saturation of the moving object is higher, and the saturation of the general smoke is lower, so that it is determined that there is no smoke in the candidate region. You can remove this "candidate area" of the current frame from the "candidate area” list. Furthermore, in the case where the current saturation average is smaller than the second threshold, it may be determined that there is smoke in the candidate region, or in order to make the detection result more accurate, the detection of other items of the candidate region may continue.
  • whether or not smoke is present in the candidate region may be determined according to a comparison result between current saturation information and background saturation information of a certain candidate region in the continuous motion region.
  • FIG. 8 is another schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 8, the method includes:
  • Step 801 performing color space conversion on the current image, and calculating a saturation color component according to the color component. To obtain a saturation map of the current image;
  • Step 802 Calculate a current saturation average value of the candidate region in the continuous motion region based on a saturation map of the current image.
  • Step 803 performing color space conversion on the background image, and calculating a saturation color component according to the color component to obtain a saturation map of the background image;
  • Step 804 Calculate an average background saturation of the candidate region in the continuous motion region based on the saturation map of the background image.
  • the formula for calculating the background saturation average value may be, for example, equation (3).
  • Step 805 determining whether the current saturation average is greater than or equal to the background saturation average; performing the step 806 if the current saturation average is greater than or equal to the background saturation average;
  • Step 806 determining that there is no smoke in the candidate area.
  • the current saturation average value is greater than or equal to the background saturation average value, it indicates that the overall saturation of the candidate region is higher, and the overall saturation of the region generally having smoke is lower, so It is determined that there is no smoke in the candidate area, and this "candidate area" of the current frame can be removed from the "candidate area” list.
  • the current saturation average is smaller than the background saturation average, it may be determined that there is smoke in the candidate region, or in order to make the detection result more accurate, the detection of other items of the candidate region may continue.
  • whether or not smoke is present in the candidate region may be determined according to gradation variance information of a certain candidate region in the continuous motion region.
  • FIG. 9 is another schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 9, the method includes:
  • Step 901 Calculate a grayscale variance value of the candidate region in the continuous motion region based on the grayscale image of the current image.
  • the calculation formula of the grayscale variance value can be expressed by the following formula (4):
  • $Y_{avg} = \frac{1}{N}\sum_{i \in \Omega} Y_i$, $Var = \frac{1}{N}\sum_{i \in \Omega}\left(Y_i - Y_{avg}\right)^2$  (4)
  • where Ω is the continuous motion region, N is the number of pixels of the continuous motion region, i is a certain pixel of the continuous motion region, Y_i is the gray value of the pixel i, Y_avg is the grayscale average of the continuous motion region, and Var is the grayscale variance value.
  • Step 902 determining whether the grayscale variance value is greater than or equal to a preset threshold (third threshold); performing step 903 if the grayscale variance value is greater than or equal to a preset threshold;
  • the specific value of the third threshold may be preset according to an empirical value, for example, and the present invention does not limit the third threshold.
  • Step 903 determining that there is no smoke in the candidate area.
  • the gray-scale variance value is greater than or equal to the third threshold, the texture of the object is higher, and the texture of the general smoke is lower, so that it is determined that there is no smoke in the candidate region, and the current This "candidate area" of the frame is removed from the “candidate area” list. Furthermore, in the case where the grayscale variance value is smaller than the third threshold, it may be determined that there is smoke in the candidate region, or in order to make the detection result more accurate, the detection of other items of the candidate region may be continued.
  • whether or not smoke is present in the candidate region may be determined based on grayscale average information of a candidate region in the continuous motion region.
  • FIG. 10 is another schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 10, the method includes:
  • Step 1001 removing the candidate region from the continuous motion region to obtain the remaining motion region.
  • Figure 11 is a schematic diagram of the acquisition of the remaining motion area, schematically showing the remaining motion area obtained on the basis of Figure 4, in accordance with an embodiment of the present invention.
  • the candidate region 401 of the current image (Nth frame) can be removed from the continuous motion region 402 shown in FIG. 4, thereby obtaining the remaining motion region 1101.
  • Step 1002 Calculate a current gray average value of the remaining candidate regions based on a grayscale image of the current image
  • the calculation formula of the current gray average value can be expressed by the following formula (5):
  • $F_{avg} = \frac{1}{N}\sum_{i \in \Omega} Y_i$  (5)
  • where Ω is the remaining motion region, N is the number of pixels of the remaining motion region, i is a certain pixel of the remaining motion region, Y_i is the gray value of the pixel i in the current image, and F_avg is the current grayscale average of the remaining motion region.
  • Step 1003 Calculate an average of the background gray levels of the remaining candidate regions based on the grayscale image of the background image.
  • the calculation formula of the background gray average value can be expressed by the following formula (6):
  • $B_{avg} = \frac{1}{N}\sum_{j \in \Omega} Y_j$  (6)
  • where Ω is the remaining motion region, N is the number of pixels of the remaining motion region, j is a certain pixel of the remaining motion region, Y_j is the gray value of the pixel j in the background image, and B_avg is the background grayscale average of the remaining motion region.
  • Step 1004 calculating a difference between the current gray level average value and the background gray level average value
  • Step 1005 it is determined whether the difference is less than or equal to a preset threshold (fourth threshold); if the difference is less than or equal to the preset threshold, step 1006 is performed;
  • the specific value of the fourth threshold may be set in advance based on an empirical value, for example, and the present invention does not limit the fourth threshold.
  • Step 1006 determining that there is no smoke in the candidate area.
  • if the difference between the current gray average value and the background gray average value is less than or equal to the fourth threshold, it indicates that the moving object in the candidate region is a rigid object, whereas smoke generally has a diffuse, divergent character; it can therefore be determined that there is no smoke in the candidate region, and this "candidate region" of the current frame can be removed from the "candidate region" list.
  • the difference is greater than the fourth threshold, it may be determined that there is smoke in the candidate area, or in order to make the detection result more accurate, the detection of other items of the candidate area may continue.
  • FIG. 12 is another schematic diagram of performing smoke detection on a candidate area according to an embodiment of the present invention. As shown in FIG. 12, the method includes:
  • Step 1201 Calculate a horizontal gradient and a vertical gradient of the pixel according to a gray image of the current image for a certain pixel in the candidate region to obtain a current image gradient direction of the pixel;
  • Step 1202 Calculate a horizontal gradient and a vertical gradient of the pixel based on a grayscale image of the background image to obtain the background image gradient direction of the pixel.
  • the horizontal gradient of a certain pixel can be calculated as described in the following formula (7):
  • $G_x = \left[f(x+1,y-1) + 2f(x+1,y) + f(x+1,y+1)\right] - \left[f(x-1,y-1) + 2f(x-1,y) + f(x-1,y+1)\right]$  (7)
  • the vertical gradient of the pixel can be calculated as described in the following formula (8):
  • $G_y = \left[f(x-1,y-1) + 2f(x,y-1) + f(x+1,y-1)\right] - \left[f(x-1,y+1) + 2f(x,y+1) + f(x+1,y+1)\right]$  (8)
  • where f is the pixel, x is the x coordinate of the pixel f, and y is the y coordinate of the pixel f.
  • Step 1203 Calculate an angle correlation value between a current image gradient direction of the pixel and a background image gradient direction;
  • the angle may be obtained according to the current image gradient direction and the background image gradient direction, and then the correlation value (for example, the cosine value) of the angle may be calculated, but the embodiment is not limited thereto, and for example, other correlation may be Values (such as cotangent values, etc.) are described below with the cosine value of the included angle as an example.
  • step 1204 the angle correlation value of the plurality of pixels (for example, all pixels) in the candidate region is counted and averaged, and the average angle correlation value of the candidate region is used as the average value of the gradient direction information.
  • Step 1205 determining whether the average value of the gradient direction information is greater than or equal to a preset threshold (fifth threshold); and performing step 1206 if the average value of the gradient direction information is greater than or equal to a preset threshold;
  • the specific value of the fifth threshold value may be set in advance based on an empirical value, for example, and the present invention does not limit the fifth threshold.
  • step 1206 it is determined that there is no smoke in the candidate area.
  • the average value of the gradient direction information is greater than or equal to the fifth threshold, it indicates that the candidate region is not a foreground generated by a real moving object, but a pseudo foreground caused by a change in illumination, and thus may be This "candidate area" of the current frame is removed from the "candidate area” list. Further, in the case where the average value of the gradient direction information is smaller than the fifth threshold, it may be determined that there is smoke in the candidate region, or in order to make the detection result more accurate, the detection of other items of the candidate region may be continued.
  • one or more of the first to sixth implementations described above may be employed.
  • only one of the implementations may be used, or all six of the above implementations may be used.
  • a specific detection scheme can be determined according to the actual situation.
  • one or more candidate regions are acquired based on the foreground image, attribute information of a certain candidate region corresponding to the current image and/or the background image is calculated, and whether smoke is present in the candidate region is determined according to the attribute information.
  • the embodiment of the present invention provides a smoke detecting device, which corresponds to the smoke detecting method described in Embodiment 1, wherein the same content is not described again.
  • FIG. 13 is a schematic diagram of a smoke detecting device according to an embodiment of the present invention. As shown in FIG. 13, the smoke detecting device 1300 includes:
  • the background image modeling unit 1301 performs background image modeling on the current image to acquire a foreground image and a background image of the current image;
  • the candidate region acquiring unit 1302 acquires one or more candidate regions in the current image for detecting the moving object based on the foreground image;
  • the attribute information calculation unit 1303 calculates attribute information of a certain candidate region corresponding to the current image and/or the background image
  • the smoke determining unit 1304 determines whether smoke is present in the candidate region according to the attribute information of the candidate region corresponding to the current image and/or the background image.
  • FIG. 14 is a schematic diagram of a candidate region obtaining unit according to an embodiment of the present invention.
  • the candidate region obtaining unit 1302 may include:
  • the binarization map acquiring unit 1401 acquires a binarized image of the foreground image
  • the connected domain acquiring unit 1402 is configured to obtain, as a connected domain, a plurality of pixels having the same pixel value and connected to each other in the binarized image, to acquire one or more connected domains representing the moving object in the foreground image;
  • the connected domain selecting unit 1403 selects the connected domain to acquire one or more candidate regions.
  • the connected domain selecting unit 1403 may be configured to: remove the connected domain whose area is less than or equal to the preset threshold, and/or remove the connected domain whose average color depth is outside the preset range.
  • the present invention is not limited thereto, and the connected domain may be filtered according to other rules.
  • the smoke detecting device 1500 includes: a background image modeling unit 1301, a candidate region acquiring unit 1302, an attribute information calculation unit 1303, and a smoke determining unit 1304, as described above.
  • the smoke detecting device 1500 may further include:
  • the motion area acquiring unit 1501 acquires a continuous motion area corresponding to the candidate area according to the position of the candidate area in the plurality of image frames including the current image, respectively;
  • the attribute information calculation unit 1303 may be further configured to: calculate the attribute information of the candidate area corresponding to the current image and/or the background image based on the continuous motion area.
  • whether or not smoke is present in the candidate region may be determined according to whether a main motion direction of a certain candidate region in a plurality of image frames is downward.
  • the attribute information calculation unit 1303 is further configured to: obtain a main motion direction of the candidate region in the plurality of image frames; the smoke determining unit 1304 is further configured to: determine that there is no smoke in the candidate region in the case where the main motion direction of the candidate region is downward.
  • FIG. 16 is a schematic diagram of an attribute information calculation unit according to an embodiment of the present invention. As shown in FIG. 16, the attribute information calculation unit 1303 may include:
  • the motion direction calculation unit 1601 calculates a motion direction of the candidate region relative to the current image in the plurality of image frames based on the centroid position and the gravity center position of the candidate region;
  • a motion direction statistic unit 1602 that counts frequencies at which each motion direction appears in a plurality of image frames
  • the main motion direction determining unit 1603 regards the motion direction with the highest frequency of occurrence as the main motion direction of the candidate region.
  • whether the smoke is present in the candidate region may be determined according to whether the saturation information of the candidate region in the continuous motion region is less than a preset threshold.
  • the attribute information calculation unit 1303 may include:
  • the current saturation map obtaining unit 1701 performs color space conversion on the current image, and calculates a saturation color component according to the color component to obtain a saturation map of the current image;
  • the current saturation calculation unit 1702 calculates a current saturation average value of the candidate region in the continuous motion region based on the saturation map of the current image.
  • the smoke determining unit 1304 is further configured to: determine that no smoke exists in the candidate region if the current saturation average value is greater than or equal to the preset threshold.
  • whether or not smoke is present in the candidate region may be determined according to a comparison result between the current saturation information and the background saturation information of the candidate region in the continuous motion region.
  • the attribute information calculation unit 1303 may include a current saturation map acquisition unit 1701 and a current saturation calculation unit 1702, as described above.
  • the attribute information calculation unit 1303 may further include:
  • the background saturation map acquiring unit 1801 performs color space conversion on the background image, and calculates a saturation color component according to the color component to obtain a saturation map of the background image;
  • the background saturation calculation unit 1802 calculates an average background saturation of the candidate region in the continuous motion region based on the saturation map of the background image.
  • the smoke determining unit 1304 is further configured to: determine that no smoke exists in the candidate region if the current saturation average value is greater than or equal to the background saturation average value.
  • whether or not smoke is present in the candidate region may be determined according to gray-scale variance information of a candidate region in the continuous motion region.
  • FIG. 19 is another schematic diagram of the attribute information calculation unit according to the embodiment of the present invention. As shown in FIG. 19, the attribute information calculation unit 1303 may include:
  • the variance value calculation unit 1901 calculates a gray-scale variance value of the candidate region in the continuous motion region based on the grayscale image of the current image.
  • the smoke determining unit 1304 is further configured to: determine that no smoke exists in the candidate region if the grayscale variance value is greater than or equal to a preset threshold.
  • whether or not smoke is present in the candidate region may be determined according to grayscale average information of a candidate region in the continuous motion region.
  • FIG. 20 is another schematic diagram of the attribute information calculation unit according to the embodiment of the present invention. As shown in FIG. 20, the attribute information calculation unit 1303 may include:
  • the motion area adjustment unit 2001 removes the candidate area from the continuous motion area to acquire the remaining motion area
  • the current average value calculating unit 2002 calculates a current gray average value of the remaining candidate regions based on the grayscale image of the current image
  • a background average calculation unit 2003 that calculates a background gray average value of the remaining candidate regions based on the grayscale image of the background image
  • the difference calculation unit 2004 calculates a difference between the current gray average value and the background gray average value.
  • the smoke determining unit 1304 is further configured to: if the difference between the current gray average value and the background gray average value is less than or equal to a preset threshold, determine that there is no smoke in the candidate area.
  • whether there is smoke in the candidate region may be determined according to gradient direction information of a certain candidate region.
  • the attribute information calculation unit 1303 may be further configured to: calculate an average value of the gradient direction information of the candidate region; the smoke determining unit 1304 may be further configured to: when the average value of the gradient direction information is greater than or equal to a preset threshold Next, it is determined that there is no smoke in the candidate area.
  • the attribute information calculation unit 1303 may include:
  • the current gradient calculating unit 2101 calculates, for a certain pixel in the candidate region, a horizontal gradient and a vertical gradient of the pixel based on the grayscale image of the current image to obtain a current image gradient direction of the pixel;
  • the background gradient calculating unit 2102 calculates a horizontal gradient and a vertical gradient of the pixel based on the grayscale image of the background image to obtain a background image gradient direction of the pixel;
  • the angle correlation value calculation unit 2103 calculates an angle correlation value of a current image gradient direction of the pixel and a gradient direction of the background image
  • the gradient average value obtaining unit 2104 calculates and averages the angle correlation values of the plurality of pixels in the candidate region, and uses the average angle correlation value of the candidate region as the gradient direction information average value.
  • the attribute information may include one or more of the following: saturation information, gray-scale variance information, gradient direction information, gray-scale average information, and motion direction information.
  • the present invention is not limited thereto, and for example, other attribute information may be used for judgment.
  • one or more of the above implementations may be adopted, and the specific detection scheme may be determined according to the actual situation.
  • one or more candidate regions are acquired based on the foreground image, attribute information of a certain candidate region corresponding to the current image and/or the background image is calculated, and whether smoke is present in the candidate region is determined according to the attribute information.
  • An embodiment of the present invention provides an image processing apparatus including the smoke detecting apparatus as described in Embodiment 2.
  • FIG. 22 is a diagram showing an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus 2200 may include a central processing unit (CPU) 100 and a memory 110; the memory 110 is coupled to the central processing unit 100.
  • the memory 110 can store various data; in addition, a program for information processing is stored, and the program is executed under the control of the central processing unit 100.
  • the functionality of the smoke detecting device can be integrated into the central processor 100.
  • the central processing unit 100 can be configured to control the smoke detecting method described in Embodiment 1.
  • the smoke detecting device can be configured separately from the central processing unit 100.
  • the smoke detecting device can be configured as a chip connected to the central processing unit 100, and the function of the smoke detecting device can be realized by the control of the central processing unit 100. .
  • the central processing unit 100 can be configured to perform the following control:
  • the central processing unit 100 may be further configured to: obtain a continuous motion region corresponding to the candidate region according to the position of the candidate region in the plurality of image frames respectively; and calculate the candidate region based on the continuous motion region Attribute information corresponding to the current image and/or background image.
  • the image processing apparatus 2200 may further include: an input/output (I/O) device 120, a display 130, and the like; the functions of these components are similar to those of the prior art and are not described again here. It should be noted that the image processing apparatus 2200 does not necessarily have to include all of the components shown in FIG. 22; furthermore, the image processing apparatus 2200 may also include components not shown in FIG. 22, for which reference may be made to the prior art.
  • An embodiment of the present invention provides a computer readable program, wherein when the program is executed in an image processing apparatus, the program causes a computer to execute the smoke detecting method as described in Embodiment 1 in the image processing apparatus.
  • An embodiment of the present invention provides a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute the smoke detecting method as described in Embodiment 1 in an image processing apparatus.
  • the above apparatus and method of the present invention may be implemented by hardware or by hardware in combination with software.
  • the present invention relates to a computer readable program that, when executed by a logic component, enables the logic component to implement the apparatus or components described above, or to cause the logic component to implement the various methods described above Or steps.
  • the present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like.
  • One or more of the functional blocks described in the figures and/or one or more combinations of the functional blocks may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA) for performing the functions described herein.
  • One or more of the functional blocks described with respect to the figures and/or one or more combinations of the functional blocks may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Image Analysis (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Alarm Systems (AREA)

Abstract

A smoke detection device and method, and an image processing apparatus. The smoke detection method comprises: performing background image modeling on a current image to acquire a foreground image and a background image of the current image (101); acquiring, based on the foreground image, one or more candidate regions in the current image for detecting a moving object (102); calculating attribute information of a certain candidate region corresponding to the current image and/or the background image (103); and determining, according to the attribute information, whether smoke is present in the candidate region (104). Thereby, not only can smoke be detected quickly and accurately from video images, but the detection accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.

Description

Smoke detection device, method, and image processing apparatus
Technical Field
The present invention relates to the field of graphics and image technology, and in particular to a smoke detection device, a smoke detection method, and an image processing apparatus.
Background Art
At present, smoke detection is required in video surveillance. For example, when a fire breaks out somewhere in a building, if the presence of smoke in that area can be detected automatically from the video images, a fire alarm can be raised as soon as possible to reduce the damage caused by the fire.
However, because the motion of smoke is diffuse in nature, it is relatively difficult to detect smoke accurately based on video images. The existing technical solutions that detect smoke by analyzing video images all suffer from low detection accuracy and cannot perform the detection quickly and accurately.
It should be noted that the above introduction to the technical background is given merely for the convenience of a clear and complete description of the technical solutions of the present invention, and to facilitate the understanding of those skilled in the art. It should not be considered that the above technical solutions are known to those skilled in the art merely because they are set forth in the Background section of the present invention.
Summary of the Invention
Embodiments of the present invention provide a smoke detection device, a smoke detection method, and an image processing apparatus, which can detect smoke quickly and accurately from video images and improve the detection accuracy of video-based smoke detection under illumination changes and in complex environments.
According to a first aspect of the embodiments of the present invention, a smoke detection device is provided, the smoke detection device comprising:
a background image modeling unit, which performs background image modeling on a current image to acquire a foreground image and a background image of the current image;
a candidate region acquiring unit, which acquires, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
an attribute information calculating unit, which calculates attribute information of a certain candidate region corresponding to the current image and/or the background image; and
a smoke determining unit, which determines, according to the attribute information, whether smoke is present in the certain candidate region.
According to a second aspect of the embodiments of the present invention, a smoke detection method is provided, the smoke detection method comprising:
performing background image modeling on a current image to acquire a foreground image and a background image of the current image;
acquiring, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
calculating attribute information of a certain candidate region corresponding to the current image and/or the background image; and
determining, according to the attribute information, whether smoke is present in the certain candidate region.
According to a third aspect of the embodiments of the present invention, an image processing apparatus is provided, the image processing apparatus comprising the smoke detection device described above.
According to a further aspect of the embodiments of the present invention, a computer readable program is provided, wherein, when the program is executed in an image processing apparatus, the program causes a computer to execute, in the image processing apparatus, the smoke detection method described above.
According to a further aspect of the embodiments of the present invention, a storage medium storing a computer readable program is provided, wherein the computer readable program causes a computer to execute, in an image processing apparatus, the smoke detection method described above.
An advantageous effect of the embodiments of the present invention is that one or more candidate regions are acquired based on the foreground image, attribute information of a certain candidate region corresponding to the current image and/or the background image is calculated, and whether smoke is present in the candidate region is determined according to the attribute information. Thereby, not only can smoke be detected quickly and accurately from video images, but the detection accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.
With reference to the following description and drawings, particular embodiments of the present invention are disclosed in detail, indicating the manner in which the principles of the present invention may be employed. It should be understood that the embodiments of the present invention are not thereby limited in scope. Within the spirit and scope of the appended claims, the embodiments of the present invention include many changes, modifications and equivalents.
Features that are described and/or illustrated with respect to one embodiment may be used in the same or a similar way in one or more other embodiments, may be combined with features of other embodiments, or may replace features of other embodiments.
It should be emphasized that the term "comprise/include", when used herein, refers to the presence of a feature, an integer, a step or a component, but does not preclude the presence or addition of one or more other features, integers, steps or components.
Brief Description of the Drawings
Many aspects of the present invention can be better understood with reference to the following drawings. The components in the drawings are not drawn to scale, but are merely intended to illustrate the principles of the present invention. For ease of illustrating and describing some parts of the present invention, corresponding parts in the drawings may be enlarged or reduced.
Elements and features described in one drawing or one embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. In addition, in the drawings, like reference numerals denote corresponding components in several drawings and may be used to indicate corresponding components used in more than one embodiment.
FIG. 1 is a schematic diagram of the smoke detection method of Embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of extracting connected domains in Embodiment 1 of the present invention;
FIG. 3 is another schematic diagram of the smoke detection method of Embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of acquiring a continuous motion region in Embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of the directions in Embodiment 1 of the present invention;
FIG. 7 is another schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 8 is another schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 9 is another schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 10 is another schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 11 is a schematic diagram of acquiring a remaining motion region in Embodiment 1 of the present invention;
FIG. 12 is another schematic diagram of performing smoke detection on a certain candidate region in Embodiment 1 of the present invention;
FIG. 13 is a schematic diagram of the smoke detection device of Embodiment 2 of the present invention;
FIG. 14 is a schematic diagram of the candidate region acquiring unit of Embodiment 2 of the present invention;
FIG. 15 is another schematic diagram of the smoke detection device of Embodiment 2 of the present invention;
FIG. 16 is a schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 17 is another schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 18 is another schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 19 is another schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 20 is another schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 21 is another schematic diagram of the attribute information calculating unit of Embodiment 2 of the present invention;
FIG. 22 is a schematic diagram of the image processing apparatus of Embodiment 3 of the present invention.
Detailed Description of the Embodiments
The foregoing and other features of the present invention will become apparent from the following description with reference to the drawings. In the description and the drawings, particular embodiments of the present invention are disclosed, indicating some of the embodiments in which the principles of the present invention may be employed. It should be understood that the present invention is not limited to the described embodiments; on the contrary, the present invention includes all modifications, variations and equivalents falling within the scope of the appended claims.
Embodiment 1
An embodiment of the present invention provides a smoke detection method. FIG. 1 is a schematic diagram of the smoke detection method of the embodiment of the present invention. As shown in FIG. 1, the smoke detection method comprises:
Step 101: performing background image modeling on a current image to acquire a foreground image and a background image of the current image;
Step 102: acquiring, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
Step 103: calculating attribute information of a certain candidate region corresponding to the current image and/or the background image; and
Step 104: determining, according to the attribute information, whether smoke is present in the candidate region.
In this embodiment, a video containing a plurality of frames can be obtained using a device such as a camera. A background image modeling method based on a Gaussian Mixture Model (GMM) may be used to perform background modeling on the color current image (also referred to as the current frame) of the input video to obtain the foreground image and the background image. However, the present invention is not limited thereto, and any background image modeling method may be used.
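As an illustration only (not the patent's own implementation), the following sketch shows how GMM-based background modeling of the kind mentioned above can be set up with OpenCV's MOG2 background subtractor; the video file name and the parameter values are assumptions.

```python
# A minimal sketch of GMM-based background modeling with OpenCV (illustrative parameters).
import cv2

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                         detectShadows=False)

ret, frame = cap.read()
while ret:
    fg_mask = mog.apply(frame)               # foreground mask of the current frame
    bg_image = mog.getBackgroundImage()      # current estimate of the background image
    # ... candidate-region extraction and the smoke checks described below would follow ...
    ret, frame = cap.read()
cap.release()
```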
In this embodiment, one or more candidate regions may be acquired based on the foreground image. Specifically, the foreground image may be represented as a binary image to obtain a binarized image of the foreground image; for example, pixels of the foreground portion have a pixel value of "1", and pixels of the background portion have a pixel value of "0".
In this embodiment, median filtering may be applied to the binarized image to remove small noise points. Then, a plurality of pixels in the binarized image that have the same pixel value (for example, "1") and are connected to each other are taken as one connected domain, so as to acquire one or more connected domains representing moving objects in the foreground image. For example, several connected domains of different sizes can be extracted from one binarized image.
FIG. 2 is a schematic diagram of extracting connected domains according to the embodiment of the present invention. As shown in FIG. 2, a plurality of connected pixels whose pixel value is, for example, "1" can form one connected domain. A total of 5 connected domains can be extracted from this binarized image, denoted connected domain 201, connected domain 202, ..., connected domain 205.
In this embodiment, the one or more connected domains may be selected (filtered) to obtain one or more candidate regions. For example, connected domains whose area is less than or equal to a preset threshold (a first threshold) may be removed, and/or connected domains whose average color depth is outside a preset range may be removed. The specific value of the first threshold may, for example, be preset according to empirical values; the present invention does not limit the first threshold.
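A minimal sketch of the binarization, median filtering, connected-domain extraction and area filtering described above; `min_area` is an illustrative stand-in for the first threshold, not a value taken from the patent.

```python
import cv2
import numpy as np

def candidate_regions(fg_mask, min_area=150):
    """Binarize the foreground mask, remove small noise points, and keep only
    connected domains whose area exceeds min_area (a stand-in for the first threshold)."""
    binary = (fg_mask > 0).astype(np.uint8)          # foreground pixels -> 1, background -> 0
    binary = cv2.medianBlur(binary, 3)               # median filtering removes small noise points
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    regions = []
    for k in range(1, n):                            # label 0 is the background
        if stats[k, cv2.CC_STAT_AREA] > min_area:
            regions.append(labels == k)              # boolean mask of one candidate region
    return regions
```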
In the example shown in FIG. 2, for example, the areas of connected domains 203 and 204 are both smaller than the first threshold, while the areas of connected domains 201, 202 and 205 are all larger than the first threshold; connected domains 201, 202 and 205 may then be taken as candidate regions.
In this embodiment, whether smoke is present may be judged for each candidate region. For example, for a certain candidate region, the attribute information of the candidate region corresponding to the current image and/or the background image may be calculated, and whether smoke is present in the candidate region may be determined according to the attribute information.
The attribute information may include one or more of the following: saturation information, grayscale variance information, gradient direction information, grayscale average information, and motion direction information. However, the present invention is not limited thereto, and other attribute information corresponding to the current image and/or the background image may also be used; the present invention is described only with the above attribute information as examples.
Thereby, not only can smoke be detected quickly and accurately from video images, but the detection accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.
FIG. 3 is another schematic diagram of the smoke detection method of the embodiment of the present invention, in which the detection of smoke is further performed using a continuous motion region based on a candidate region. As shown in FIG. 3, the smoke detection method comprises:
Step 301: performing background image modeling on the current image to acquire a foreground image and a background image of the current image.
Step 302: acquiring, based on the foreground image, one or more candidate regions in the current image for detecting a moving object.
Step 303: selecting a certain candidate region.
Step 304: acquiring a continuous motion region corresponding to the candidate region according to the positions of the candidate region in a plurality of image frames, respectively.
In this embodiment, a plurality of (for example, N) image frames consecutive before the current frame may be acquired, and the corresponding candidate regions in these N+1 frames are then merged to construct the continuous motion region corresponding to the candidate region; that is, the continuous motion region is the "motion trajectory" of the candidate region in these N+1 image frames.
FIG. 4 is a schematic diagram of acquiring a continuous motion region according to the embodiment of the present invention. As shown in FIG. 4, the current frame is denoted the Nth frame, and a candidate region 401 exists in each of the N+1 image frames from the Nth frame to the 0th frame. The position and shape of the candidate region 401 may differ from the 0th frame to the Nth frame; by merging these candidate regions 401, the continuous motion region 402 can be obtained.
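One simple way to realize the merging described above is to OR together the candidate-region masks of the current frame and the previous N frames; the sketch below assumes masks of identical size and uses an illustrative N.

```python
import numpy as np
from collections import deque

N = 10                                    # illustrative number of previous frames
mask_history = deque(maxlen=N + 1)        # candidate-region masks of the last N+1 frames

def continuous_motion_region(current_mask):
    """Merge the candidate-region masks of the current frame and the previous N frames
    into one continuous motion region (the region's "motion trajectory")."""
    mask_history.append(current_mask.astype(bool))
    region = np.zeros_like(current_mask, dtype=bool)
    for mask in mask_history:
        region |= mask
    return region
```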
Step 305: calculating, based on the continuous motion region, attribute information of the candidate region corresponding to the current image and/or the background image.
Step 306: determining, according to the attribute information, whether smoke is present in the candidate region.
Step 307: judging whether there are other candidate regions; if so, performing step 303 to select another candidate region and continuing the judgment for that candidate region.
The flow of the smoke detection method of the embodiment of the present invention has been schematically described above. The smoke detection for a certain candidate region in the present invention is further described below, taking the motion direction information, the saturation average, the grayscale variance value, the gradient direction information average, and the grayscale average as examples. For how to obtain the candidate regions and how to obtain the continuous motion region, reference may be made to the above description.
In one implementation (Implementation 1), whether smoke is present in a certain candidate region may be determined according to whether the main motion direction of the candidate region in a plurality of image frames is downward.
FIG. 5 is a schematic diagram of performing smoke detection on a certain candidate region according to the embodiment of the present invention. As shown in FIG. 5, the method comprises:
Step 501: calculating, based on the centroid position and the center-of-gravity position of the candidate region, the motion direction of the candidate region in the plurality of image frames relative to the current image.
For example, the centroid Mc(Xc, Yc) of the "candidate region" in the current frame can be calculated by the following formula (1):
$$X_c = \frac{1}{N}\sum_{p \in component} p.x, \qquad Y_c = \frac{1}{N}\sum_{p \in component} p.y \tag{1}$$
where N is the number of pixels contained in the "candidate region", p ∈ component means that the "candidate region" contains the pixel p, p.x denotes the x coordinate of the pixel p, and p.y denotes the y coordinate of the pixel p.
Assuming that the centroid of the "candidate region" in the current frame is Mc(Xc, Yc), and that the center of gravity of the corresponding "candidate region" in a frame several frames before the current frame (for example, the 5th or the 10th frame before it) is Mp(Xp, Yp), the following values are calculated:
ΔX = Xc - Xp
ΔY = Yc - Yp
FIG. 6 shows a schematic diagram of the directions of the embodiment of the present invention. As shown in FIG. 6, 8 directions can be defined. However, the present invention is not limited thereto; for example, more or fewer directions may be defined, as appropriate for the actual situation.
If ΔX > 0 and ΔY = 0, the motion direction is 1;
if ΔX > 0 and ΔY < 0, the motion direction is 2;
if ΔX = 0 and ΔY < 0, the motion direction is 3;
if ΔX < 0 and ΔY < 0, the motion direction is 4;
if ΔX < 0 and ΔY = 0, the motion direction is 5;
if ΔX < 0 and ΔY > 0, the motion direction is 6;
if ΔX = 0 and ΔY > 0, the motion direction is 7;
if ΔX > 0 and ΔY > 0, the motion direction is 8.
In this way, the motion direction of the candidate region in each image frame relative to the current frame can be obtained.
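The direction rules above transcribe directly into a small helper; the sketch assumes image coordinates with y increasing downward, which is consistent with the text treating directions 6, 7 and 8 as downward.

```python
def motion_direction(xc, yc, xp, yp):
    """Quantize the displacement of the region between two frames into the 8 directions
    of FIG. 6 (y grows downward, so directions 6-8 correspond to downward motion)."""
    dx, dy = xc - xp, yc - yp
    if dx > 0 and dy == 0: return 1
    if dx > 0 and dy < 0:  return 2
    if dx == 0 and dy < 0: return 3
    if dx < 0 and dy < 0:  return 4
    if dx < 0 and dy == 0: return 5
    if dx < 0 and dy > 0:  return 6
    if dx == 0 and dy > 0: return 7
    if dx > 0 and dy > 0:  return 8
    return 0   # dx == dy == 0 (no displacement) is not covered by the rules above
```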
Step 502: counting the frequency with which each motion direction appears in the plurality of image frames.
Step 503: taking the motion direction with the highest frequency of occurrence as the main motion direction of the candidate region.
In this implementation, the motion directions of the "candidate region" in a number of consecutive frames can be recorded, together with the frequency of occurrence of each motion direction; the motion direction with the highest frequency of occurrence is then regarded as the main motion direction of the "candidate region".
Step 504: judging whether the main motion direction is downward; if the main motion direction is downward, performing step 505;
Step 505: determining that there is no smoke in the candidate region.
In this implementation, for example, if the main motion direction of the "candidate region" is downward (for example, directions 6, 7 or 8 as shown in FIG. 6), this "candidate region" of the current frame can be removed from the "candidate region" list; that is, it is determined that there is no smoke in the candidate region. In addition, if the main motion direction is not downward, it may be determined that there is smoke in the candidate region, or, in order to make the detection result more accurate, the detection of other items may be continued for the candidate region.
In another implementation (Implementation 2), whether smoke is present in a certain candidate region may be determined according to whether the saturation information of the candidate region in the continuous motion region is less than a preset threshold.
FIG. 7 is another schematic diagram of performing smoke detection on a certain candidate region according to the embodiment of the present invention. As shown in FIG. 7, the method comprises:
Step 701: performing color space conversion on the current image, and calculating the saturation color component from the color components to obtain a saturation map of the current image.
For example, the saturation can be calculated as shown in formula (2). (Formula (2) is given as an image in the original publication.) The formula only illustrates, by way of example, how the saturation of a certain pixel is calculated; any existing method of calculating saturation may be used, and details are not repeated here.
Step 702: calculating, based on the saturation map of the current image, the current saturation average of the candidate region in the continuous motion region.
For example, the current saturation average can be calculated as shown in the following formula (3):
$$S_{avg} = \frac{1}{N}\sum_{i \in \Omega} S_i \tag{3}$$
where S_avg is the current saturation average, Ω is the continuous motion region, N is the number of pixels in the continuous motion region, i is a pixel in the continuous motion region, and S_i is the saturation value of pixel i.
The above formula only illustrates, by way of example, how the current saturation average is calculated; the present invention is not limited thereto, and appropriate adjustments or variations may be made according to the actual situation.
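A sketch of formula (3) over a boolean mask of the continuous motion region; OpenCV's HSV S channel is used here as one possible saturation definition, since the text allows any existing way of computing saturation.

```python
import cv2
import numpy as np

def saturation_average(bgr_image, region_mask):
    """Average saturation S_avg over the pixels selected by region_mask (formula (3)),
    using the HSV S channel normalized to [0, 1]."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float32) / 255.0
    return float(s[region_mask].mean())
```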
Step 703: judging whether the current saturation average is greater than or equal to a preset threshold (a second threshold); if the current saturation average is greater than or equal to the preset threshold, performing step 704.
In this implementation, the specific value of the second threshold may, for example, be preset according to empirical values; the present invention does not limit the second threshold.
Step 704: determining that there is no smoke in the candidate region.
In this implementation, for example, if the current saturation average is greater than or equal to the second threshold, this indicates that the saturation of the moving object is relatively high, whereas smoke generally has low saturation; it can therefore be determined that there is no smoke in the candidate region, and this "candidate region" of the current frame can be removed from the "candidate region" list. In addition, if the current saturation average is less than the second threshold, it may be determined that there is smoke in the candidate region, or, in order to make the detection result more accurate, the detection of other items may be continued for the candidate region.
In another implementation (Implementation 3), whether smoke is present in a certain candidate region may be determined according to the result of comparing the current saturation information and the background saturation information of the candidate region in the continuous motion region.
FIG. 8 is another schematic diagram of performing smoke detection on a certain candidate region according to the embodiment of the present invention. As shown in FIG. 8, the method comprises:
Step 801: performing color space conversion on the current image, and calculating the saturation color component from the color components to obtain a saturation map of the current image;
Step 802: calculating, based on the saturation map of the current image, the current saturation average of the candidate region in the continuous motion region;
Step 803: performing color space conversion on the background image, and calculating the saturation color component from the color components to obtain a saturation map of the background image;
Step 804: calculating, based on the saturation map of the background image, the background saturation average of the candidate region in the continuous motion region.
In this implementation, the background saturation average may also be calculated using, for example, formula (3).
Step 805: judging whether the current saturation average is greater than or equal to the background saturation average; if the current saturation average is greater than or equal to the background saturation average, performing step 806.
Step 806: determining that there is no smoke in the candidate region.
In this implementation, for example, if the current saturation average is greater than or equal to the background saturation average, this indicates that the overall saturation of the candidate region is relatively high, whereas a region that actually contains smoke generally has a lower overall saturation; it can therefore be determined that there is no smoke in the candidate region, and this "candidate region" of the current frame can be removed from the "candidate region" list. In addition, if the current saturation average is less than the background saturation average, it may be determined that there is smoke in the candidate region, or, in order to make the detection result more accurate, the detection of other items may be continued for the candidate region.
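Purely as a sketch, the checks of Implementations 2 and 3 can be combined as below, reusing the saturation_average helper from the previous sketch; the 0.3 threshold is an illustrative stand-in for the second threshold.

```python
def saturation_check(current_bgr, background_bgr, region_mask, s_threshold=0.3):
    """Return True if the region is still a smoke candidate after the saturation tests:
    its current saturation average must be below the threshold (Implementation 2) and
    below the background saturation average of the same region (Implementation 3)."""
    s_cur = saturation_average(current_bgr, region_mask)
    s_bg = saturation_average(background_bgr, region_mask)
    return s_cur < s_threshold and s_cur < s_bg
```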
In another implementation (Implementation 4), whether smoke is present in a certain candidate region may be determined according to the grayscale variance information of the candidate region in the continuous motion region.
FIG. 9 is another schematic diagram of performing smoke detection on a certain candidate region according to the embodiment of the present invention. As shown in FIG. 9, the method comprises:
Step 901: calculating, based on the grayscale image of the current image, the grayscale variance value of the candidate region in the continuous motion region.
For example, the grayscale variance value can be calculated as shown in the following formula (4):
$$Y_{avg} = \frac{1}{N}\sum_{i \in \Omega} Y_i, \qquad Var = \frac{1}{N}\sum_{i \in \Omega}\left(Y_i - Y_{avg}\right)^2 \tag{4}$$
where Ω is the continuous motion region, N is the number of pixels in the continuous motion region, i is a pixel in the continuous motion region, Y_i is the grayscale value of pixel i, Y_avg is the grayscale average of the continuous motion region, and Var is the grayscale variance value.
The above formula only illustrates, by way of example, how the grayscale variance value is calculated; the present invention is not limited thereto, and appropriate adjustments or variations may be made according to the actual situation. In addition, any existing method may be used to compute the grayscale image or the grayscale values, and details are not repeated here.
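A sketch of formula (4): the grayscale variance of the continuous motion region, computed over a boolean mask of that region.

```python
import cv2
import numpy as np

def grayscale_variance(bgr_image, region_mask):
    """Grayscale variance Var over the continuous motion region (formula (4));
    smoke tends to have weak texture and therefore a low variance."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    values = gray[region_mask]
    return float(((values - values.mean()) ** 2).mean())   # equivalent to values.var()
```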
Step 902: judging whether the grayscale variance value is greater than or equal to a preset threshold (a third threshold); if the grayscale variance value is greater than or equal to the preset threshold, performing step 903.
In this implementation, the specific value of the third threshold may, for example, be preset according to empirical values; the present invention does not limit the third threshold.
Step 903: determining that there is no smoke in the candidate region.
In this implementation, for example, if the grayscale variance value is greater than or equal to the third threshold, this indicates that the texture of the object is relatively strong, whereas smoke generally has weak texture; it can therefore be determined that there is no smoke in the candidate region, and this "candidate region" of the current frame can be removed from the "candidate region" list. In addition, if the grayscale variance value is less than the third threshold, it may be determined that there is smoke in the candidate region, or, in order to make the detection result more accurate, the detection of other items may be continued for the candidate region.
In another implementation (Implementation 5), whether smoke is present in a certain candidate region may be determined according to the grayscale average information of the candidate region in the continuous motion region.
FIG. 10 is another schematic diagram of performing smoke detection on a certain candidate region according to the embodiment of the present invention. As shown in FIG. 10, the method comprises:
Step 1001: removing the candidate region from the continuous motion region to obtain a remaining motion region.
FIG. 11 is a schematic diagram of acquiring a remaining motion region according to the embodiment of the present invention, schematically showing the remaining motion region obtained on the basis of FIG. 4. As shown in FIG. 11, the candidate region 401 of the current image (the Nth frame) can be removed from the continuous motion region 402 shown in FIG. 4, thereby obtaining the remaining motion region 1101.
Step 1002: calculating, based on the grayscale image of the current image, the current grayscale average of the remaining motion region.
For example, the current grayscale average can be calculated as shown in the following formula (5):
$$F_{avg} = \frac{1}{N}\sum_{i \in \Omega} Y_i \tag{5}$$
where Ω is the remaining motion region, N is the number of pixels in the remaining motion region, i is a pixel in the remaining motion region, Y_i is the grayscale value of pixel i in the current image, and F_avg is the current grayscale average of the remaining motion region.
Step 1003: calculating, based on the grayscale image of the background image, the background grayscale average of the remaining motion region.
For example, the background grayscale average can be calculated as shown in the following formula (6):
$$B_{avg} = \frac{1}{N}\sum_{j \in \Omega} Y_j \tag{6}$$
where Ω is the remaining motion region, N is the number of pixels in the remaining motion region, j is a pixel in the remaining motion region, Y_j is the grayscale value of pixel j in the background image, and B_avg is the background grayscale average of the remaining motion region.
Step 1004: calculating the difference between the current grayscale average and the background grayscale average.
Step 1005: judging whether the difference is less than or equal to a preset threshold (a fourth threshold); if the difference is less than or equal to the preset threshold, performing step 1006.
In this implementation, the specific value of the fourth threshold may, for example, be preset according to empirical values; the present invention does not limit the fourth threshold.
Step 1006: determining that there is no smoke in the candidate region.
In this implementation, for example, if the difference between the current grayscale average and the background grayscale average is less than or equal to the fourth threshold, this indicates that the moving object in the candidate region is a rigid object, whereas smoke is generally diffuse and divergent; it can therefore be determined that there is no smoke in the candidate region, and this "candidate region" of the current frame can be removed from the "candidate region" list. In addition, if the difference is greater than the fourth threshold, it may be determined that there is smoke in the candidate region, or, in order to make the detection result more accurate, the detection of other items may be continued for the candidate region.
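A sketch of steps 1001 to 1004: the current candidate region is removed from the continuous motion region and formulas (5) and (6) are evaluated on the remaining region; returning the absolute difference is one reasonable reading of "the difference" in step 1004.

```python
import cv2
import numpy as np

def gray_average_difference(current_bgr, background_bgr, motion_region, candidate_mask):
    """Remove the candidate region from the continuous motion region, then compare the
    remaining region's gray average in the current image (F_avg, formula (5)) with that
    in the background image (B_avg, formula (6))."""
    remaining = motion_region & ~candidate_mask
    if not remaining.any():
        return 0.0                                   # nothing left to compare
    cur_gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    bg_gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    f_avg = cur_gray[remaining].mean()
    b_avg = bg_gray[remaining].mean()
    return float(abs(f_avg - b_avg))
```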
在另一个实施方式(实施方式6)中,可以根据某一候选区域的梯度方向信息,确定该候选区域中是否存在烟雾。
图12是本发明实施例的对某一候选区域进行烟雾检测的另一示意图,如图12所示,所述方法包括:
步骤1201,对于该候选区域内的某一像素,基于当前图像的灰度图计算该像素的水平梯度和垂直梯度以获取该像素的当前图像梯度方向;
步骤1202,基于背景图像的灰度图计算该像素的水平梯度和垂直梯度以获取该 像素的背景图像梯度方向。
在本实施方式中,例如可以如下式(7)所述计算某一像素的水平梯度:
Gx=(-1)*f(x-1,y-1)+0*f(x,y-1)+1*f(x+1,y-1)
+(-2)*f(x-1,y)+0*f(x,y)+2*f(x+1,y)
+(-1)*f(x-1,y+1)+0*f(x,y+1)+1*f(x+1,y+1)
=[f(x+1,y-1)+2*f(x+1,y)+f(x+1,y+1)]-[f(x-1,y-1)+2*f(x-1,y)+f(x-1,y+1)]  (7)
可以如下式(8)所述计算该像素的垂直梯度:
Gy = 1*f(x-1,y-1) + 2*f(x,y-1) + 1*f(x+1,y-1)
   + 0*f(x-1,y) + 0*f(x,y) + 0*f(x+1,y)
   + (-1)*f(x-1,y+1) + (-2)*f(x,y+1) + (-1)*f(x+1,y+1)
   = [f(x-1,y-1) + 2*f(x,y-1) + f(x+1,y-1)] - [f(x-1,y+1) + 2*f(x,y+1) + f(x+1,y+1)]    (8)
where f denotes the pixel value, x is the x-coordinate of the pixel f, and y is the y-coordinate of the pixel f.
Step 1203: calculating an included-angle correlation value between the current-image gradient direction and the background-image gradient direction of the pixel.
In this implementation, the included angle may be obtained from the current-image gradient direction and the background-image gradient direction, and a correlation value of that included angle (for example, its cosine) may then be calculated; however, this implementation is not limited thereto, and other correlation values (for example, the cotangent) may also be used. The following description takes the cosine of the included angle as an example.
Step 1204: collecting statistics on and averaging the included-angle correlation values of a plurality of pixels (for example, all pixels) in the candidate region, and taking the average included-angle correlation value of the candidate region as the average gradient direction information.
Step 1205: judging whether the average gradient direction information is greater than or equal to a preset threshold (a fifth threshold), and executing step 1206 if the average gradient direction information is greater than or equal to the preset threshold.
In this implementation, the specific value of the fifth threshold may, for example, be preset according to empirical values; the present invention does not limit the fifth threshold.
Step 1206: determining that no smoke exists in the candidate region.
In this implementation, if the average gradient direction information is greater than or equal to the fifth threshold, the candidate region is not a foreground produced by a real moving object but a false foreground caused by an illumination change, and therefore this "candidate region" of the current frame may be removed from the list of "candidate regions". In addition, if the average gradient direction information is less than the fifth threshold, it may be determined that smoke exists in the candidate region, or, in order to make the detection result more accurate, other items of detection may further be performed on the candidate region.
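Steps 1201 to 1206 might be sketched as follows. This is a sketch only: OpenCV's Sobel operator is used in place of formulas (7) and (8), and its sign convention for the vertical gradient may differ, which does not affect the correlation as long as both images are processed in the same way; the threshold is an arbitrary example.

```python
import cv2
import numpy as np

def gradient_direction_check(current_bgr, background_bgr, candidate_mask, fifth_threshold=0.8):
    """Return True if the candidate region may still contain smoke (steps 1201-1206)."""
    def sobel_gradients(img_bgr):
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient (cf. formula (7))
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient (cf. formula (8))
        return gx, gy

    gx_c, gy_c = sobel_gradients(current_bgr)
    gx_b, gy_b = sobel_gradients(background_bgr)

    # Cosine of the included angle between the two gradient directions at each pixel.
    dot = gx_c * gx_b + gy_c * gy_b
    norms = np.sqrt(gx_c ** 2 + gy_c ** 2) * np.sqrt(gx_b ** 2 + gy_b ** 2) + 1e-6
    cos_angle = dot / norms

    # Step 1204: average the correlation values over the pixels of the candidate region.
    avg_gradient_correlation = cos_angle[candidate_mask].mean()

    # High correlation suggests an illumination change (false foreground), not smoke.
    return avg_gradient_correlation < fifth_threshold
```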
The above schematically describes how to judge whether smoke exists in a candidate region; however, the present invention is not limited thereto, and other attribute information may also be used for the judgment, for example. Moreover, the above formulas (1) to (8) only schematically illustrate the present invention; the present invention is not limited thereto, and formulas (1) to (8) may be modified as appropriate according to the actual situation.
In addition, one or more of the above implementations 1 to 6 may be adopted; for example, only one of them may be used, or all six of them may be used. There is also no restriction on the order in which the above implementations are executed; for example, implementations 1 to 6 may be executed in sequence, or implementation 2 may be executed after implementation 4, and so on. In a practical application, the specific detection scheme may be determined according to the actual situation.
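As one possible way of combining the checks, the helper functions sketched above might be chained as follows; which checks to chain and in what order is a design choice left open by the description, so this is an illustration only and reuses the assumed names from the previous sketches.

```python
def candidate_contains_smoke(current_bgr, background_bgr, motion_mask, candidate_mask):
    """Chain several of the checks sketched above; the candidate region is kept as
    possible smoke only if none of the checks rules it out."""
    return (saturation_check(current_bgr, motion_mask)
            and saturation_vs_background_check(current_bgr, background_bgr, motion_mask)
            and gray_variance_check(current_bgr, motion_mask)
            and remaining_region_check(current_bgr, background_bgr, motion_mask, candidate_mask)
            and gradient_direction_check(current_bgr, background_bgr, candidate_mask))
```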
As can be seen from the above embodiment, one or more candidate regions are obtained based on the foreground image, attribute information of a candidate region corresponding to the current image and/or the background image is calculated, and whether smoke exists in the candidate region is determined according to the attribute information. Thereby, not only can smoke be detected quickly and accurately from video images, but the accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.
Embodiment 2
An embodiment of the present invention provides a smoke detection apparatus, which corresponds to the smoke detection method described in Embodiment 1; the same contents are not repeated here.
Fig. 13 is a schematic diagram of the smoke detection apparatus according to an embodiment of the present invention. As shown in Fig. 13, the smoke detection apparatus 1300 includes:
a background image modeling unit 1301, which performs background image modeling on the current image to obtain a foreground image and a background image of the current image;
a candidate region obtaining unit 1302, which obtains, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
an attribute information calculating unit 1303, which calculates attribute information of a candidate region corresponding to the current image and/or the background image; and
a smoke determining unit 1304, which determines, according to the attribute information of the candidate region corresponding to the current image and/or the background image, whether smoke exists in the candidate region.
Fig. 14 is a schematic diagram of the candidate region obtaining unit according to an embodiment of the present invention. As shown in Fig. 14, the candidate region obtaining unit 1302 may include:
a binarized image obtaining unit 1401, which obtains a binarized image of the foreground image;
a connected component obtaining unit 1402, which takes, as one connected component, a plurality of mutually connected pixels having the same pixel value in the binarized image, so as to obtain one or more connected components representing moving objects in the foreground image; and
a connected component selecting unit 1403, which selects among the connected components to obtain the one or more candidate regions.
The connected component selecting unit 1403 may be configured to remove connected components whose area is less than or equal to a preset threshold, and/or to remove connected components whose average color depth is outside a preset range. However, the present invention is not limited thereto, and the connected components may also be screened according to other rules.
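As an illustration of this screening step, a sketch using OpenCV's connected-component analysis might look as follows; the function name, the thresholds, and the reading of "average color depth" as a mean gray level are assumptions, not definitions from the patent.

```python
import cv2

def get_candidate_regions(foreground_binary, current_gray, min_area=50, depth_range=(40, 220)):
    """Extract candidate-region masks from a binarized foreground image.

    foreground_binary: H x W uint8 image with values 0/255; current_gray: grayscale
    current image; min_area and depth_range are arbitrary example values.
    """
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(foreground_binary, connectivity=8)
    candidates = []
    for label in range(1, num_labels):            # label 0 is the image background
        if stats[label, cv2.CC_STAT_AREA] <= min_area:
            continue                              # drop connected components whose area is too small
        mask = labels == label
        # "Average color depth" is read here as the mean gray level of the component in
        # the current image -- an interpretation, not a definition taken from the patent.
        if not (depth_range[0] <= current_gray[mask].mean() <= depth_range[1]):
            continue
        candidates.append(mask)
    return candidates
```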
Fig. 15 is another schematic diagram of the smoke detection apparatus according to an embodiment of the present invention. As shown in Fig. 15, the smoke detection apparatus 1500 includes a background image modeling unit 1301, a candidate region obtaining unit 1302, an attribute information calculating unit 1303 and a smoke determining unit 1304, as described above.
As shown in Fig. 15, the smoke detection apparatus 1500 may further include:
a motion region obtaining unit 1501, which obtains a continuous motion region corresponding to a candidate region according to the positions of the candidate region in a plurality of image frames including the current image.
The attribute information calculating unit 1303 may further be configured to calculate, based on the continuous motion region, the attribute information of the candidate region corresponding to the current image and/or the background image.
In one implementation, whether smoke exists in a candidate region may be determined according to whether the main motion direction of the candidate region in the plurality of image frames is downward.
In this implementation, the attribute information calculating unit 1303 may further be configured to obtain the main motion direction of the candidate region in the plurality of image frames, and the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the main motion direction of the candidate region is downward.
Fig. 16 is a schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 16, the attribute information calculating unit 1303 may include:
a motion direction calculating unit 1601, which calculates, based on the centroid position and the center-of-gravity position of the candidate region, the motion directions of the candidate region in the plurality of image frames relative to the current image;
a motion direction counting unit 1602, which counts the frequency of occurrence of each motion direction in the plurality of image frames; and
a main motion direction determining unit 1603, which takes the motion direction with the highest frequency of occurrence as the main motion direction of the candidate region.
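As an illustration of units 1601 to 1603, a frequency count over quantized motion directions might be sketched as follows; the helper name, the use of centroid positions only, and the eight-direction quantization are assumptions made for illustration.

```python
from collections import Counter

def main_motion_direction(centroids):
    """Determine the dominant motion direction from per-frame centroid positions.

    centroids: list of (x, y) positions of one candidate region in at least two
    consecutive frames, ending with the current image.
    """
    directions = []
    for (x0, y0), (x1, y1) in zip(centroids[:-1], centroids[1:]):
        dx, dy = x1 - x0, y1 - y0
        horizontal = "right" if dx > 0 else "left" if dx < 0 else ""
        vertical = "down" if dy > 0 else "up" if dy < 0 else ""   # image y grows downward
        directions.append("-".join(filter(None, (vertical, horizontal))) or "still")
    # The most frequent direction is taken as the main motion direction; a downward
    # main motion direction rules the candidate region out as smoke.
    return Counter(directions).most_common(1)[0][0]
```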
In another implementation, whether smoke exists in a candidate region may be determined according to whether the saturation information of the candidate region within the continuous motion region is less than a preset threshold.
Fig. 17 is another schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 17, the attribute information calculating unit 1303 may include:
a current saturation map obtaining unit 1701, which performs color space conversion on the current image and calculates a saturation color component from the color components so as to obtain a saturation map of the current image; and
a current saturation calculating unit 1702, which calculates, based on the saturation map of the current image, the current average saturation of the candidate region within the continuous motion region.
In this implementation, the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the current average saturation is greater than or equal to a preset threshold.
In another implementation, whether smoke exists in a candidate region may be determined according to the result of comparing the current saturation information and the background saturation information of the candidate region within the continuous motion region.
Fig. 18 is another schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 18, the attribute information calculating unit 1303 may include the current saturation map obtaining unit 1701 and the current saturation calculating unit 1702, as described above.
As shown in Fig. 18, the attribute information calculating unit 1303 may further include:
a background saturation map obtaining unit 1801, which performs color space conversion on the background image and calculates a saturation color component from the color components so as to obtain a saturation map of the background image; and
a background saturation calculating unit 1802, which calculates, based on the saturation map of the background image, the background average saturation of the candidate region within the continuous motion region.
In this implementation, the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the current average saturation is greater than or equal to the background average saturation.
In another implementation, whether smoke exists in a candidate region may be determined according to the gray-level variance information of the candidate region within the continuous motion region.
Fig. 19 is another schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 19, the attribute information calculating unit 1303 may include:
a variance calculating unit 1901, which calculates, based on the grayscale image of the current image, the gray-level variance of the candidate region within the continuous motion region.
In this implementation, the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the gray-level variance is greater than or equal to a preset threshold.
In another implementation, whether smoke exists in a candidate region may be determined according to the average gray-level information of the candidate region within the continuous motion region.
Fig. 20 is another schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 20, the attribute information calculating unit 1303 may include:
a motion region adjusting unit 2001, which removes the candidate region from the continuous motion region to obtain a remaining motion region;
a current average calculating unit 2002, which calculates, based on the grayscale image of the current image, the current average gray level of the remaining motion region;
a background average calculating unit 2003, which calculates, based on the grayscale image of the background image, the background average gray level of the remaining motion region; and
a difference calculating unit 2004, which calculates the difference between the current average gray level and the background average gray level.
In this implementation, the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the difference between the current average gray level and the background average gray level is less than or equal to a preset threshold.
In another implementation, whether smoke exists in a candidate region may be determined according to the gradient direction information of the candidate region.
In this implementation, the attribute information calculating unit 1303 may further be configured to calculate the average gradient direction information of the candidate region, and the smoke determining unit 1304 may further be configured to determine that no smoke exists in the candidate region if the average gradient direction information is greater than or equal to a preset threshold.
Fig. 21 is a schematic diagram of the attribute information calculating unit according to an embodiment of the present invention. As shown in Fig. 21, the attribute information calculating unit 1303 may include:
a current gradient calculating unit 2101, which, for a pixel in the candidate region, calculates the horizontal gradient and the vertical gradient of the pixel based on the grayscale image of the current image so as to obtain the current-image gradient direction of the pixel;
a background gradient calculating unit 2102, which calculates the horizontal gradient and the vertical gradient of the pixel based on the grayscale image of the background image so as to obtain the background-image gradient direction of the pixel;
an included-angle correlation calculating unit 2103, which calculates the included-angle correlation value between the current-image gradient direction and the background-image gradient direction of the pixel; and
a gradient average obtaining unit 2104, which collects statistics on and averages the included-angle correlation values of a plurality of pixels in the candidate region, and takes the average included-angle correlation value of the candidate region as the average gradient direction information.
In this embodiment, the attribute information may include one or more of the following: saturation information, gray-level variance information, gradient direction information, average gray-level information, and motion direction information. However, the present invention is not limited thereto; other attribute information may also be used for the judgment, for example. In addition, one or more of the above implementations may be adopted, and the specific detection scheme may be determined according to the actual situation.
As can be seen from the above embodiment, one or more candidate regions are obtained based on the foreground image, attribute information of a candidate region corresponding to the current image and/or the background image is calculated, and whether smoke exists in the candidate region is determined according to the attribute information. Thereby, not only can smoke be detected quickly and accurately from video images, but the accuracy of video-based smoke detection under illumination changes and in complex environments can also be improved.
Embodiment 3
An embodiment of the present invention provides an image processing device, which includes the smoke detection apparatus described in Embodiment 2.
Fig. 22 is a schematic diagram of the image processing device according to an embodiment of the present invention. As shown in Fig. 22, the image processing device 2200 may include a central processing unit (CPU) 100 and a memory 110, the memory 110 being coupled to the central processing unit 100. The memory 110 may store various data as well as a program for information processing, and the program is executed under the control of the central processing unit 100.
In one implementation, the functions of the smoke detection apparatus may be integrated into the central processing unit 100, and the central processing unit 100 may be configured to control execution of the smoke detection method described in Embodiment 1.
In another implementation, the smoke detection apparatus may be configured separately from the central processing unit 100; for example, the smoke detection apparatus may be configured as a chip connected to the central processing unit 100, and the functions of the smoke detection apparatus are realized under the control of the central processing unit 100.
In this embodiment, the central processing unit 100 may be configured to perform the following control:
performing background image modeling on the current image to obtain a foreground image and a background image of the current image; obtaining, based on the foreground image, one or more candidate regions in the current image for detecting a moving object; calculating attribute information of a candidate region corresponding to the current image and/or the background image; and determining, according to the attribute information, whether smoke exists in the candidate region.
Further, the central processing unit 100 may be configured to perform the following control: obtaining the continuous motion region corresponding to the candidate region according to the positions of the candidate region in a plurality of image frames, and calculating, based on the continuous motion region, the attribute information of the candidate region corresponding to the current image and/or the background image.
In addition, as shown in Fig. 22, the image processing device 2200 may further include an input/output (I/O) device 120, a display 130, and the like; the functions of these components are similar to those in the related art and are not repeated here. It should be noted that the image processing device 2200 does not necessarily include all of the components shown in Fig. 22; moreover, the image processing device 2200 may also include components not shown in Fig. 22, for which reference may be made to the related art.
An embodiment of the present invention provides a computer-readable program which, when executed in an image processing device, causes a computer to carry out, in the image processing device, the smoke detection method described in Embodiment 1.
An embodiment of the present invention provides a storage medium storing a computer-readable program, wherein the computer-readable program causes a computer to carry out, in an image processing device, the smoke detection method described in Embodiment 1.
The above apparatuses and methods of the present invention may be implemented by hardware, or by hardware in combination with software. The present invention relates to a computer-readable program which, when executed by a logic component, enables the logic component to implement the apparatuses or constituent parts described above, or to carry out the various methods or steps described above. The present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disc, a DVD, or a flash memory.
One or more of the functional blocks described with reference to the drawings, and/or one or more combinations of the functional blocks, may be implemented as a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any appropriate combination thereof for carrying out the functions described in this application. One or more of the functional blocks described with reference to the drawings, and/or one or more combinations of the functional blocks, may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.
The present invention has been described above with reference to specific embodiments; however, it should be clear to those skilled in the art that these descriptions are exemplary and do not limit the scope of protection of the present invention. Those skilled in the art may make various variations and modifications to the present invention based on its principles, and such variations and modifications also fall within the scope of the present invention.

Claims (20)

  1. A smoke detection apparatus, wherein the smoke detection apparatus comprises:
    a background image modeling unit, which performs background image modeling on a current image to obtain a foreground image and a background image of the current image;
    a candidate region obtaining unit, which obtains, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
    an attribute information calculating unit, which calculates attribute information of a candidate region corresponding to the current image and/or the background image; and
    a smoke determining unit, which determines, according to the attribute information, whether smoke exists in the candidate region.
  2. The smoke detection apparatus according to claim 1, wherein the smoke detection apparatus further comprises:
    a motion region obtaining unit, which obtains a continuous motion region corresponding to the candidate region according to positions of the candidate region in a plurality of image frames, respectively;
    wherein the attribute information calculating unit is further configured to calculate, based on the continuous motion region, the attribute information of the candidate region corresponding to the current image and/or the background image.
  3. The smoke detection apparatus according to claim 2, wherein the attribute information calculating unit comprises:
    a current saturation map obtaining unit, which performs color space conversion on the current image and calculates a saturation color component from the color components so as to obtain a saturation map of the current image; and
    a current saturation calculating unit, which calculates, based on the saturation map of the current image, a current average saturation of the candidate region within the continuous motion region.
  4. The smoke detection apparatus according to claim 3, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the current average saturation is greater than or equal to a preset threshold.
  5. The smoke detection apparatus according to claim 3, wherein the attribute information calculating unit further comprises:
    a background saturation map obtaining unit, which performs color space conversion on the background image and calculates a saturation color component from the color components so as to obtain a saturation map of the background image; and
    a background saturation calculating unit, which calculates, based on the saturation map of the background image, a background average saturation of the candidate region within the continuous motion region.
  6. The smoke detection apparatus according to claim 5, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the current average saturation is greater than or equal to the background average saturation.
  7. The smoke detection apparatus according to claim 2, wherein the attribute information calculating unit comprises:
    a variance calculating unit, which calculates, based on a grayscale image of the current image, a gray-level variance of the candidate region within the continuous motion region.
  8. The smoke detection apparatus according to claim 7, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the gray-level variance is greater than or equal to a preset threshold.
  9. The smoke detection apparatus according to claim 2, wherein the attribute information calculating unit comprises:
    a motion region adjusting unit, which removes the candidate region from the continuous motion region to obtain a remaining motion region;
    a current average calculating unit, which calculates, based on a grayscale image of the current image, a current average gray level of the remaining motion region;
    a background average calculating unit, which calculates, based on a grayscale image of the background image, a background average gray level of the remaining motion region; and
    a difference calculating unit, which calculates a difference between the current average gray level and the background average gray level.
  10. The smoke detection apparatus according to claim 9, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the difference is less than or equal to a preset threshold.
  11. The smoke detection apparatus according to claim 1, wherein the attribute information calculating unit comprises:
    a current gradient calculating unit, which, for a pixel in the candidate region, calculates a horizontal gradient and a vertical gradient of the pixel based on a grayscale image of the current image so as to obtain a current-image gradient direction of the pixel;
    a background gradient calculating unit, which calculates a horizontal gradient and a vertical gradient of the pixel based on a grayscale image of the background image so as to obtain a background-image gradient direction of the pixel;
    an included-angle correlation calculating unit, which calculates an included-angle correlation value between the current-image gradient direction and the background-image gradient direction of the pixel; and
    a gradient average obtaining unit, which collects statistics on and averages the included-angle correlation values of a plurality of pixels in the candidate region, and takes the average included-angle correlation value of the candidate region as the average gradient direction information.
  12. The smoke detection apparatus according to claim 11, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the average gradient direction information is greater than or equal to a preset threshold.
  13. The smoke detection apparatus according to claim 1, wherein the candidate region obtaining unit comprises:
    a binarized image obtaining unit, which obtains a binarized image of the foreground image;
    a connected component obtaining unit, which takes, as one connected component, a plurality of mutually connected pixels having the same pixel value in the binarized image, so as to obtain one or more connected components representing moving objects in the foreground image; and
    a connected component selecting unit, which selects among the connected components to obtain the one or more candidate regions.
  14. The smoke detection apparatus according to claim 13, wherein the connected component selecting unit is configured to remove connected components whose area is less than or equal to a preset threshold, and/or to remove connected components whose average color depth is outside a preset range.
  15. The smoke detection apparatus according to claim 1, wherein the attribute information calculating unit comprises:
    a motion direction calculating unit, which calculates, based on a centroid position and a center-of-gravity position of the candidate region, motion directions of the candidate region in a plurality of image frames relative to the current image;
    a motion direction counting unit, which counts a frequency of occurrence of each motion direction in the plurality of image frames; and
    a main motion direction determining unit, which determines the motion direction with the highest frequency of occurrence as a main motion direction of the candidate region.
  16. The smoke detection apparatus according to claim 15, wherein the smoke determining unit is further configured to determine that no smoke exists in the candidate region if the main motion direction of the candidate region is downward.
  17. A smoke detection method, wherein the smoke detection method comprises:
    performing background image modeling on a current image to obtain a foreground image and a background image of the current image;
    obtaining, based on the foreground image, one or more candidate regions in the current image for detecting a moving object;
    calculating attribute information of a candidate region corresponding to the current image and/or the background image; and
    determining, according to the attribute information, whether smoke exists in the candidate region.
  18. The smoke detection method according to claim 17, wherein the method further comprises:
    obtaining a continuous motion region corresponding to the candidate region according to positions of the candidate region in a plurality of image frames, respectively;
    and calculating, based on the continuous motion region, the attribute information of the candidate region corresponding to the current image and/or the background image.
  19. The smoke detection method according to claim 18, wherein the attribute information includes one or more of the following: saturation information, gray-level variance information, gradient direction information, average gray-level information, and motion direction information.
  20. An image processing device, wherein the image processing device comprises the smoke detection apparatus according to claim 1.


