WO2022059500A1 - Monitoring system and monitoring method - Google Patents

Monitoring system and monitoring method

Info

Publication number
WO2022059500A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
area
worker
monitoring system
Prior art date
Application number
PCT/JP2021/032264
Other languages
French (fr)
Japanese (ja)
Inventor
幸 藤井
辰行 澤野
豊彦 林
Original Assignee
株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority date
Filing date
Publication date
Application filed by 株式会社日立国際電気 (Hitachi Kokusai Electric Inc.)
Priority to JP2022550457A (patent JP7354461B2)
Publication of WO2022059500A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a monitoring system that detects intruders in a monitoring area.
  • Patent Document 1 discloses a surveillance system in which a map of the monitoring area is displayed on a monitoring screen, the movement trajectory of an intruder is drawn on the map, and, when an arbitrary position on the trajectory is selected, the recorded video of the surveillance camera corresponding to that position is played back.
  • If the planned work range is wide or the ambient brightness is insufficient, the range the worker can actually see becomes narrower than the planned work range, so intruders may be overlooked.
  • In road or railway-track maintenance, for example, a long section of several hundred meters to several kilometers constitutes the planned work range, so the worker cannot possibly monitor the entire range visually.
  • The present invention has been made in view of the above circumstances, and its object is to provide a monitoring system capable of reducing the oversight of intruders while preventing false detection of workers.
  • To achieve this object, the monitoring system is configured as follows: a monitoring system that detects intruders into a monitoring area comprises a camera that captures the monitoring area, an image processing device that performs intruder detection processing using the images captured by the camera, and a positioning terminal carried by a worker who performs predetermined work in the monitoring area, and the image processing device sets an area of the captured image that includes the coordinates corresponding to the position measured by the positioning terminal as a non-target area for intruder detection processing.
  • The non-target area may be an area based on at least one of the work range and the viewing range set for the worker, centered on the coordinates corresponding to the position measured by the positioning terminal.
  • The non-target area may also be adjusted based on one or more of: the illuminance of the camera's shooting range, the worker's posture, and the equipment brought in by the worker.
  • When the non-target area extends beyond the range of the camera's captured image, the image processing device may set the region corresponding to the protruding portion, in the image captured by an adjacent camera, as a non-target area for intruder detection processing.
  • According to the present invention, it is possible to provide a monitoring system capable of reducing the oversight of intruders while preventing erroneous detection of workers.
  • FIG. 1 is a diagram showing the schematic configuration of a monitoring system according to one embodiment of the present invention; FIG. 2 is a diagram showing a configuration example of the image processing device in the monitoring system of FIG. 1; FIG. 3 is a diagram showing an example flowchart of the intruder detection process; FIG. 4 is a diagram showing an example flowchart of the mask image creation process; and FIG. 5 is a diagram explaining the mask images created by the mask image creation process.
  • FIG. 1 shows a schematic configuration of a monitoring system according to an embodiment of the present invention.
  • The monitoring system of this example performs monitoring by analyzing images captured by cameras, and includes a camera 11, an image processing device 12, a display terminal 13, a GPS (Global Positioning System) terminal 14, and an illuminance meter 15.
  • The camera 11, the GPS terminal 14, and the illuminance meter 15 are communicably connected to the image processing device 12 via a wired or wireless network.
  • The camera 11 is an imaging device for photographing the monitoring area, and a plurality of cameras 11 are installed so as to cover the entire monitoring area. As the camera 11, a visible-light camera, an infrared camera, or both may be used. The camera 11 transmits captured images to the image processing device 12 using a digital or analog image output interface (I/F).
  • The image processing device 12 applies various image processing to the captured images received from the camera 11 and transmits the processed output images to the display terminal 13. It also performs intruder detection processing for detecting intruders in the monitoring area based on the captured images, and outputs an alarm signal when an intruder is detected.
  • The display terminal 13 is equipped with a VMS (video management system); it displays the images captured by the camera 11 and the output images processed by the image processing device 12, and sounds or displays an alarm in response to an alarm output from the image processing device 12.
  • The GPS terminal 14 is a device for measuring its current position, and periodically notifies the image processing device 12 of the measured position. It is carried by each worker scheduled to work in the monitoring area, or by a representative.
  • The illuminance meter 15 is a sensor that measures the surrounding illuminance, and periodically notifies the image processing device 12 of the measured illuminance. A plurality of illuminance meters 15 are installed so as to correspond to the shooting ranges of the respective cameras 11; alternatively, each camera 11 may be provided with its own illuminance meter 15.
  • FIG. 2 shows a configuration example of the image processing device 12.
  • The image processing device 12 includes an image input I/F 21, a processing memory 22, a CPU (Central Processing Unit) 23, a program memory 24, an image output I/F 25, and a communication I/F 26, which are connected by a bus 27.
  • The image input I/F 21 is the interface through which captured images transmitted from the camera 11 are input; it stores the input images in the processing memory 22.
  • The processing memory 22 has an image memory area 31, a mask image memory area 32, and a work memory area 33.
  • The image memory area 31 temporarily stores captured images input through the image input I/F 21 (for example, the image to be examined for intruders, or the background image needed for the next round of intruder detection) and output images that have undergone image processing.
  • The mask image memory area 32 stores the mask image that defines the target and non-target areas for intruder detection.
  • In this example, the mask image is a binary image in which the target area for intruder detection (the non-masked area) has a white value and the non-target area (the masked area) has a black value, but the invention is not limited to this.
  • The work memory area 33 is a working memory area used temporarily while the intruder detection process is performed.
  • The CPU 23 performs various processes, including the intruder detection process, by executing programs stored in the program memory 24.
  • The program memory 24 stores the programs that realize the functions related to intruder detection processing. In this example, these functions are an object detection function 41, a posture detection function 42, a GPS reception function 43, an illuminance reception function 44, and an action range acquisition function 45.
  • The image output I/F 25 is an interface that converts the output video stored in the image memory 203 into a form that can be displayed on the monitor of the image processing device 12 or on the display terminal 13, and outputs it.
  • The communication I/F 26 is an interface for communicating with external devices, including the GPS terminal 14 and the illuminance meter 15.
  • FIG. 3 shows an example of a flowchart of the intruder detection process executed by the image processing device 12.
  • It is assumed that the shooting range and a basic mask image are preset for each camera 11 and registered in the image processing device 12.
  • It is also assumed that, before maintenance work starts, the information (for example, the terminal ID) of the GPS terminal 14 carried by each worker is registered in the image processing device 12.
  • When equipment of some size, such as a long object, is brought in for the maintenance work, information about the brought-in equipment (for example, its size) is also registered in the image processing device 12.
  • The image processing device 12 first determines whether the intruder detection activation setting is ON or OFF (step S11). If the setting is OFF, that is, intruder detection is not to be performed, no further processing is done. If the setting is ON, the intruder detection process is performed as follows.
  • The image captured by the camera 11 is acquired via the image input I/F 21 and saved in the image memory 31 (step S12). The GPS reception function 43 acquires GPS information from the GPS terminal 14 carried by a worker during maintenance work and saves the worker's latitude/longitude data in the work memory 33 (step S13). If no GPS information is received from the GPS terminal 14, a code indicating "no GPS information" is stored in the work memory 33 instead of latitude/longitude data. The illuminance reception function 44 acquires illuminance data from the illuminance meter 15 corresponding to the shooting range of the camera 11 and stores it in the work memory 33 (step S14).
  • The posture detection function 42 then detects the posture of the worker at the position indicated by the GPS information (step S15). In this example, the image from the camera 11 stored in the image memory 31 is analyzed to determine whether the worker is working in a downward posture.
  • Specifically, samples of a downward posture are learned in advance in the image processing device 12 by skeleton detection or AI learning, and a worker whose similarity to the learned samples is equal to or higher than a predetermined value is judged to be in a downward posture.
  • When the worker is judged to be in a downward posture, the time spent in that posture (hereinafter, the downward duration) is measured.
  • If the worker stays out of the downward posture for more than a certain time, the downward duration is cleared.
  • The determination of whether the posture is downward may also be made by another method.
  • Examples include a method that judges a downward posture when a camera with a line-of-sight sensing function attached to the worker's helmet detects a downward line of sight of a predetermined angle or more, and a method that judges a downward posture when a gyro sensor attached to the worker's helmet detects a forward tilt of a predetermined angle or more.
  • After the above processes have been performed for all cameras 11, GPS terminals 14, and illuminance meters 15 in the system, a mask image creation process is performed for each camera (step S16).
  • In the mask image creation process, the target and non-target areas for intruder detection in the image captured by the camera 11 are determined based on the information obtained by the GPS reception function 43, the illuminance reception function 44, and the posture detection function 42, and a mask image representing these areas is created.
  • Next, the object detection function 41 analyzes the image captured by each camera 11 to determine whether there is an intruder in the monitoring area (step S17).
  • At this time, rather than performing object detection on the entire image captured by the camera 11, objects are detected only in the intruder detection target area defined by the mask image, and any detected object is judged to be an intruder. That is, no object detection is performed in the non-target area defined by the mask image.
  • As an alternative, object detection may be performed on the entire captured image, and an object detected in the target area is judged to be an intruder, while an object detected in the non-target area is judged not to be an intruder.
  • Objects in the captured image can be detected using a general object detection method such as background subtraction, inter-frame differencing, or object recognition based on learning.
  • Next, an output image to be displayed on the monitor of the image processing device 12 or on the display terminal 13 is created (step S18).
  • The output image is the image captured by the camera 11 with additional information superimposed on it, such as information indicating the shooting area and information indicating whether an intruder is present.
  • The created output image is then displayed on the monitor of the image processing device 12 or on the display terminal 13 through the image output I/F 25 (step S19).
  • The output images obtained for the individual cameras may be displayed simultaneously in a split display, or switched at regular intervals.
  • The detection result of the object detection function 41 (the presence or absence of an intruder) is then evaluated (step S20), and if an intruder is judged to be present, an alarm is output to the display terminal 13 through the communication I/F 26 (step S21). The display terminal 13 then sounds or displays an alarm, notifying the observer that an intruder has been detected.
  • FIG. 4 shows an example of a flowchart of the mask image creation process.
  • In the following, the mask image creation process is described focusing on one camera (hereinafter, the target camera).
  • First, it is determined whether GPS information has been received from the GPS terminal 14 carried by the maintenance worker (step S31).
  • Here, the latitude/longitude data of the GPS terminal 14 (i.e., of the worker) stored in the work memory 33 is checked, and it is determined whether the data is the code indicating "no GPS information".
  • If it is the code indicating "no GPS information", the basic mask image pre-registered for the camera is set to be applied in the intruder detection process at the current timing (step S37).
  • If the latitude/longitude data of the GPS terminal 14 is not the code indicating "no GPS information", the latitude/longitude of the GPS terminal 14 is converted into XY coordinates (step S32).
  • As the conversion method, for example, a general method of converting latitude/longitude into XY world coordinates can be used.
  • Next, it is determined whether the XY coordinates obtained by the conversion are within the shooting range of the target camera, and whether there is a mask position notification relating to another camera adjacent to the target camera (hereinafter, an adjacent camera) (step S33).
  • A mask position notification is a notification issued when a worker is detected within the shooting range of an adjacent camera.
  • If the XY coordinates are not within the shooting range of the target camera and there is no mask position notification for an adjacent camera (that is, the worker is neither within the shooting range of the target camera nor near it), the pre-registered basic mask image is set to be applied in the intruder detection process at the current timing (step S37).
  • If the XY coordinates are within the shooting range of the target camera (that is, the worker is within the shooting range), or if there is a mask position notification for an adjacent camera (that is, the worker is outside the shooting range but nearby), a mask image corresponding to those XY coordinates is created as follows.
  • First, the information on the pre-registered brought-in equipment is checked, the work range over a fixed time is estimated based on the size of the equipment, and the mask radius [work radius] is determined (step S34). For example, when no equipment is brought in (that is, when the maintenance work is done by workers only), the range a person can cover by running within a predetermined number of seconds is set as the mask radius [work radius]. When a long object is brought in, the range over which the long object can be moved within a predetermined number of seconds is set as the mask radius [work radius].
  • Next, the mask radius [viewing radius] is determined based on the illuminance data corresponding to the shooting range of the target camera and on the posture of the worker within that shooting range (step S35). Since the field of view the worker can see depends on the surrounding brightness, the mask radius [viewing radius] is adjusted to be smaller the lower the illuminance (that is, the darker it is) and larger the higher the illuminance (that is, the brighter it is).
  • When the worker's downward duration exceeds a predetermined value, that is, when the worker is working facing downward, the worker does not move and cannot see far, so the mask radius [work radius] is set to 0 and the mask radius [viewing radius] is also adjusted to be small.
  • Then, based on the basic mask image of the target camera, the mask image to be used in the intruder detection process at the current timing is generated (step S36). Specifically, a circular mask area is added to the basic mask image, centered on the worker's XY coordinates, with the larger of the mask radius [work radius] and the mask radius [viewing radius] as its radius. If a mask position notification has been issued, a mask area corresponding to the content of that notification (see image 58 in FIG. 5) is also added. When GPS information has been received from a plurality of workers, the circular mask area is calculated for each worker, and the union of these areas is used as the final circular mask area.
  • Next, it is determined whether the circular mask area added to the basic mask image extends beyond the image, and if it does, whether there is an adjacent camera whose shooting range includes the area corresponding to the protruding portion (step S38).
  • If there is such an adjacent camera, a mask position notification containing the worker's XY coordinates, the mask radius [work radius], and the mask radius [viewing radius] is issued (step S39).
  • As a result, a mask area corresponding to the protruding portion is added to the mask image of the adjacent camera.
  • In the mask images 52, 54, 56, and 58 shown in FIG. 5, the white portions are non-masked areas and the black portions are masked areas.
  • When an image 51 that does not contain a worker is captured, the pre-registered basic mask image 52 is applied as it is.
  • When an image 53 that contains a worker is captured, a mask image 54 is created by adding a circular mask area centered on the worker's position to the basic mask image 52.
  • When an image 55 is captured in which the worker is near the edge of the target camera's shooting range, a mask image 56 with a partially protruding circular mask area is created. In this case, even if the adjacent camera captures an image 57 that does not contain the worker, a mask image 58 is created for it by adding, as a mask area, the region corresponding to the portion that protrudes from mask image 56.
  • As described above, the monitoring system of this example includes the camera 11 that captures the monitoring area, the image processing device 12 that performs intruder detection processing using the images captured by the camera 11, and the GPS terminal 14 carried by a worker who performs predetermined work in the monitoring area.
  • The image processing device 12 is configured to set, as a mask area excluded from intruder detection processing, an area of the image captured by the camera 11 that includes the coordinates corresponding to the position measured by the GPS terminal 14. By continuously recognizing the worker's position from the GPS terminal 14 carried by the worker and automatically setting the surrounding area as a mask area in this way, false detection of workers can be prevented while reducing the oversight of intruders.
  • The mask area is set as a circular mask area centered on the coordinates corresponding to the position measured by the GPS terminal 14, whose radius is the larger of the mask radius [work radius] and the mask radius [viewing radius] set for the worker. This makes it easy to set a mask area that takes into account both the worker's work range and visible range. To simplify processing, the mask area may instead be set considering only one of the mask radius [work radius] and the mask radius [viewing radius].
  • The system is also configured to adjust the mask radius [viewing radius] according to the illuminance of the target camera's shooting range, to adjust the mask radius [work radius] and the mask radius [viewing radius] according to the worker's posture, and to adjust the mask radius [work radius] according to the equipment brought in by the worker. By adjusting the mask radii according to factors such as the working environment and the nature of the work, a mask area matching the actual work situation can be set.
  • Furthermore, when the mask area extends beyond the range of the target camera's captured image, the region corresponding to the protruding portion is set as a mask area in the image captured by the camera adjacent to the target camera. This prevents a worker who is outside the target camera's shooting range but nearby from being falsely detected as an intruder.
  • In the above description, a circular mask area centered on the worker's position is set as the mask area, but a mask area of another geometric shape, such as an ellipse, a quadrangle, or a hexagon, may be set instead.
  • Also, in the above description, the target and non-target areas for intruder detection are determined based on the mask image, but the determination may instead be made in world coordinates.
  • In that case, the position of a detected object is expressed in the world coordinate system, the mask radius [work radius] and the mask radius [viewing radius] are converted into actual distances, and it is then determined whether the object's position falls within the range excluded from intruder detection.
  • In other words, the circular mask area is determined using the viewing radius or the work radius based on the worker's GPS position information, but the mask area is not limited to a circle and may be a rectangle or another polygon. In this case, a region of a predetermined shape, based on at least one of the work range (distance) and the viewing range (distance) set for the worker and centered on the coordinates based on the worker's position information, is set as the mask area.
  • The present invention can also be provided, for example, as a method comprising the technical procedure of the above processing, as a program that causes a processor to execute the above processing, or as a storage medium storing such a program in a computer-readable manner.
  • The present invention can be used in a monitoring system that detects intruders in a monitoring area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

Provided is a monitoring system with which it is possible to reduce the number of cases in which an intruding object is missed, while preventing a worker from being falsely sensed. The present invention comprises a camera 11 for capturing an image of a monitoring area, an image processing device 12 for performing a process for sensing an intruding object using an image captured by the camera 11, and a GPS terminal 14 carried by a worker performing prescribed work in the monitoring area. The image processing device 12 sets, as a mask region not subjected to the process for sensing an intruding object, a region in the image captured by the camera 11 that includes coordinates corresponding to a position measured by the GPS terminal.

Description

Monitoring system and monitoring method
The present invention relates to a monitoring system that detects intruders in a monitoring area.
Monitoring systems that detect objects intruding into a monitoring area (including intruders and animals) have conventionally been in practical use. For example, Patent Document 1 discloses a surveillance system in which a map of the monitoring area is displayed on a monitoring screen, the movement trajectory of an intruder is drawn on the map, and, when an arbitrary position on the trajectory is selected, the recorded video of the surveillance camera corresponding to that position is played back.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2017-92808
Many conventional monitoring systems detect every object that enters the monitoring area, in order to reduce the oversight of intruders as much as possible. On the other hand, workers may enter the monitoring area for maintenance work, and because it is difficult to distinguish workers from intruders, such systems risk falsely detecting a worker as an intruder. For this reason, on the premise that workers can visually watch the planned work range during maintenance, intruder detection has conventionally been turned off for the entire planned work range.
However, if the planned work range is wide or the ambient brightness is insufficient, the range the worker can actually see becomes narrower than the planned work range, so intruders may be overlooked. For example, in road or railway-track maintenance, a long section of several hundred meters to several kilometers constitutes the planned work range, so the worker cannot possibly monitor the entire range visually.
The present invention has been made in view of the above circumstances, and its object is to provide a monitoring system capable of reducing the oversight of intruders while preventing false detection of workers.
To achieve the above object, the monitoring system of the present invention is configured as follows.
That is, a monitoring system that detects intruders into a monitoring area comprises a camera that captures the monitoring area, an image processing device that performs intruder detection processing using the images captured by the camera, and a positioning terminal carried by a worker who performs predetermined work in the monitoring area, and the image processing device sets an area of the image captured by the camera that includes the coordinates corresponding to the position measured by the positioning terminal as a non-target area for intruder detection processing.
Here, the non-target area may be an area based on at least one of the work range and the viewing range set for the worker, centered on the coordinates corresponding to the position measured by the positioning terminal.
The non-target area may also be adjusted based on one or more of: the illuminance of the camera's shooting range, the worker's posture, and the equipment brought in by the worker.
Further, when the non-target area extends beyond the range of the camera's captured image, the image processing device may set the region corresponding to the protruding portion, in the image captured by the adjacent camera, as a non-target area for intruder detection processing.
According to the present invention, it is possible to provide a monitoring system capable of reducing the oversight of intruders while preventing erroneous detection of workers.
FIG. 1 is a diagram showing the schematic configuration of a monitoring system according to one embodiment of the present invention. FIG. 2 is a diagram showing a configuration example of the image processing device in the monitoring system of FIG. 1. FIG. 3 is a diagram showing an example flowchart of the intruder detection process. FIG. 4 is a diagram showing an example flowchart of the mask image creation process. FIG. 5 is a diagram explaining the mask images created by the mask image creation process.
An embodiment of the present invention will now be described with reference to the drawings.
FIG. 1 shows the schematic configuration of a monitoring system according to one embodiment of the present invention. The monitoring system of this example performs monitoring by analyzing images captured by cameras, and includes a camera 11, an image processing device 12, a display terminal 13, a GPS (Global Positioning System) terminal 14, and an illuminance meter 15. The camera 11, the GPS terminal 14, and the illuminance meter 15 are communicably connected to the image processing device 12 via a wired or wireless network.
The camera 11 is an imaging device for photographing the monitoring area, and a plurality of cameras 11 are installed so as to cover the entire monitoring area. As the camera 11, a visible-light camera, an infrared camera, or both may be used. The camera 11 transmits captured images to the image processing device 12 using a digital or analog image output interface (I/F).
The image processing device 12 applies various image processing to the captured images received from the camera 11 and transmits the processed output images to the display terminal 13. The image processing device 12 also performs intruder detection processing for detecting intruders in the monitoring area based on the captured images received from the camera 11, and outputs an alarm signal when an intruder is detected.
The display terminal 13 is equipped with a VMS (video management system); it displays the images captured by the camera 11 and the output images processed by the image processing device 12, and sounds or displays an alarm in response to an alarm output from the image processing device 12.
The GPS terminal 14 is a device for measuring its current position, and periodically notifies the image processing device 12 of the measured position. The GPS terminal 14 is carried by each worker scheduled to work in the monitoring area, or by a representative.
The illuminance meter 15 is a sensor that measures the surrounding illuminance, and periodically notifies the image processing device 12 of the measured illuminance. A plurality of illuminance meters 15 are installed so as to correspond to the shooting ranges of the respective cameras 11. Alternatively, each camera 11 may be provided with its own illuminance meter 15.
FIG. 2 shows a configuration example of the image processing device 12. The image processing device 12 includes an image input I/F 21, a processing memory 22, a CPU (Central Processing Unit) 23, a program memory 24, an image output I/F 25, and a communication I/F 26, which are connected by a bus 27.
The image input I/F 21 is the interface through which captured images transmitted from the camera 11 are input; it stores the input images in the processing memory 22.
The processing memory 22 has an image memory area 31, a mask image memory area 32, and a work memory area 33. The image memory area 31 temporarily stores captured images input through the image input I/F 21 (for example, the image to be examined for intruders, or the background image needed for the next round of intruder detection) and output images that have undergone image processing. The mask image memory area 32 stores the mask image that defines the target and non-target areas for intruder detection. In this example, the mask image is a binary image in which the target area for intruder detection (the non-masked area) has a white value and the non-target area (the masked area) has a black value, but the invention is not limited to this. The work memory area 33 is a working memory area used temporarily while the intruder detection process is performed.
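As a purely illustrative sketch, not part of the patent text, the binary mask convention described here could be held as a single-channel image; the frame size and the value convention (255 for the detection target area, 0 for the masked area) are assumptions.

```python
import numpy as np

FRAME_H, FRAME_W = 1080, 1920  # assumed camera resolution

# Basic mask image: white (255) = intruder detection target area,
# black (0) = non-target (masked) area, matching the binary convention above.
basic_mask = np.full((FRAME_H, FRAME_W), 255, dtype=np.uint8)

# A permanently excluded region could be pre-registered by blacking it out.
basic_mask[0:200, :] = 0  # hypothetical strip that is never monitored
```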
The CPU 23 performs various processes, including the intruder detection process, by executing programs stored in the program memory 24.
The program memory 24 stores the programs that realize the functions related to intruder detection processing. In this example, these functions are an object detection function 41, a posture detection function 42, a GPS reception function 43, an illuminance reception function 44, and an action range acquisition function 45.
The image output I/F 25 is an interface that converts the output video stored in the image memory 203 into a form that can be displayed on the monitor of the image processing device 12 or on the display terminal 13, and outputs it.
The communication I/F 26 is an interface for communicating with external devices, including the GPS terminal 14 and the illuminance meter 15.
FIG. 3 shows an example flowchart of the intruder detection process executed by the image processing device 12. It is assumed that the shooting range and a basic mask image are preset for each camera 11 and registered in the image processing device 12. It is also assumed that, before maintenance work starts, the information (for example, the terminal ID) of the GPS terminal 14 carried by each worker is registered in the image processing device 12, and that, when equipment of some size such as a long object is brought in for the maintenance work, information about the brought-in equipment (for example, its size) is also registered in the image processing device 12.
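The patent does not specify how the pre-registered information is stored; the following sketch shows one possible way to organize it, with all field names and values being hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CameraConfig:
    camera_id: str
    shooting_range: tuple          # (x_min, y_min, x_max, y_max) in world XY, assumed
    basic_mask_path: str           # pre-registered basic mask image for this camera
    adjacent_cameras: list = field(default_factory=list)

@dataclass
class WorkRegistration:
    gps_terminal_ids: list         # terminal IDs of the GPS terminals carried by workers
    equipment_length_m: float      # size of brought-in equipment; 0.0 when none

# Hypothetical registration for two cameras covering one track section.
cameras = [
    CameraConfig("cam-01", (0, 0, 100, 50), "masks/cam01_basic.png", ["cam-02"]),
    CameraConfig("cam-02", (90, 0, 190, 50), "masks/cam02_basic.png", ["cam-01"]),
]
work = WorkRegistration(gps_terminal_ids=["gps-001"], equipment_length_m=6.0)
```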
The image processing device 12 first determines whether the intruder detection activation setting is ON or OFF (step S11). If the setting is OFF, that is, intruder detection is not to be performed, no further processing is done. If the setting is ON, the intruder detection process is performed as follows.
The image captured by the camera 11 is acquired via the image input I/F 21 and saved in the image memory 31 (step S12). The GPS reception function 43 acquires GPS information from the GPS terminal 14 carried by a worker during maintenance work and saves the worker's latitude/longitude data in the work memory 33 (step S13). If no GPS information is received from the GPS terminal 14, a code indicating "no GPS information" is stored in the work memory 33 instead of latitude/longitude data. The illuminance reception function 44 acquires illuminance data from the illuminance meter 15 corresponding to the shooting range of the camera 11 and stores it in the work memory 33 (step S14).
The posture detection function 42 then detects the posture of the worker at the position indicated by the GPS information (step S15). In this example, the image from the camera 11 stored in the image memory 31 is analyzed to determine whether the worker is working in a downward posture. Specifically, samples of a downward posture are learned in advance in the image processing device 12 by skeleton detection or AI learning, and a worker whose similarity to the learned samples is equal to or higher than a predetermined value is judged to be in a downward posture. When the worker is judged to be in a downward posture, the time spent in that posture (hereinafter, the downward duration) is measured. If the worker stays out of the downward posture for more than a certain time, the downward duration is cleared.
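A minimal sketch of the downward-duration bookkeeping described above, assuming some skeleton or pose model provides a similarity score in [0, 1] (the model itself is outside the patent text); the threshold and reset window are illustrative values.

```python
import time

SIMILARITY_THRESHOLD = 0.8   # assumed: similarity above this counts as "downward"
RESET_WINDOW_S = 10.0        # assumed: clear the duration after this long not downward

class DownwardTracker:
    def __init__(self):
        self.downward_since = None   # when the current downward spell started
        self.last_downward = None    # last time a downward posture was seen

    def update(self, similarity, now=None):
        """Return the current downward duration in seconds."""
        now = now if now is not None else time.monotonic()
        if similarity >= SIMILARITY_THRESHOLD:
            if self.downward_since is None:
                self.downward_since = now
            self.last_downward = now
        elif self.last_downward is not None and now - self.last_downward > RESET_WINDOW_S:
            # Not downward for longer than the window: clear the duration.
            self.downward_since = None
            self.last_downward = None
        return 0.0 if self.downward_since is None else now - self.downward_since
```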
The determination of whether the posture is downward may also be made by another method, for example by judging a downward posture when a camera with a line-of-sight sensing function attached to the worker's helmet detects a downward line of sight of a predetermined angle or more, or when a gyro sensor attached to the worker's helmet detects a forward tilt of a predetermined angle or more.
Each of the above processes is performed for all cameras 11, GPS terminals 14, and illuminance meters 15 in the system. After that, a mask image creation process is performed for each camera (step S16). In the mask image creation process, the target and non-target areas for intruder detection in the image captured by the camera 11 are determined based on the information obtained by the GPS reception function 43, the illuminance reception function 44, and the posture detection function 42, and a mask image representing these areas is created.
Next, the object detection function 41 analyzes the image captured by each camera 11 to determine whether there is an intruder in the monitoring area (step S17). At this time, rather than performing object detection on the entire captured image, objects are detected only in the intruder detection target area defined by the mask image, and any detected object is judged to be an intruder; no object detection is performed in the non-target area defined by the mask image. As an alternative, object detection may be performed on the entire captured image, and an object detected in the target area is judged to be an intruder while an object detected in the non-target area is judged not to be an intruder. Objects in the captured image can be detected using a general object detection method such as background subtraction, inter-frame differencing, or object recognition based on learning.
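For illustration, a masked detection pass of the kind described above could be written as follows using OpenCV background subtraction; the library choice and the minimum blob area are assumptions, since the patent only names background subtraction as one of several usable methods.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
MIN_BLOB_AREA = 500  # assumed threshold for discarding noise blobs

def detect_intruders(frame_bgr, mask_image):
    """Detect moving objects only inside the detection target (white) area."""
    fg = subtractor.apply(frame_bgr)                       # foreground mask
    fg = cv2.bitwise_and(fg, fg, mask=mask_image)          # drop masked-out pixels
    fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_BLOB_AREA]
```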
Next, an output image to be displayed on the monitor of the image processing device 12 or on the display terminal 13 is created (step S18). The output image is the image captured by the camera 11 with additional information superimposed on it, such as information indicating the shooting area and information indicating whether an intruder is present. The created output image is then displayed on the monitor of the image processing device 12 or on the display terminal 13 through the image output I/F 25 (step S19). The output images obtained for the individual cameras may be displayed simultaneously in a split display, or switched at regular intervals.
The detection result of the object detection function 41 (the presence or absence of an intruder) is then evaluated (step S20), and if an intruder is judged to be present, an alarm is output to the display terminal 13 through the communication I/F 26 (step S21). The display terminal 13 then sounds or displays an alarm, notifying the observer that an intruder has been detected.
Next, the mask image creation process for each camera (step S16) is described in detail with reference to FIG. 4, which shows an example flowchart of the mask image creation process. The process is described below focusing on one camera (hereinafter, the target camera).
First, it is determined whether GPS information has been received from the GPS terminal 14 carried by the maintenance worker (step S31). Here, the latitude/longitude data of the GPS terminal 14 (i.e., of the worker) stored in the work memory 33 is checked to see whether it is the code indicating "no GPS information". If it is, the basic mask image pre-registered for the camera is set to be applied in the intruder detection process at the current timing (step S37).
If the latitude/longitude data of the GPS terminal 14 is not the code indicating "no GPS information", the latitude/longitude of the GPS terminal 14 is converted into XY coordinates (step S32). As the conversion method, for example, a general method of converting latitude/longitude into XY world coordinates can be used.
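The patent leaves the latitude/longitude conversion method open; as one illustrative possibility, a simple local equirectangular approximation around a site reference point could be used (the reference coordinates below are placeholders).

```python
import math

EARTH_RADIUS_M = 6_371_000.0
REF_LAT, REF_LON = 35.6812, 139.7671  # hypothetical site origin

def latlon_to_xy(lat_deg, lon_deg):
    """Approximate local XY (meters, east/north) relative to the site origin."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(REF_LAT), math.radians(REF_LON)
    x = EARTH_RADIUS_M * (lon - ref_lon) * math.cos(ref_lat)  # east
    y = EARTH_RADIUS_M * (lat - ref_lat)                      # north
    return x, y
```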
Next, it is determined whether the XY coordinates obtained by the conversion are within the shooting range of the target camera, and whether there is a mask position notification relating to another camera adjacent to the target camera (hereinafter, an adjacent camera) (step S33). A mask position notification is a notification issued when a worker is detected within the shooting range of an adjacent camera.
If the XY coordinates are not within the shooting range of the target camera and there is no mask position notification for an adjacent camera (that is, the worker is neither within the shooting range of the target camera nor near it), the pre-registered basic mask image is set to be applied in the intruder detection process at the current timing (step S37). If the XY coordinates are within the shooting range of the target camera (that is, the worker is within the shooting range), or if there is a mask position notification for an adjacent camera (that is, the worker is outside the shooting range but nearby), a mask image corresponding to those XY coordinates is created as follows.
First, the information on the pre-registered brought-in equipment is checked, the work range over a fixed time is estimated based on the size of the equipment, and the mask radius [work radius] is determined (step S34). For example, when no equipment is brought in (that is, when the maintenance work is done by workers only), the range a person can cover by running within a predetermined number of seconds is set as the mask radius [work radius]. When a long object is brought in, the range over which the long object can be moved within a predetermined number of seconds is set as the mask radius [work radius].
Next, the mask radius [viewing radius] is determined based on the illuminance data corresponding to the shooting range of the target camera and on the posture of the worker within that shooting range (step S35). Since the field of view the worker can see depends on the surrounding brightness, the mask radius [viewing radius] is adjusted to be smaller the lower the illuminance (that is, the darker it is) and larger the higher the illuminance (that is, the brighter it is). When the worker's downward duration exceeds a predetermined value, that is, when the worker is working facing downward, the worker does not move and cannot see far, so the mask radius [work radius] is set to 0 and the mask radius [viewing radius] is also adjusted to be small.
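A minimal sketch of how the two radii might be computed; every numeric constant (running speed, time window, lux breakpoints, downward-duration threshold) is an assumption, since the patent states only the qualitative relationships.

```python
RUN_SPEED_MPS = 3.0          # assumed human running speed
WINDOW_S = 5.0               # assumed "predetermined number of seconds"
DOWNWARD_LIMIT_S = 30.0      # assumed downward-duration threshold

def work_radius(equipment_length_m, downward_duration_s):
    if downward_duration_s >= DOWNWARD_LIMIT_S:
        return 0.0                                # head-down worker does not move
    return RUN_SPEED_MPS * WINDOW_S + equipment_length_m

def viewing_radius(illuminance_lux, downward_duration_s):
    # Darker surroundings -> smaller visible range (breakpoints are illustrative).
    if illuminance_lux < 10:
        r = 10.0
    elif illuminance_lux < 100:
        r = 30.0
    else:
        r = 60.0
    if downward_duration_s >= DOWNWARD_LIMIT_S:
        r *= 0.5                                  # head-down workers see less far
    return r
```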
Then, based on the basic mask image of the target camera, the mask image to be used in the intruder detection process at the current timing is generated (step S36). Specifically, a circular mask area is added to the basic mask image, centered on the worker's XY coordinates, with the larger of the mask radius [work radius] and the mask radius [viewing radius] as its radius. If a mask position notification has been issued, a mask area corresponding to the content of that notification (see image 58 in FIG. 5) is also added. When GPS information has been received from a plurality of workers, the circular mask area is calculated for each worker, and the union of these areas is used as the final circular mask area.
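For illustration, the circular mask areas could be drawn onto a copy of the basic mask image as follows; the world-to-pixel mapping and the meters-per-pixel scale stand in for camera calibration details the patent does not specify.

```python
import cv2

def build_mask(basic_mask, workers, world_to_pixel, meters_per_pixel):
    """Add a filled black (non-target) circle for each worker to a copy of the
    basic mask; the radius is the larger of the work and viewing radii."""
    mask = basic_mask.copy()
    for worker_xy, work_radius_m, viewing_radius_m in workers:
        cx, cy = world_to_pixel(worker_xy)
        radius_px = int(max(work_radius_m, viewing_radius_m) / meters_per_pixel)
        cv2.circle(mask, (int(cx), int(cy)), radius_px, 0, -1)  # filled black circle
    return mask
```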
Next, it is determined whether the circular mask area added to the basic mask image extends beyond the image, and if it does, whether there is an adjacent camera whose shooting range includes the area corresponding to the protruding portion (step S38). If there is such an adjacent camera, a mask position notification containing the worker's XY coordinates, the mask radius [work radius], and the mask radius [viewing radius] is issued (step S39). As a result, a mask area corresponding to the protruding portion is added to the mask image of the adjacent camera.
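A rough sketch of the protrusion check and the resulting mask position notification; the publish callback is a placeholder for whatever notification channel the system uses, and, for brevity, the sketch notifies every adjacent camera rather than checking which one's shooting range actually covers the protruding area as the patent does.

```python
def maybe_notify_adjacent(mask_shape, center_px, radius_px, worker_xy,
                          work_radius_m, viewing_radius_m, adjacent_cameras, publish):
    """Issue a mask position notification if the circular mask area spills
    outside the frame of the target camera."""
    h, w = mask_shape[:2]
    cx, cy = center_px
    spills = (cx - radius_px < 0 or cy - radius_px < 0 or
              cx + radius_px >= w or cy + radius_px >= h)
    if spills:
        payload = {
            "xy": worker_xy,                 # worker position in world XY
            "work_radius": work_radius_m,
            "viewing_radius": viewing_radius_m,
        }
        for camera_id in adjacent_cameras:   # simplified: notify all neighbours
            publish(camera_id, payload)
    return spills
```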
The mask images created by the mask image creation process are explained with reference to FIG. 5. In the mask images 52, 54, 56, and 58 shown in FIG. 5, the white portions are non-masked areas and the black portions are masked areas.
When an image 51 that does not contain a worker is captured, the pre-registered basic mask image 52 is applied as it is. When an image 53 that contains a worker is captured, a mask image 54 is created by adding a circular mask area centered on the worker's position to the basic mask image 52. When an image 55 is captured in which the worker is near the edge of the target camera's shooting range, a mask image 56 with a partially protruding circular mask area is created. In this case, even if the adjacent camera captures an image 57 that does not contain the worker, a mask image 58 is created for it by adding, as a mask area, the region corresponding to the portion that protrudes from mask image 56.
 以上のように、本例の監視システムでは、監視エリアを撮影するカメラ11と、カメラ11の撮影画像を用いて侵入物の検知処理を行う画像処理装置12と、監視エリア内で所定作業を行う作業員に携帯されるGPS端末14とを備え、画像処理装置12は、カメラ11の撮影画像における、GPS端末14により測位された位置に対応する座標を含む領域を、侵入物の検知処理の対象外とするマスク領域に設定するように構成されている。このように、作業員に携帯させたGPS端末14により作業員の位置を逐次認識し、その周辺を自動的にマスク領域に設定することで、作業員の誤検知を防止しつつ、侵入物の見逃しを低減することができる。 As described above, in the monitoring system of this example, the camera 11 that captures the surveillance area, the image processing device 12 that performs the detection processing of the intruder using the captured image of the camera 11, and the predetermined work are performed in the monitoring area. The image processing device 12 includes a GPS terminal 14 carried by a worker, and the image processing device 12 targets an intruder detection process in a region including coordinates corresponding to a position positioned by the GPS terminal 14 in an image captured by the camera 11. It is configured to be set in the mask area to be outside. In this way, the GPS terminal 14 carried by the worker sequentially recognizes the position of the worker and automatically sets the surrounding area in the mask area, thereby preventing false detection of the worker and preventing intruders. It is possible to reduce oversight.
 Further, in the monitoring system of this example, the mask area is set as a circular mask area centered on the coordinates corresponding to the position measured by the GPS terminal 14, using the larger of the mask radius [work radius] and the mask radius [viewing radius] set for the worker. A mask area that takes both the worker's work range and visible range into account can therefore be set easily. To simplify processing, a mask area may instead be set based on only one of the mask radius [work radius] and the mask radius [viewing radius].
 The monitoring system of this example is also configured to adjust the mask radius [viewing radius] according to the illuminance of the target camera's shooting range, to adjust the mask radius [work radius] and the mask radius [viewing radius] according to the worker's posture, and to adjust the mask radius [work radius] according to the equipment brought in by the worker. By adjusting the mask radius [work radius] and the mask radius [viewing radius] according to factors such as the working environment and the content of the work, a mask area that matches the actual circumstances of the work can be set.
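 The concrete adjustment rules are not given in the specification; the sketch below is therefore only an example of how illuminance, posture, and brought-in equipment might scale the two radii, with all thresholds and factors chosen arbitrarily.

```python
def adjusted_radii(base_work_r, base_view_r, lux, posture, equipment_reach=0.0):
    view_r = base_view_r * (0.5 if lux < 50 else 1.0)    # darker scene -> smaller visible range
    scale = {"standing": 1.0, "crouching": 0.7, "lying": 0.5}.get(posture, 1.0)
    work_r = (base_work_r + equipment_reach) * scale      # ladders etc. extend the work range
    return work_r, view_r * scale
```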
 Further, in the monitoring system of this example, when the mask area extends beyond the range of the target camera's captured image, the area corresponding to the protruding portion is set as a mask area in the captured image of the camera adjacent to the target camera. Consequently, even when the worker is outside the target camera's shooting range but nearby, erroneous detection of that worker as an intruder can be suppressed.
 In the description above, a circular mask area centered on the worker's position is used, but a mask area of another geometric shape, such as an ellipse, rectangle, or hexagon, may be set instead.
 Also, in the description above, the target/non-target areas for intruder detection are determined on the basis of the mask image, but the determination may instead be made in world coordinates. In that case, the object's position is expressed in the world coordinate system, the mask radius [work radius] and the mask radius [viewing radius] are converted to actual distances, and it is then determined whether the object's position falls within a range excluded from intruder detection.
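 A world-coordinate variant of this exclusion decision could look like the following sketch, where positions and radii are handled in metres instead of pixels; the data layout is assumed for illustration.

```python
import math

def is_excluded(object_xy_m, workers):
    for w in workers:
        radius_m = max(w["work_radius_m"], w["view_radius_m"])
        if math.dist(object_xy_m, w["xy_m"]) <= radius_m:
            return True   # inside a worker's range -> not treated as an intruder
    return False
```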
 Although the present invention has been described above on the basis of one embodiment, the present invention is not limited to the configurations described here and can of course be widely applied to systems with other configurations.
 For example, in the embodiment described above, a circular mask area is determined using the viewing radius or the work radius based on the worker's GPS position information, but the mask area is not limited to a circle and may be rectangular or polygonal. In that case, an area of a predetermined shape based on at least one of the work range (distance) or the viewing range (distance) set for the worker, centered on the coordinates based on the worker's position information, is set as the mask area.
 The present invention can also be provided, for example, as a method including the technical procedures relating to the above processing, as a program for causing a processor to execute the above processing, or as a storage medium that stores such a program in a computer-readable manner.
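 For a rectangular or polygonal mask area, the same base-mask approach applies; the sketch below, which uses OpenCV's polygon fill and is not taken from the specification, adds an arbitrary polygon to the mask.

```python
import numpy as np
import cv2

def add_polygon_mask(mask: np.ndarray, vertices):
    pts = np.array(vertices, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [pts], 0)   # 0 = excluded from detection, as in FIG. 5
    return mask
```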
 The scope of the present invention is not limited to the exemplary embodiments illustrated and described, but also includes all embodiments that provide effects equivalent to those intended by the present invention. Furthermore, the scope of the present invention may be defined by any desired combination of particular features among all of the disclosed features.
 The present invention can be used in a monitoring system that detects an intruder in a monitoring area.
 11: camera, 12: image processing device, 13: display terminal, 14: GPS terminal, 15: illuminance meter, 21: image input I/F, 22: processing memory, 23: CPU, 24: program memory, 25: image output I/F, 26: communication I/F, 27: bus, 31: image memory area, 32: mask image memory area, 33: work memory area, 41: object detection function, 42: posture detection function, 43: GPS reception function, 44: illuminance reception function, 45: action range acquisition function

Claims (7)

  1.  A monitoring system that detects an intruder in a monitoring area, comprising:
     a camera that captures the monitoring area;
     an image processing device that performs detection processing of the intruder using an image captured by the camera; and
     a positioning terminal carried by a worker who performs predetermined work in the monitoring area,
     wherein the image processing device sets an area of the image captured by the camera that includes coordinates corresponding to a position measured by the positioning terminal as an area excluded from the detection processing of the intruder.
  2.  The monitoring system according to claim 1,
     wherein the excluded area is an area based on at least one of a work range or a viewing range set for the worker, centered on the coordinates corresponding to the position measured by the positioning terminal.
  3.  The monitoring system according to claim 1 or 2,
     wherein the excluded area is adjusted according to the illuminance of the shooting range of the camera.
  4.  The monitoring system according to any one of claims 1 to 3,
     wherein the excluded area is adjusted according to the posture of the worker.
  5.  The monitoring system according to any one of claims 1 to 4,
     wherein the excluded area is adjusted based on equipment brought in by the worker.
  6.  The monitoring system according to any one of claims 1 to 5,
     wherein, when the excluded area extends beyond the range of the image captured by the camera, the image processing device sets an area of the image captured by a camera adjacent to that camera corresponding to the protruding area as an area excluded from the detection processing of the intruder.
  7.  A monitoring method for detecting an intruder in a monitoring area,
     wherein a worker who performs predetermined work in the monitoring area carries a positioning terminal, and
     wherein, when an image processing device performs detection processing of the intruder using an image captured by a camera that captures the monitoring area, an area of the image captured by the camera that includes coordinates corresponding to a position measured by the positioning terminal is excluded from the detection processing of the intruder.
PCT/JP2021/032264 2020-09-15 2021-09-02 Monitoring system and monitoring method WO2022059500A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022550457A JP7354461B2 (en) 2020-09-15 2021-09-02 Monitoring system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-154507 2020-09-15
JP2020154507 2020-09-15

Publications (1)

Publication Number Publication Date
WO2022059500A1 true WO2022059500A1 (en) 2022-03-24

Family

ID=80776926

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/032264 WO2022059500A1 (en) 2020-09-15 2021-09-02 Monitoring system and monitoring method

Country Status (2)

Country Link
JP (1) JP7354461B2 (en)
WO (1) WO2022059500A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08163652A (en) * 1994-11-30 1996-06-21 Mitsubishi Electric Corp Tresspass monitor system
JP2002334382A (en) * 2001-05-08 2002-11-22 Hitachi Kiden Kogyo Ltd System for managing entering/leaving persons
JP2015179984A (en) * 2014-03-19 2015-10-08 株式会社東芝 Image processing apparatus, and method and program therefor
WO2018198385A1 (en) * 2017-04-28 2018-11-01 株式会社 テクノミライ Digital register security system, method, and program

Also Published As

Publication number Publication date
JP7354461B2 (en) 2023-10-02
JPWO2022059500A1 (en) 2022-03-24

Similar Documents

Publication Publication Date Title
KR102021999B1 (en) Apparatus for alarming thermal heat detection results obtained by monitoring heat from human using thermal scanner
KR101766305B1 (en) Apparatus for detecting intrusion
US6937743B2 (en) Process and device for detecting fires based on image analysis
KR101464344B1 (en) Surveillance camera and image managing system, and method for detecting abnormal state by training normal state of surveillance image
WO2018096787A1 (en) Person's behavior monitoring device and person's behavior monitoring system
KR20140127574A (en) Fire detecting system using unmanned aerial vehicle for reducing of fire misinformation
KR20070028813A (en) Method and system for monitoring forest fire
KR20190046351A (en) Method and Apparatus for Detecting Intruder
CN111753780B (en) Transformer substation violation detection system and violation detection method
RU2268497C2 (en) System and method for automated video surveillance and recognition of objects and situations
CN113205659A (en) Fire disaster identification method and system based on artificial intelligence
KR102233679B1 (en) Apparatus and method for detecting invader and fire for energy storage system
CN115376269A (en) Fire monitoring system based on unmanned aerial vehicle
KR101656642B1 (en) Group action analysis method by image
KR101695127B1 (en) Group action analysis method by image
KR102357736B1 (en) Fire detection system
CN114005088A (en) Safety rope wearing state monitoring method and system
WO2022059500A1 (en) Monitoring system and monitoring method
KR101903615B1 (en) Visual observation system and visual observation method using the same
CN111966126A (en) Unmanned aerial vehicle patrol method and device and unmanned aerial vehicle
US20230064953A1 (en) Surveillance device, surveillance system, and surveillance method
KR20230121229A (en) Occupational safety and health education system through artificial intelligence video control and method thereof
EP0614155A2 (en) Motion detection system
KR101224534B1 (en) Fire detection device based on image processing with motion detect function
KR200191446Y1 (en) A forest fires sensing appararus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21869187

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022550457

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21869187

Country of ref document: EP

Kind code of ref document: A1