WO2016157327A1 - Video surveillance system and video surveillance method - Google Patents
Video surveillance system and video surveillance method
- Publication number
- WO2016157327A1 (PCT application PCT/JP2015/059737)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- monitoring
- camera
- predetermined
- predetermined event
- target position
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19639—Details of the system layout
- G08B13/19645—Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
Definitions
- the present invention relates to video surveillance technology.
- Patent Document 1 proposes a monitoring method in which a plurality of cameras including a rotating camera capable of pan / tilt / zoom control are linked to enlarge and display an intruding object with a minimum number of cameras without blind spots.
- in normal times, the two rotating cameras capture the monitored space over a wide area so as to cover each other's blind spots.
- in Patent Document 2, a surveillance camera system is proposed in which a master camera and a slave camera work together so as to be able to photograph a player who is the object to be monitored.
- in this method, the photographing direction of the slave camera is calculated from the photographing direction of the master camera and the position information of the master and slave cameras. By pointing the slave camera in the calculated direction, both the master camera and the slave camera can photograph the monitored object.
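The master–slave direction calculation described above can be sketched as follows. This is a minimal illustration under strong assumptions (flat 2-D floor-map coordinates, a known target range along the master's line of sight, pan only); the function name and parameters are hypothetical and not taken from the patent document.

```python
import math

def slave_pan_angle(master_pos, master_pan_deg, target_range, slave_pos):
    """Estimate the target's floor position from the master camera's pan
    angle and an assumed range along its line of sight, then return the
    pan angle (degrees from the +x axis) that points the slave camera at
    that position.  Positions are (x, y) floor-map coordinates."""
    theta = math.radians(master_pan_deg)
    target_x = master_pos[0] + target_range * math.cos(theta)
    target_y = master_pos[1] + target_range * math.sin(theta)
    return math.degrees(math.atan2(target_y - slave_pos[1],
                                   target_x - slave_pos[0]))
```

A real system would also derive tilt and zoom from calibration data, but the geometric idea is the same: locate the target in shared coordinates, then convert to each camera's pointing angles.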
- in each of the proposed methods, when a person such as an intruder is detected by one imaging device, another imaging device is directed toward that person.
- such a method reduces monitoring efficiency in a scene where many people come and go.
- the cameras are controlled one by one in response to every person who comes and goes, and monitoring efficiency is lowered.
- the present invention has been made in view of such circumstances, and provides a technique for efficiently monitoring with a plurality of imaging devices.
- the first aspect relates to a video surveillance system.
- the video monitoring system according to the first aspect includes a detection unit that detects a predetermined event based on an image captured by a first imaging device, and a control unit that controls a second imaging device so that the second imaging device images a predetermined position after the predetermined event is detected.
- the second aspect relates to a video monitoring method executed by at least one computer.
- the video monitoring method according to the second aspect includes detecting a predetermined event based on an image captured by a first imaging device, and, after the predetermined event is detected, controlling a second imaging device so that the second imaging device images a predetermined position.
- Another aspect of the present invention may be a program that causes at least one computer to execute the method of the second aspect, or a computer-readable recording medium on which such a program is recorded.
- This recording medium includes a non-transitory tangible medium.
- Brief description of the drawings: FIG. 1 conceptually shows a hardware configuration example of the video monitoring system in the first embodiment. FIG. 2 conceptually shows a processing configuration example of the monitoring control device in the first embodiment. FIG. 3 shows the relationship between the imaging areas of the surveillance cameras and the set monitoring area. FIG. 4 shows an example of the correspondence information stored in the correspondence storage unit. Further figures show a flowchart of an operation example of the monitoring control device in the first embodiment, an example of control of a surveillance camera, the relationship between the imaging region of a monitoring camera, a monitoring target position, and a detection target position, and an example of correspondence information additionally stored in the correspondence storage unit.
- consider a monitoring system that controls another monitoring camera when a certain monitoring camera detects an object such as a person or a car.
- a plurality of surveillance cameras can comprehensively monitor intruders and the like in cooperation with each other.
- such a monitoring system is assumed to be applied to prohibited areas that people are not normally allowed to enter, or to scenes where the appearance of people is limited to some extent.
- in such a system, camera control starts each time a person is detected. Therefore, in a scene where a large number of people come and go, such as city streets, large-scale commercial facilities, airports, terminal stations and platforms, leisure facilities, sports facilities, and stadiums,
- the monitoring cameras are controlled one by one upon each detection of a person, and the monitoring work cannot be performed efficiently.
- FIG. 1 is a diagram conceptually illustrating a hardware configuration example of a video monitoring system 1 (hereinafter sometimes abbreviated as system 1) in the first embodiment.
- the system 1 includes a monitoring control device 10, a plurality of monitoring cameras 9 (# 1), 9 (# 2) to 9 (#n), and the like.
- the plurality of surveillance cameras 9 (# 1), 9 (# 2) to 9 (#n) include at least one movable camera capable of changing its imaging direction. As long as the movable camera can change its imaging direction, it may be movable only up and down, or only left and right.
- the monitoring camera 9 (# 1) is a PTZ (Pan-Tilt-Zoom) camera.
- the other surveillance cameras 9 (# 2) and the like are fixed or movable cameras.
- each monitoring camera is referred to as “monitoring camera 9” unless it is necessary to distinguish the individual monitoring cameras.
- Each surveillance camera 9 is installed in a different place so that its imaging area overlaps with that of at least one other surveillance camera 9.
- Each surveillance camera 9 sends a video signal (image frame) to the communication unit 5.
- the transmission rate of the image frames that each surveillance camera 9 sends to the communication unit 5 is not limited. If the transmission rate is high, the monitoring control device 10 can acquire many image frames per unit time and can therefore perform high-precision monitoring control. The transmission rate may be determined according to the frame-rate specification of each monitoring camera 9, the communication capacity between the monitoring control device 10 and each monitoring camera 9, the accuracy required of the video monitoring system 1, and so on. Moreover, as long as each surveillance camera 9 can output a video signal, its performance and functions are not limited.
- the monitoring control device 10 is a so-called computer, and includes, for example, a CPU (Central Processing Unit) 2, a memory 3, an input / output interface (I / F) 4, a communication unit 5 and the like connected by a bus.
- the number of hardware elements is not limited, and these hardware elements can be collectively referred to as an information processing circuit.
- the hardware configuration of the monitoring control device 10 is not limited to the example shown in FIG.
- in addition to the CPU 2, an application-specific integrated circuit (ASIC), a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), and the like may be included.
- the memory 3 is a RAM (Random Access Memory), a ROM (Read Only Memory), or an auxiliary storage device (such as a hard disk).
- the input / output I / F 4 can be connected to a user interface device such as a display device 7, an input device 8, a printer (not shown), a projection device (not shown), or the like.
- the display device 7 is a device, such as an LCD (Liquid Crystal Display) or CRT (Cathode Ray Tube) display, that displays a screen corresponding to drawing data processed by the CPU 2 or a GPU (Graphics Processing Unit) (not shown).
- the display device 7 may display the images obtained from the video signals sent from the surveillance cameras 9.
- the input device 8 is a device, such as a keyboard or mouse, that receives user operation input.
- the monitoring control device 10 may be implemented as a computer of a mobile device (smartphone, tablet, etc.), in which case a touch panel integrating the display device 7 and the input device 8 may be connected to the input / output I / F 4.
- the communication unit 5 exchanges signals with other computers and devices by wired communication or wireless communication.
- the communication unit 5 communicates with a plurality of monitoring cameras 9.
- the communication method between the communication unit 5 and each monitoring camera 9 is not limited.
- the communication unit 5 acquires a video signal from each monitoring camera 9 and sends an instruction signal to the monitoring camera 9.
- a portable recording medium or the like can be connected to the communication unit 5.
- FIG. 2 is a diagram conceptually illustrating a processing configuration example of the monitoring control device 10 in the first embodiment.
- the monitoring control apparatus 10 includes an acquisition unit 11, an image storage unit 12, a detection unit 13, a camera control unit 14, a correspondence storage unit 15, an output processing unit 16, an input unit 17, a calculation unit 18, and the like.
- Each of these processing modules is realized, for example, by the CPU 2 executing a program stored in the memory 3.
- the program may be installed via the input / output I / F 4 or the communication unit 5 from a portable recording medium such as a CD (Compact Disc) or memory card, or from another computer on the network, and stored in the memory 3.
- the acquisition unit 11 acquires data of an image captured by each monitoring camera 9 from each monitoring camera 9. Specifically, the acquisition unit 11 sequentially acquires image data from video signals sent from each monitoring camera 9. At this time, the acquisition unit 11 may acquire the image data by capturing the input video signal at an arbitrary timing. The acquisition unit 11 stores the acquired image data in the image storage unit 12 in association with the identification information of the monitoring camera 9 that captured the image. The acquisition unit 11 acquires an image via the communication unit 5, for example.
- the acquisition unit 11 further acquires camera parameters that can specify the imaging direction of the monitoring camera 9, and stores the camera parameters in the image storage unit 12 in further association with the image data and the identification information of the monitoring camera 9.
- These camera parameters indicate, for example, the position of the monitoring camera 9, the posture of the monitoring camera 9 representing its imaging direction, the zoom value, and so on (and may include internal parameters); they may be acquired from each monitoring camera 9 or from the camera control unit 14.
- the image data may be referred to as an image.
- the input unit 17 receives an input by the user for setting a monitoring position for detecting a predetermined event.
- a “monitor position” is defined as any static point, line, plane or space in a common coordinate system or camera coordinate system.
- The monitoring position is, for example, the "position of a video monitoring line" or the "position of a monitoring area".
- the common coordinate system is coordinates that are recognized in common between a plurality of monitoring cameras 9 and between images captured by the monitoring cameras 9.
- a GPS (Global Positioning System) coordinate system, a coordinate system defined on a floor map of a facility, or the like can be used.
- the camera coordinate system is coordinates set on an image captured by a certain monitoring camera 9.
- the input unit 17 may accept the setting of the monitoring position in the camera coordinate system of the predetermined monitoring camera 9 or in the common coordinate system.
- the input unit 17 converts the monitoring position into the common coordinate system based on the camera parameters described above.
- Various known methods can be applied to the conversion between the common coordinate system and the camera coordinate system using the camera parameters.
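When the monitored surface is a flat floor, one such known method is a 3×3 planar homography between the floor plane and the image plane. The sketch below (a hypothetical helper on plain nested lists rather than a matrix library) only illustrates applying such a mapping; estimating the homography itself is a separate calibration step.

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 planar homography H (row-major nested
    lists), dividing by the projective scale w.  The same routine converts
    either way: camera coordinates -> common floor coordinates with H, and
    back with the inverse of H."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```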
- the input unit 17 outputs the received information on the common coordinate system of the monitoring position to the calculation unit 18 together with the identification information of the monitoring position.
- the input unit 17 receives the monitoring position setting input by the user operating the input device 8 illustrated in FIG. 1 via the input / output I / F 4.
- the “video monitoring line” is a line specified by the user and superimposed on a monitoring image or the like, used to detect an object that crosses (passes through) or touches it.
- the “monitoring area” is a part of an area such as a monitoring image set by the user for detecting a predetermined event. Other names for the monitoring area include AOI (Area Of Interest), ROI (Region Of Interest), sensing area, restricted area, and the like.
- FIG. 3 is a diagram showing the relationship between the imaging area of the surveillance camera and the set surveillance area.
- the monitoring cameras 9 (# 1) and 9 (# 2) are installed so that the imaging areas overlap.
- the monitoring area AM is set so as to partially overlap the region where the two imaging areas overlap.
- the monitoring area AM is set to a plane area on the common coordinate system, that is, a plane area on the floor surface (ground) in the real world.
- in the following description, it is assumed that the monitoring cameras 9 are installed and the monitoring area AM is set as shown in FIG. 3.
- the calculation unit 18 calculates camera parameters for imaging the monitoring position for each monitoring camera 9 capable of imaging the monitoring position from the monitoring position represented by the common coordinate system output from the input unit 17.
- the method by which the calculation unit 18 calculates camera parameters is not limited, and various known methods may be used.
- for example, the correspondence storage unit 15 may hold in advance, for each monitoring camera 9, the correspondence between the range that the camera can capture in the common coordinate system and the camera parameter group for capturing that range. In that case, the calculation unit 18 can use the correspondence stored in the correspondence storage unit 15 to determine camera parameters with which a given position (monitoring position) on the common coordinate system can be imaged.
- the calculation unit 18 may calculate the camera parameters so as to capture the video surveillance line at the center of the angle of view. For example, the calculation unit 18 may calculate the camera parameters so that the monitoring area occupies about half of the entire angle of view.
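One way such a parameter could be derived, sketched under the assumption of a simple pinhole model with a known camera-to-area distance (the function and its inputs are illustrative, not the patent's formula): the horizontal field of view that makes the monitoring area fill a target fraction of the image width follows from fov = 2·atan(view_width / (2·distance)).

```python
import math

def fov_for_fill_ratio(area_width, distance, fill_ratio=0.5):
    """Horizontal field of view (degrees) that makes a monitoring area of
    the given width occupy `fill_ratio` of the image width at the given
    camera-to-area distance.  The camera's zoom value is then whatever
    setting yields this field of view."""
    view_width = area_width / fill_ratio  # total width the image must cover
    return math.degrees(2.0 * math.atan(view_width / (2.0 * distance)))
```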
- the calculation unit 18 outputs to the correspondence storage unit 15 the identification information (ID) of each monitoring position, the coordinate information of each monitoring position in the common coordinate system, and, for each monitoring camera 9 capable of imaging the monitoring position, the camera parameters for imaging it.
- the detection unit 13 detects a predetermined event at a monitoring position included in an image captured by each monitoring camera 9.
- the detection unit 13 can specify the monitoring position in an image based on the camera parameters of the monitoring camera 9 at the time the image was captured and the common coordinates of the monitoring position stored in the correspondence storage unit 15.
- specifically, based on the correspondence between the range that each monitoring camera 9 can capture in the common coordinate system and the camera parameter group for capturing that range, together with the camera parameters at the time the image was captured, the camera coordinate system of the image can be converted into the common coordinate system.
- the correspondence relationship is held in advance in the correspondence storage unit 15 as described above, for example.
- the detection unit 13 can determine whether or not the monitoring position is included in the image from the relationship between the range of the common coordinate system appearing in the image and the common coordinates of the monitoring position.
- the detection unit 13 can convert the common coordinates of the monitoring position into the camera coordinates of the monitoring camera 9.
- the detection unit 13 detects a predetermined event at the monitoring position (monitoring position in the image) represented by the camera coordinate system.
- the method for specifying the monitoring position of the detection unit 13 is not limited.
- the predetermined event detected by the detection unit 13 is various situations to be monitored in the system 1.
- the detection unit 13 detects a predetermined situation caused by the object as the predetermined event.
- the “object” is a part or all of a predetermined object such as a person, an animal, a vehicle such as a car, or a suitcase.
- the predetermined event may be passage of the object through the video surveillance line, a predetermined situation of the object in the monitoring area (intrusion, leaving, appearance, disappearance, fighting, staying, wandering, falling, rising, sitting, moving-direction change, reverse running, shoplifting, detouring, damage, taking away, leaving behind, graffiti, etc.), movement of the object along a specific route defined by line segments, and so on.
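Passage of an object through the video surveillance line can be illustrated as a segment-intersection test between the object's step (its position in the previous and current frames) and the line; the helper below is a generic geometric sketch, not the patent's detection logic.

```python
def crosses_line(p_prev, p_cur, a, b):
    """True if the object's move from p_prev to p_cur properly crosses the
    surveillance line segment a-b.  Uses signed triangle areas: the move
    crosses the line iff each segment's endpoints lie on opposite sides of
    the other segment."""
    def side(p, q, r):
        # Sign of the cross product (q - p) x (r - p).
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    d1, d2 = side(a, b, p_prev), side(a, b, p_cur)
    d3, d4 = side(p_prev, p_cur, a), side(p_prev, p_cur, b)
    return d1 * d2 < 0 and d3 * d4 < 0
```

The sign of d1 also indicates which side the object came from, which is how a direction attribute (left-to-right vs. right-to-left) could be honored.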
- the detection unit 13 may detect that a predetermined situation has occurred in a predetermined number or more of objects as a predetermined event.
- the “object” is as described above.
- the predetermined situation is a situation that can be caused by an object, and includes, for example, the situation exemplified above, such as passage of a video surveillance line and intrusion in a surveillance area.
- the predetermined event detected in this case can be described as an “abnormal state” because it is a situation that requires special attention. For example, when the object is a person, a crowd is formed when a plurality of people stay in the monitoring area. The detection unit 13 detects the state in which the crowd is formed as an abnormal state.
- the detection unit 13 may detect as an “abnormal state” particularly when the above-described predetermined situation occurs at approximately the same time on a predetermined number or more objects. For example, when the object is a person, when a plurality of people sit down at the same time in the monitoring area, it is considered that an abnormal situation occurs, such as a certain person suddenly firing a handgun.
- the detection unit 13 detects a situation in which a large number of people are sitting at once as an abnormal state.
- the video surveillance line may have a direction attribute (right to left, left to right, both directions, etc.), and the system may be designed to detect a predetermined event only when an object passes in the direction specified by the user. It may also be designed so that a predetermined event is detected when the number of people passing through the video monitoring line per unit time exceeds a predetermined number, and such a state may further be detected as an abnormal state.
- the detection unit 13 detects an object from the acquired image in order to detect a predetermined event.
- the detection unit 13 detects an object using various existing methods. For example, the detection unit 13 detects the object from the acquired image by the background-difference method. In that case, the detection unit 13 constructs a model representing background information from a plurality of images input along a time series, and detects a moving object using the model. Most simply, the detection unit 13 uses as the background model a background image generated by averaging the still regions of the image over a plurality of frames. The detection unit 13 then calculates the difference between the target image and the background image and detects regions with a large difference as target objects.
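The background-difference scheme just described can be sketched as a toy version on nested-list grayscale "images" (a real detector would use an image-processing library and morphological cleanup; the function names and the running-average update rule are illustrative assumptions):

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha)*bg + alpha*frame,
    so slow scene changes are absorbed while moving objects are not."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels whose absolute difference from the background model
    exceeds the threshold as candidate object pixels."""
    return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```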
- the detection unit 13 may detect directly using a model of an object such as a person without using a background model.
- the model used here may be a model representing the entire person or a model representing a part of the person.
- the detection unit 13 may detect a face or head using a face detector or head detector that models and detects a face or head as a part of a person.
- the detection unit 13 may detect a target object using a detector that detects a part of the person region, such as the upper body or lower body.
- the detection unit 13 detects a predetermined event using various existing methods. For example, the detection unit 13 detects that a target object changes to a predetermined state while tracking the detected target object across a plurality of images. The predetermined state after the change may be held in advance as an image feature amount. Well-known methods may be used for tracking a target object between images and for detecting a change in its state. Alternatively, the detection unit 13 may detect an event corresponding to the predetermined state by detecting that state directly, using the image feature amount of a target object in the predetermined state.
- the detection unit 13 detects a predetermined event at the monitoring position in the image by converting the common coordinates of the monitoring position stored in the correspondence storage unit 15 into the camera coordinate system. Therefore, when detecting the predetermined event, the detecting unit 13 can identify the ID of the monitoring position where the predetermined event or the abnormal state is detected based on the information stored in the correspondence storage unit 15. The ID of the monitoring position where the predetermined event or abnormal state is detected is used, for example, by the camera control unit 14 described later.
- when the predetermined event is detected, the camera control unit 14 selects, from among the plurality of monitoring cameras 9, monitoring cameras 9 that can capture the monitoring position (for example, the position of the video monitoring line or the position of the monitoring area) where the event was detected, and controls the selected cameras so as to image that monitoring position.
- the camera control unit 14 excludes unselected monitoring cameras 9 from control when the predetermined event is detected.
- specifically, the camera control unit 14 selects as a control target a monitoring camera 9, other than the monitoring camera 9 that captured the image in which the predetermined event was detected, that is capable of imaging the monitoring position where the predetermined event was detected.
- the camera control unit 14 refers to the correspondence relationship information stored in the correspondence storage unit 15 illustrated in FIG. 4 and selects the monitoring camera 9 to be controlled.
- FIG. 4 is a diagram illustrating an example of correspondence information stored in the correspondence storage unit 15.
- the correspondence storage unit 15 stores information indicating a plurality of correspondences among identification information of a monitoring camera 9 (camera ID), identification information of a monitoring position (monitoring position ID), the common coordinates of the monitoring position, camera parameters, and the like.
- the camera parameters included in the correspondence information are the parameters with which the monitoring camera 9 specified by the camera ID captures the monitoring position specified by the monitoring position ID, and indicate the posture, zoom value, and the like of the monitoring camera 9.
- the common coordinates of the monitoring position may be defined by a line segment such as a video monitoring line or may be defined by an area such as a monitoring area.
- FIG. 4 shows an example in which the common coordinates are defined by the X axis and the Y axis set on a street map.
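The correspondence records of FIG. 4 can be modeled as a simple table keyed by camera ID and monitoring position ID; the record type and lookup below are a hypothetical sketch of that structure, not the patent's storage format.

```python
from dataclasses import dataclass

@dataclass
class Correspondence:
    camera_id: str        # e.g. "001"
    position_id: str      # e.g. "002"
    common_coords: tuple  # line endpoints or region vertices on the street map
    camera_params: dict   # posture / zoom needed to image the position

def cameras_for_position(records, position_id, exclude_camera=None):
    """Cameras (with their parameters) that can image a monitoring
    position, optionally excluding the camera that detected the event."""
    return [(r.camera_id, r.camera_params) for r in records
            if r.position_id == position_id and r.camera_id != exclude_camera]
```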
- the camera control unit 14 acquires the identification information (ID) of the monitoring position where the predetermined event is detected from the detection unit 13.
- the camera control unit 14 may directly refer to the correspondence information stored in the correspondence storage unit 15 and specify the ID of the monitoring position where the predetermined event was detected. That is, the camera control unit 14 can specify the ID of the monitoring position using the camera parameters acquired together with the image captured by the monitoring camera 9 (# 1). For example, the camera control unit 14 selects, from "parameter 01" and "parameter 02" associated with the ID "001" of the monitoring camera 9 (# 1), the parameter closer to the camera parameters acquired together with the image in which the predetermined event was detected. The camera control unit 14 can then specify the monitoring position identified by the ID "002" associated with the selected "parameter 02" as the monitoring position where the predetermined event was detected.
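The "closer parameter" selection in this example can be sketched as a nearest-neighbor choice over pan/tilt/zoom values; the distance metric and field names below are assumptions for illustration.

```python
def nearest_parameter_id(current, candidates):
    """Among (position_id, params) candidates, return the position ID whose
    stored parameter set is closest (Euclidean distance over pan, tilt and
    zoom) to the parameters in effect when the event image was captured."""
    def dist(p, q):
        return sum((p[k] - q[k]) ** 2 for k in ("pan", "tilt", "zoom")) ** 0.5
    return min(candidates, key=lambda c: dist(current, c[1]))[0]
```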
- when the camera control unit 14 has specified the ID of the monitoring position where the predetermined event was detected, it can acquire from the correspondence storage unit 15 the IDs and camera parameters of the monitoring cameras 9 (control targets) that are associated with the specified monitoring position ID, other than the monitoring camera 9 that captured the image in which the predetermined event was detected.
- the camera control unit 14 controls the monitoring camera 9 specified by the acquired ID using the acquired camera parameter.
- as a result, a monitoring camera 9 other than the one that captured the image in which the predetermined event was detected images the monitoring position where the event was detected.
- the camera control unit 14 can control the monitoring camera 9 by sending the camera parameter to the monitoring camera 9 so that the acquired camera parameter is set in the monitoring camera 9 to be controlled.
- the camera control unit 14 may transmit a control signal to the monitoring camera 9 so that the posture or zoom value indicated by the acquired camera parameter is obtained.
- the camera control unit 14 may change the imaging direction of the monitoring camera 9, may change other parameters (such as a zoom value) of the monitoring camera 9, or may change both.
- a specific control method is not limited, as long as a monitoring camera 9 that has not been imaging the monitoring position where the predetermined event was detected becomes able to image it.
- the selection method of the monitoring camera 9 to be controlled and the control method of the monitoring camera 9 are not limited to the above example, and various known methods may be used.
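The two control styles mentioned above (sending a full camera parameter, or sending control signals until the indicated posture and zoom are reached) can be sketched as below; the Camera class and its fields are hypothetical stand-ins for a real PTZ interface.

```python
# Sketch of the two control styles described above: (1) setting the acquired
# camera parameter directly on the camera, and (2) sending incremental control
# signals that step the posture toward the target. The Camera class and its
# fields are hypothetical, not an actual device API.

class Camera:
    def __init__(self, pan=0.0, tilt=0.0, zoom=1.0):
        self.pan, self.tilt, self.zoom = pan, tilt, zoom

    def set_parameter(self, pan, tilt, zoom):
        # style 1: the acquired camera parameter is set as-is
        self.pan, self.tilt, self.zoom = pan, tilt, zoom

    def nudge(self, dpan, dtilt):
        # style 2: a control signal changes the posture step by step
        self.pan += dpan
        self.tilt += dtilt

cam = Camera()
cam.set_parameter(pan=30.0, tilt=-10.0, zoom=2.0)  # point at the monitoring position
cam.nudge(5.0, 2.0)                                # small corrective step
```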
- other information that enables the monitoring camera 9 to image the monitoring position instead of the camera parameter may be associated with the camera ID and the ID of the monitoring position.
- the correspondence relationship information stored in the correspondence storage unit 15 may be information that associates the identification information of a plurality of monitoring cameras 9 capable of imaging the same monitoring position with the camera parameters each of those monitoring cameras 9 uses to image that monitoring position.
- the correspondence relationship information stored in the correspondence storage unit 15 may not include information used for controlling the imaging direction of the monitoring camera 9 to be controlled.
- the camera control unit 14 may control the monitoring camera 9 to be controlled while sequentially checking the images obtained from it after each control step, until the monitoring position to be imaged is included in the image.
- once the monitoring position is included in the image, the camera control unit 14 stops the control of the monitoring camera 9.
- the output processing unit 16 causes the display device 7 to display an image captured by each monitoring camera 9 stored in the image storage unit 12. Further, the output processing unit 16 may cause the display device 7 to display the image acquired by the acquisition unit 11. For example, the output processing unit 16 causes the display device 7 to always display the video captured by each monitoring camera 9. When a predetermined event is detected by the detection unit 13, the output processing unit 16 can also display an image in which the predetermined event is detected on the display device 7 with emphasis over other images. In addition, when a predetermined event is detected, the output processing unit 16 can also output to an output device other than the display device 7 (printer, audio output device, LED (Light Emitting Diode), etc.). In this embodiment, the output form of the image captured by each surveillance camera 9 is not limited.
- FIG. 5 is a flowchart illustrating an operation example of the monitoring control device 10 according to the first embodiment.
- the video monitoring method in the first embodiment is executed by at least one computer (CPU 2), such as the monitoring control device 10. Since each step corresponds to the processing of the respective processing modules of the monitoring control device 10 described above, the details of each step are omitted as appropriate.
- the monitoring control device 10 receives an input of the setting of the monitoring position by the user (S51). For example, the monitoring control device 10 receives the setting of the monitoring position in the camera coordinate system of a predetermined monitoring camera 9 or in the common coordinate system. When the monitoring position is set in the camera coordinate system, the monitoring control device 10 converts it to the common coordinate system based on the camera parameters of the camera that captured the image on which the setting was made.
- the monitoring control device 10 calculates, from the common coordinates of the monitoring position obtained from the input received in (S51), the camera parameters for imaging the monitoring position for each monitoring camera 9 capable of imaging it (S52). This camera parameter calculation method is as described above.
- the monitoring control apparatus 10 determines the ID of the monitoring position set by the input received in (S51), and stores the ID of the monitoring position, the common coordinates of the monitoring position, and the camera parameters for each monitoring camera 9 calculated in (S52) in the correspondence storage unit 15 in association with each other (S53).
- as a result, for the monitoring position input by the user, the correspondence storage unit 15 stores the identification information (ID), the common coordinates, and the camera parameters for imaging that position with each monitoring camera 9 capable of imaging it, in association with each other.
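The record stored in (S53) might look like the following sketch, assuming a simple keyed table; the field names and the helper `cameras_for_position` are illustrative, not part of the patent.

```python
# Hypothetical layout of the correspondence storage unit 15:
# monitoring-position ID -> common coordinates plus one camera parameter per
# monitoring camera 9 able to image that position. All names are assumptions.

correspondence = {
    "002": {
        "common_coords": [(10.0, 25.0), (18.0, 25.0)],  # e.g. a video monitoring line
        "cameras": {
            "001": {"pan": 40.0, "tilt": -5.0, "zoom": 2.0},
            "003": {"pan": -15.0, "tilt": -8.0, "zoom": 1.5},
        },
    },
}

def cameras_for_position(pos_id, exclude_camera=None):
    """Return (camera_id, parameter) pairs that can image a monitoring position,
    optionally excluding the camera whose image triggered the detection."""
    cams = correspondence[pos_id]["cameras"]
    return [(cid, p) for cid, p in cams.items() if cid != exclude_camera]

# Event detected in camera "001"'s image: the other capable cameras are steered.
targets = cameras_for_position("002", exclude_camera="001")
```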
- the monitoring control device 10 acquires data of images captured by each monitoring camera 9 from each monitoring camera 9 (S54).
- the monitoring control apparatus 10 stores the acquired image data in the image storage unit 12 in association with the identification information of the monitoring camera 9 that captured the image.
- the monitoring control apparatus 10 further acquires camera parameters of the monitoring camera 9 that has captured the image, and stores the camera parameters in the image storage unit 12 in further association with the image data and the identification information of the monitoring camera 9.
- the monitoring control device 10 specifies the monitoring position (for example, the position of the video monitoring line or the position of the monitoring area) in the image acquired in (S54) or in an image extracted from the image storage unit 12 (S55).
- for this specification, the camera parameter acquired in (S54) and the common coordinates of the monitoring position stored in the correspondence storage unit 15 are used.
- the specific method for specifying the monitoring position from the image is as described above. Note that the process of S55 is executed every time the PTZ of the surveillance camera is controlled (every time the imaging range changes).
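As an illustration of specifying the monitoring position in the image, the following toy sketch maps common (map) coordinates into image coordinates with a simplified 2D similarity transform standing in for a full camera projection; all transform parameters are assumptions.

```python
import math

# Toy sketch of S55: re-project the stored common coordinates of a monitoring
# position into the current image. A 2D rotate-and-scale replaces a real
# camera projection model; cam_pos, cam_pan_deg and pixels_per_meter stand in
# for the camera parameters acquired in S54 and are purely illustrative.

def common_to_image(pt, cam_pos, cam_pan_deg, pixels_per_meter):
    """Rotate a map point into the camera frame and scale to pixel units."""
    dx, dy = pt[0] - cam_pos[0], pt[1] - cam_pos[1]
    a = math.radians(cam_pan_deg)
    u = math.cos(a) * dx + math.sin(a) * dy
    v = -math.sin(a) * dx + math.cos(a) * dy
    return (u * pixels_per_meter, v * pixels_per_meter)

# The monitoring line's common coordinates, re-projected; in the patent this
# happens every time the PTZ (and hence the imaging range) changes.
line_img = [common_to_image(p, cam_pos=(0.0, 0.0), cam_pan_deg=90.0,
                            pixels_per_meter=10.0) for p in [(0.0, 2.0), (0.0, 5.0)]]
```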
- the monitoring control device 10 detects a predetermined event at the monitoring position specified in (S55) (S56).
- the monitoring control device 10 detects, as the predetermined event, one or more of, for example, the passage of a target object through the video monitoring line, a predetermined state of a target object in the monitoring area, and the movement of a target object along a specific route defined by a line segment.
- the monitoring control apparatus 10 may detect an abnormal state as a predetermined event. The contents of the predetermined event and the detection method of the predetermined event are as described above.
- the monitoring control device 10 selects, from among the plurality of monitoring cameras 9, a monitoring camera 9 capable of imaging the monitoring position specified in (S55), other than the monitoring camera 9 that captured the image acquired in (S54) (S57).
- the method for selecting the monitoring camera 9 is also as described above.
- the monitoring control device 10 controls the monitoring camera 9 selected in (S57) so as to image the monitoring position specified in (S55) (S58). For example, the monitoring control device 10 changes the imaging direction of the monitoring camera 9 selected in (S57) so that the monitoring position can be imaged.
- the control method of the monitoring camera 9 is also as described above.
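The S54-S58 loop can be condensed into the following sketch; `detect_event`, the camera list, and the `steer` callback are placeholders for the components described above, not the patent's interfaces.

```python
# Condensed sketch of one iteration of S54-S58: acquire a frame, look for the
# predetermined event at the monitoring position, then select and steer every
# other camera able to image that position. All callables are placeholders.

def on_frame(source_camera, frame, detect_event, capable_cameras, steer):
    """Run one monitoring iteration; return the IDs of the steered cameras."""
    if not detect_event(frame):                                        # S56
        return []
    others = [cid for cid in capable_cameras if cid != source_camera]  # S57
    for cid in others:
        steer(cid)                                                     # S58
    return others

steered = []
result = on_frame("002", frame={"person_in_area": True},
                  detect_event=lambda f: f["person_in_area"],
                  capable_cameras=["001", "002"],
                  steer=steered.append)
```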
- FIG. 6 is a diagram conceptually illustrating an example of control of the monitoring camera 9.
- the monitoring control device 10 receives an input for setting the position of the monitoring area AM (S51). This input may be performed by the user's range designation operation on the image captured by the monitoring camera 9 (# 2).
- the monitoring control device 10 calculates common coordinates of the monitoring area AM. Then, the monitoring control device 10 calculates camera parameters for imaging the position of the monitoring area AM with the monitoring cameras 9 (# 1) and (# 2) capable of imaging the position of the monitoring area AM (S52).
- the monitoring control device 10 may hold the camera parameter in advance.
- the monitoring control apparatus 10 stores the position ID of the monitoring area AM, the common coordinates of the monitoring area, the camera parameters of the monitoring camera 9 (# 1), and the camera parameters of the monitoring camera 9 (# 2) in the correspondence storage unit 15 in association with each other (S53).
- the monitoring control device 10 acquires data of an image captured by the monitoring camera 9 (# 2) and camera parameters at the time of capturing the image.
- the monitoring control device 10 may hold the camera parameter in advance.
- the monitoring control device 10 identifies the position of the monitoring area AM in the image (S55), and detects that the human OB1 has entered the position of the monitoring area AM as a predetermined event (S56).
- the monitoring camera 9 (# 1) is imaging in the imaging direction D1, and cannot capture the position of the monitoring area AM.
- the monitoring control device 10 selects the monitoring camera 9 (# 1), other than the monitoring camera 9 (# 2), as a monitoring camera capable of imaging the position of the monitoring area AM (S57). Then, the monitoring control device 10 controls the selected monitoring camera 9 (# 1) so as to image the position of the monitoring area AM (S58). That is, the monitoring control device 10 performs control so that the imaging direction of the monitoring camera 9 (# 1) changes from D1 to D2. Thereby, the position of the monitoring area AM is imaged by both monitoring cameras 9 (# 1) and (# 2), and images capturing the monitoring area AM from different directions can be obtained.
- a predetermined event is detected at a monitoring position (for example, a position of a video monitoring line or a position of a monitoring area) included in an image captured by a certain monitoring camera 9.
- the surveillance cameras 9 other than the surveillance camera 9 that captured the image in which the predetermined event was detected are controlled so as to image that monitoring position.
- as a result, the monitoring position where the predetermined event was detected is imaged by two or more monitoring cameras 9. Therefore, according to the first embodiment, the state of the monitoring position (for example, the position of the video monitoring line or the position of the monitoring area) at the time the predetermined event occurs, and the state of the predetermined event itself, can be monitored in detail from multiple directions.
- in the first embodiment, the surveillance cameras 9 are controlled in response to detection of a predetermined event such as the passage of an object through the video monitoring line, a predetermined situation of an object in the monitoring area, or the movement of an object along a specific route defined by a line segment. Therefore, compared with a method in which surveillance cameras are controlled one by one upon detection of an individual person, the monitoring work can be performed efficiently even in a situation where a large number of people come and go or a crowd forms.
- it is detected that a predetermined situation has occurred in a predetermined number or more of objects, and the monitoring camera 9 is controlled in response to this detection. According to this, since the monitoring camera 9 is controlled only upon the occurrence of an event requiring particular attention, such as an abnormal state, the efficiency of the monitoring work can be further improved.
- a movable monitoring camera 9 is included like a PTZ camera, and the movable monitoring camera 9 is controlled to image a monitoring position where a predetermined event is detected. Accordingly, by using the movable monitoring camera 9, it is possible to comprehensively monitor a wide area with a small number of monitoring cameras, and in conjunction with the detection of a predetermined event, the position of the video monitoring line or the monitoring area A monitoring position such as a position can be preferentially monitored.
- the processing configuration of the monitoring control device 10 in the second embodiment is the same as that in the first embodiment (see FIG. 2).
- the processing contents shown below are different from the first embodiment.
- the detection unit 13 detects a predetermined event at a predetermined position included in an image captured by a certain monitoring camera 9.
- the camera control unit 14 controls the other monitoring camera 9 so as to capture another position corresponding to the predetermined position where the detection unit 13 has detected the predetermined event.
- hereinafter, the predetermined position at which a predetermined event is detected from an image of a certain monitoring camera 9 is referred to as the "detection target position", and the position monitored by one or more other, controlled monitoring cameras 9 is referred to as the "monitoring target position". In other words, the detection unit 13 detects a predetermined event at a detection target position, different from the monitoring target position, included in an image captured by a certain monitoring camera 9.
- the "detection target position" is a position determined in advance for detecting a predetermined event, and is a position that can be imaged by at least one monitoring camera 9.
- the “monitoring target position” is a position to be monitored provided in association with one or more “detection target positions”, and is a position that can be imaged by at least one monitoring camera 9.
- the “detection target position” and the “monitoring target position” are set to any static point, line, plane, or space in the real world.
- “Detection target position” corresponds to, for example, “position of video monitoring line” or “position of monitoring area” in the first embodiment.
- the “monitoring target position” is an arbitrary position of a line segment or a region.
- the correspondence storage unit 15 has an ID of a detection target position, common coordinates of the detection target position, an ID of a monitoring target position corresponding to the detection target position, and a monitor capable of imaging the monitoring target position. The correspondence relationship with the camera parameter for imaging the monitoring target position for each camera 9 is stored.
- the predetermined event detected by the detection unit 13 is as described above. However, since it is the monitoring target position that is monitored by the control of the monitoring cameras 9, the occurrence of a predetermined event at the detection target position serves as a trigger indicating that the monitoring target position needs to be monitored. The detection target position for a given monitoring target position and the content of the predetermined event to be detected are therefore determined based on such a relationship. For example, the detection target position is set to a position that a person heading for the monitoring target position is likely to pass.
- in this case, the detection unit 13 detects, as the predetermined event, a state in which a person moves in the direction of the monitoring target position (passage through the video monitoring line in a predetermined direction) or a state in which people stay at the detection target position.
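The directional condition just described (a person moving in the direction of the monitoring target position) could be checked, for example, from two successive tracked positions, as in this illustrative sketch; the coordinates and angular tolerance are assumptions.

```python
import math

# Illustrative check: from two successive positions of a tracked person at the
# detection target position, decide whether the motion points toward the
# monitoring target position within an angular tolerance. All values are
# assumptions, not the patent's detection algorithm.

def moving_toward(prev_pt, cur_pt, target_pt, tol_deg=45.0):
    vx, vy = cur_pt[0] - prev_pt[0], cur_pt[1] - prev_pt[1]   # motion vector
    tx, ty = target_pt[0] - cur_pt[0], target_pt[1] - cur_pt[1]  # to target
    norm = math.hypot(vx, vy) * math.hypot(tx, ty)
    if norm == 0.0:
        return False              # no motion, or already at the target
    cos_a = max(-1.0, min(1.0, (vx * tx + vy * ty) / norm))
    return math.degrees(math.acos(cos_a)) <= tol_deg

# Person moved from (0,0) to (1,0); the monitoring target is at (5,1).
heading_in = moving_toward((0.0, 0.0), (1.0, 0.0), (5.0, 1.0))
```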
- the monitoring target position is set in an area along the track of the platform of a certain station, and the detection target position is set in a part of the tracks around the station.
- the detection unit 13 detects, as a predetermined event, a state in which the train moves in the direction of the monitoring target position (station) at the detection target position.
- the monitoring camera 9 installed on the platform and monitoring the escalator is controlled so as to monitor the area along the track that is the monitoring target position.
- the monitoring target position, the detection target position, and the content of the predetermined event to be detected are not limited to the above examples.
- the detection unit 13 can specify the detection target position in the image by the same method as the monitoring position in the first embodiment.
- the detection unit 13 detects a predetermined event at the detection target position in the image by converting the common coordinates of the detection target position stored in the correspondence storage unit 15 into the camera coordinate system. Therefore, when detecting the predetermined event, the detection unit 13 can specify the ID of the detection target position at which the predetermined event was detected, based on the information stored in the correspondence storage unit 15.
- the ID of the detection target position where the predetermined event is detected is used, for example, by the camera control unit 14 described later.
- FIG. 7 is a diagram showing the relationship among the imaging area of the monitoring camera, the monitoring target position, and the detection target position.
- in the example of FIG. 7, the monitoring target position is set as a plane area on the floor surface (ground) in the real world, and the detection target position is set as a line segment on the floor surface (ground) in the real world.
- the monitoring camera 9 (# 1), the monitoring camera 9 (# 3), and the monitoring camera 9 (# 4) can image the monitoring target position, and the monitoring camera 9 (# 2) can image the detection target position.
- the detection unit 13 detects a predetermined event at the detection target position included in the image captured by the monitoring camera 9 (# 2).
- the correspondence storage unit 15 stores the following correspondence information.
- the correspondence storage unit 15 stores a plurality of pieces of correspondence information, each associating the identification information of a monitoring camera 9, the identification information of a detection target position that can be imaged by that monitoring camera 9, the common coordinates of the detection target position, the identification information of the monitoring target position corresponding to the detection target position, and the camera parameters for imaging the monitoring target position with that monitoring camera 9.
- FIG. 8 is a diagram illustrating an example of correspondence information stored in the correspondence storage unit 15.
- the correspondence information stored in the correspondence storage unit 15 shows a plurality of correspondence relationships among the camera ID of the monitoring camera 9, the ID of the detection target position, the common coordinates of the detection target position, the ID of the monitoring target position corresponding to the detection target position, and the camera parameters for imaging the monitoring target position with the monitoring camera 9.
- This camera parameter indicates a parameter for the monitoring camera 9 specified by the camera ID to image the monitoring target position specified by the ID of the monitoring target position, and indicates the posture of the monitoring camera 9, a zoom value, and the like.
- in FIG. 8, the entries for the monitoring camera 9 with the camera ID "001" indicate that it can image the monitoring target position "001".
- the entries for the monitoring camera 9 with the camera ID "003" indicate that it can image the monitoring target position "001" and the detection target position "002", but cannot image the monitoring target position "002" corresponding to the detection target position "002": the camera parameter "03" for imaging the monitoring target position "001" and the common coordinates of the detection target position "002" are held, but no camera parameter for imaging the monitoring target position "002" is held. The entries also indicate that the monitoring camera 9 with the camera ID "004" can image the monitoring target position "002".
- the information stored in the correspondence storage unit 15 is obtained by the input unit receiving the user's settings of the detection target position and the monitoring target position, as described above.
- the camera parameters for imaging the monitoring target position are calculated for each monitoring camera 9 by the calculation unit 18, based on the common coordinates of the monitoring target position input by the user.
- when the detection unit 13 detects a predetermined event, the camera control unit 14 refers to the correspondence relationship information stored in the correspondence storage unit 15 and thereby specifies the monitoring cameras 9 capable of imaging the monitoring target position corresponding to the detection target position where the predetermined event was detected.
- the camera control unit 14 acquires the identification information (ID) of the detection target position where the predetermined event was detected from the detection unit 13. From the correspondence information stored in the correspondence storage unit 15, the camera control unit 14 specifies the identification information (ID) of the monitoring target position corresponding to the acquired detection target position ID, and the camera IDs of the monitoring cameras 9 capable of imaging that monitoring target position.
- the camera control unit 14 can acquire a camera parameter capable of capturing an image of the monitoring target position for each camera ID of the specified monitoring camera 9.
- a predetermined event is detected at the detection target position specified by the detection target position ID “001” of the monitoring camera 9 specified by the camera ID “001”.
- the camera control unit 14 acquires the ID “001” of the monitoring target position corresponding to the detection target position ID “001”, and further the camera ID of the monitoring camera corresponding to the ID “001” of the monitoring target position. “001”, “002”, and “003” are acquired. Furthermore, the camera control unit 14 acquires camera parameters for each acquired camera ID.
- the camera control unit 14 controls the monitoring camera 9 specified by the acquired ID using the acquired camera parameter. For example, the camera control unit 14 controls the monitoring camera 9 with the camera ID “001” using the camera parameter “01”, and controls the monitoring camera 9 with the camera ID “002” using the camera parameter “02”. Then, the monitoring camera 9 with the camera ID “003” is controlled using the camera parameter “03”.
- the monitoring camera 9 with the camera ID “001” that acquired the image in which the predetermined event is detected is also controlled to face the direction of the monitoring target position.
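The lookup chain described above can be sketched against a reduced version of the FIG. 8 table; the row schema (camera ID, detection position ID, monitoring position ID, camera parameter, with None where a value is not held) is an assumption for illustration.

```python
# Hypothetical reduction of the FIG. 8 table to flat rows:
# (camera_id, detection_pos_id, monitoring_pos_id, camera_param).
# None marks a value that is not held for that camera; the schema is assumed.

rows = [
    ("001", "001", "001", "01"),
    ("002", None,  "001", "02"),
    ("003", None,  "001", "03"),
    ("003", "002", "002", None),   # sees the detection line, not the target
    ("004", None,  "002", "04"),
]

def cameras_for_detection(det_id, rows):
    """Resolve detection-position ID -> monitoring-position ID(s) -> the
    (camera_id, camera_param) pairs able to image those positions."""
    mon_ids = {m for _, d, m, _ in rows if d == det_id}
    return sorted((c, p) for c, d, m, p in rows if m in mon_ids and p is not None)

# Event detected at detection position "001" -> monitoring position "001" ->
# cameras "001", "002", "003" with parameters "01", "02", "03".
assignments = cameras_for_detection("001", rows)
```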
- FIG. 9 is a flowchart illustrating an operation example of the monitoring control device 10 according to the second embodiment.
- the video monitoring method in the second embodiment is executed by at least one computer (CPU 2), such as the monitoring control device 10. Since each step corresponds to the processing of the respective processing modules of the monitoring control device 10 described above, the details of each step are omitted as appropriate.
- the monitoring control device 10 receives an input of settings of the detection target position and the monitoring target position by the user (S91). For example, the monitoring control device 10 receives the settings of the detection target position and the monitoring target position in the camera coordinate system of a predetermined monitoring camera 9 or in the common coordinate system. When the detection target position and the monitoring target position are set in the camera coordinate system, the monitoring control device 10 converts each of them to the common coordinate system based on the camera parameters of the camera that captured the image on which they were set.
- the monitoring control device 10 calculates, from the common coordinates of the monitoring target position obtained from the input received in (S91), the camera parameters for imaging the monitoring target position for each monitoring camera 9 capable of imaging it (S92). This camera parameter calculation method is as described in the first embodiment.
- the monitoring control apparatus 10 determines the IDs of the detection target position and the monitoring target position set by the input received in (S91), and stores the IDs of the detection target position and the monitoring target position, the common coordinates of the detection target position, and the camera parameters for each monitoring camera 9 calculated in (S92) in the correspondence storage unit 15 in association with each other (S93).
- as a result, the correspondence storage unit 15 stores, in association with each other, the correspondence between the detection target position and the monitoring target position input by the user, the common coordinates of the detection target position, and the camera parameters for imaging the monitoring target position corresponding to the detection target position with each monitoring camera 9 capable of imaging it.
- the monitoring control device 10 acquires data of images captured by each monitoring camera 9 from each monitoring camera 9 (S94).
- the monitoring control apparatus 10 stores the acquired image data in the image storage unit 12 in association with the identification information of the monitoring camera 9 that captured the image.
- the monitoring control apparatus 10 further acquires camera parameters of the monitoring camera 9 that has captured the image, and stores the camera parameters in the image storage unit 12 in further association with the image data and the identification information of the monitoring camera 9.
- the monitoring control device 10 specifies the detection target position set in (S91) in the image acquired in (S94) or the image extracted from the image storage unit 12 (S95).
- for this specification, the camera parameters acquired in (S94) and the common coordinates of the detection target position stored in the correspondence storage unit 15 are used.
- the method for specifying the detection target position from the image is the same as the method for specifying the monitoring position in the first embodiment. Note that the process of S95 is executed every time the PTZ of the surveillance camera is controlled (every time the imaging range changes).
- the monitoring control device 10 detects a predetermined event at the detection target position specified in (S95) (S96).
- the monitoring control device 10 detects, as the predetermined event, one or more of, for example, the passage of a target object through the video monitoring line, a predetermined state of a target object in the monitoring area, and the movement of a target object along a specific route defined by a line segment.
- the monitoring control apparatus 10 may detect an abnormal state as a predetermined event. The contents of the predetermined event and the detection method of the predetermined event are as described in the first embodiment.
- the monitoring control device 10 selects, from among the plurality of monitoring cameras 9, the monitoring cameras 9 capable of imaging the monitoring target position corresponding to the detection target position specified in (S95) (S97).
- the method for selecting the monitoring camera 9 is also as described above.
- the monitoring control device 10 controls the monitoring camera 9 selected in (S97) so as to image the monitoring target position corresponding to the detection target position specified in (S95) (S98). For example, the monitoring control device 10 changes the imaging direction of the monitoring camera 9 selected in (S97) so that the monitoring target position can be imaged.
- the control method of the monitoring camera 9 is also as described above.
- FIG. 10 is a diagram conceptually illustrating an example of control of the monitoring camera 9.
- the monitoring control device 10 receives input of setting of the detection target position (video monitoring line) and the monitoring target position (S91). This input may be performed by a user's line segment designation operation and range designation operation for each image captured by each surveillance camera 9.
- the monitoring control device 10 calculates the common coordinates of the detection target position and the monitoring target position. Then, the monitoring control device 10 calculates camera parameters for imaging the monitoring target position with each of the monitoring cameras 9 (# 1), 9 (# 3), and 9 (# 4) capable of imaging the monitoring target position (S92).
- the monitoring control device 10 may hold the camera parameter in advance.
- the monitoring control device 10 stores the following correspondence information in the correspondence storage unit 15 (S93).
- the monitoring control device 10 stores correspondence information between the camera ID, the ID of the detection target position (video monitoring line), and the ID of the monitoring target position corresponding to the detection target position.
- for each of the monitoring cameras 9 (# 1), 9 (# 3), and 9 (# 4), the monitoring control device 10 stores correspondence information between the camera ID, the ID of the monitoring target position, and the camera parameter for imaging the monitoring target position.
- the monitoring control device 10 acquires data of an image captured by the monitoring camera 9 (# 2) and camera parameters at the time of capturing the image.
- the monitoring control device 10 may hold the camera parameter in advance.
- the monitoring control device 10 specifies a detection target position (video monitoring line) in the image (S95), and detects that a person has passed the video monitoring line as a predetermined event (S96).
- based on the correspondence relationship information stored in the correspondence storage unit 15, the monitoring control device 10 selects the monitoring cameras 9 (# 1), 9 (# 3), and 9 (# 4) as monitoring cameras capable of imaging the monitoring target position corresponding to the detection target position (S97). Then, the monitoring control device 10 controls the selected monitoring cameras 9 (# 1), 9 (# 3), and 9 (# 4) using the camera parameters included in the correspondence relationship information so that they image the monitoring target position (S98). As a result, as shown in FIG. 10, the monitoring target position is imaged by the monitoring cameras 9 (# 1), 9 (# 3), and 9 (# 4).
- a detection target position and a monitoring target position indicating different positions are acquired by user input, a predetermined event is detected at a detection target position in a captured image of a certain monitoring camera 9, and The other monitoring camera 9 is controlled so as to image the monitoring target position corresponding to the detection target position where the predetermined event is detected.
- thereby, a sign that something will happen at the monitoring target position can be caught by detecting a predetermined event at the detection target position, and the monitoring target position can be placed under monitoring already at that sign stage.
- a video monitoring line is set on the path to the escalator, and the system is configured to detect, as an abnormal condition, the event that the number of people passing the line per unit time exceeds a predetermined number.
- when this event is detected, the monitoring and control apparatus 10 in the second embodiment raises an alert and controls a plurality of monitoring cameras 9 installed around the escalator so that they concentrate on monitoring the entrance and exit of the escalator. By controlling the monitoring cameras 9 before something happens at the monitoring target position, the situation occurring at the monitoring target position can be reliably monitored without being missed.
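The escalator example, detecting the abnormal condition that line crossings per unit time exceed a threshold, can be sketched with a sliding time window; the threshold and window length below are illustrative assumptions:

```python
from collections import deque

class CrossingRateMonitor:
    """Detects the event 'more than `threshold` people crossed the video
    monitoring line within the last `window_sec` seconds'."""

    def __init__(self, threshold=10, window_sec=60.0):
        self.threshold = threshold
        self.window_sec = window_sec
        self.crossings = deque()  # timestamps of detected line crossings

    def record_crossing(self, timestamp):
        self.crossings.append(timestamp)

    def event_detected(self, now):
        # Drop crossings that fell out of the sliding window.
        while self.crossings and now - self.crossings[0] > self.window_sec:
            self.crossings.popleft()
        return len(self.crossings) > self.threshold

mon = CrossingRateMonitor(threshold=3, window_sec=10.0)
for t in [1.0, 2.0, 3.0, 4.0]:          # four crossings within 10 s
    mon.record_crossing(t)
alert_now = mon.event_detected(5.0)      # 4 > 3, so the event fires
alert_later = mon.event_detected(20.0)   # all crossings expired, no event
```

When `event_detected` returns true, the monitoring control device would raise the alert and issue the camera-control commands described above.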
- camera parameters for imaging the monitoring position are automatically calculated, for each monitoring camera 9 capable of imaging the monitoring position, from the common coordinates of the monitoring position output by the input unit 17.
- the correspondence storage unit 15 stores identification information (ID), coordinate information of the common coordinate system, and camera parameters for each monitoring position.
- however, the common coordinates of each monitoring position need not be used.
- the input unit 17 receives an input of setting of one monitoring position that can be imaged by two movable monitoring cameras 9.
- the user performs an operation of designating the monitoring position for each image captured by the two monitoring cameras 9.
- the input unit 17 acquires camera coordinate information (coordinates in the image) of the monitoring position specified by user input for the image of one monitoring camera 9 and camera parameters at the time of capturing the image. Further, the input unit 17 acquires camera coordinate information (coordinates in the image) of the monitoring position designated by user input for the image of the other monitoring camera 9 and camera parameters at the time of capturing the image.
- the correspondence storage unit 15 stores, in association with each other, the ID of the monitoring position, the camera IDs of the two monitoring cameras 9, the camera parameters of each monitoring camera 9 for imaging the monitoring position, and the camera coordinate information of the monitoring position.
- the detection unit 13 can detect a predetermined event at the monitoring position in the image of each monitoring camera 9 using the stored information. In this case, changes to the camera parameters of each movable monitoring camera 9 are allowed within a range that includes the camera parameters stored in the correspondence storage unit 15.
- the camera control unit 14 can control each monitoring camera 9 using this stored information so as to capture an image of the monitoring position where the predetermined event is detected.
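A hypothetical sketch of how the stored information in this modification could be organized, with each monitoring position keeping only per-camera in-image coordinates and camera parameters and no common coordinate system; all field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class CameraView:
    """One camera's view of a monitoring position (no common coordinates)."""
    camera_id: str
    image_xy: tuple   # camera coordinates of the position within the image
    params: dict      # camera parameters at the time the user designated it

@dataclass
class MonitoringPosition:
    position_id: str
    views: list       # one CameraView per camera that can image the position

    def cameras(self):
        return [v.camera_id for v in self.views]

    def params_for(self, camera_id):
        """Parameters the camera control unit 14 would apply to this camera."""
        for v in self.views:
            if v.camera_id == camera_id:
                return v.params
        raise KeyError(camera_id)

pos = MonitoringPosition(
    "pos-7",
    [CameraView("cam#1", (320, 180), {"pan": 12.0, "tilt": -4.0, "zoom": 1.2}),
     CameraView("cam#2", (101, 455), {"pan": -30.0, "tilt": -8.0, "zoom": 2.5})],
)
```

Because each record is purely per-camera, no calibration into a shared world coordinate system is needed, at the cost of designating the position once per camera.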
- the second embodiment can be similarly modified. That is, in the second embodiment, the common coordinates of the detection target position and the monitoring target position may not be used.
- the input unit 17 acquires the camera parameters of each monitoring camera 9 at the time of imaging the monitoring target position. The input unit 17 further acquires, for each monitoring camera 9 capable of imaging the detection target position, camera coordinate information (in-image coordinates) of the detection target position, and the association between the detection target position and the monitoring target position.
- the correspondence storage unit 15 stores correspondence information comprising the correspondence (ID pair) between the detection target position and the monitoring target position, the camera IDs of the monitoring cameras 9 capable of imaging the detection target position or the monitoring target position, and the camera parameters for imaging the monitoring target position.
- the input unit 17 may acquire the image feature amount of the monitoring area instead of the coordinate information of the monitoring area.
- the detection unit 13 detects an area similar to the image feature amount of the monitoring area in the acquired image, and specifies the detected area as the monitoring area.
- the correspondence storage unit may store the image feature amount of the monitoring area input by the user.
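As an illustration of matching by image features instead of coordinates, the sketch below locates a monitoring area by exhaustive template search using sum of absolute differences; a production system would use more robust feature descriptors, and the tiny grayscale arrays here are placeholders:

```python
def find_area(image, template):
    """Return the (row, col) offset in `image` whose window best matches
    `template` by sum of absolute differences (smaller = more similar)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# 4x5 "frame" containing the 2x2 "monitoring area" pattern at offset (1, 2)
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 8, 0],
    [0, 0, 7, 9, 0],
    [0, 0, 0, 0, 0],
]
area_template = [[9, 8], [7, 9]]
location = find_area(frame, area_template)
```

The returned offset would play the role of the "detected area specified as the monitoring area" in the detection unit 13's processing.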
- the third embodiment may be a program that causes at least one computer to execute the video monitoring method, or a recording medium readable by the at least one computer on which such a program is recorded.
- FIG. 11 is a diagram conceptually illustrating a processing configuration example of the video monitoring system 100 in the third embodiment.
- the video monitoring system 100 includes a detection unit 101 and a control unit 102.
- the video monitoring system 100 shown in FIG. 11 can be realized as the above-described monitoring control apparatus 10 shown in FIG. In this case, the video monitoring system 100 has the same hardware configuration as the monitoring control device 10 shown in FIG.
- FIG. 12 is a diagram conceptually illustrating a hardware configuration example of the video monitoring system 100 in the third embodiment.
- the video monitoring system 100 may be realized as a monitoring camera 9 (#n) as shown in FIG.
- the video monitoring system 100 (monitoring camera 9 (#n)) includes a CPU 2, a memory 3, an input / output interface (I / F) 4, a communication unit 5, and the like, and controls the monitoring camera 9 (#n) itself and the other monitoring cameras 9.
- the surveillance camera 9 (#n) in this case is a so-called intelligent camera.
- the hardware configuration of the video monitoring system 100 in the third embodiment is not limited to the examples of FIGS. 1 and 12; the video monitoring system 100 may be realized by both the monitoring control device 10 and a monitoring camera 9 (#n).
- the detection unit 101 and the control unit 102 are realized, for example, by the CPU 2 executing a program stored in the memory 3. The program may be installed from a portable recording medium such as a CD or a memory card, or from another computer on the network, via the input / output I / F 4 or the communication unit 5, and stored in the memory 3.
- the video monitoring system 100 is realized by both the monitoring control device 10 and the monitoring camera 9 (#n)
- the detection unit 101 is realized by the monitoring control device 10
- the control unit 102 is realized by the monitoring camera 9 (# 2).
- the detection unit 101 detects a predetermined event based on an image captured by the first imaging device (for example, the monitoring camera 9 (# 2)).
- the detection unit 101 corresponds to the detection unit 13 described above.
- the contents of the predetermined event and the detection method of the predetermined event are as described above, and are not limited.
- the first imaging device may be a fixed monitoring camera 9 or a movable monitoring camera 9.
- the control unit 102 controls the second imaging device so that the second imaging device (for example, the monitoring camera 9 (# 1)) images the predetermined position after the detection unit 101 detects the predetermined event.
- the “predetermined position” is a predetermined monitoring position, and is set to an arbitrary static point, line, plane, or space in the real world.
- the control unit 102 corresponds to the above-described camera control unit 14.
- the method for controlling the second imaging device is as described above and is not limited.
- the video monitoring system 100 may not include the acquisition unit 11, the image storage unit 12, the correspondence storage unit 15, and the output processing unit 16 shown in FIG. 2. These processing modules that the video monitoring system 100 does not have are provided by other computers, and the video monitoring system 100 can cooperate with these processing modules by communicating with the other computers.
- FIG. 13 is a flowchart showing an operation example of the video monitoring system 100 in the third embodiment.
- the video monitoring method in the third embodiment is executed by at least one computer such as the video monitoring system 100.
- each illustrated process is executed by each processing module included in the video monitoring system 100.
- the video monitoring method in the present embodiment includes detecting a predetermined event based on an image captured by the first imaging device (for example, the monitoring camera 9 (# 2)) (S131) and, after the predetermined event is detected (S132; YES), controlling the second imaging device (for example, the monitoring camera 9 (# 1)) so that the second imaging device images a predetermined position (S133).
- FIG. 14 is a conceptual diagram of a stadium to which the video surveillance system 1 is applied.
- the video surveillance system 1 described above is applied to a stadium where a large number of people gather as shown in FIG.
- a plurality of monitoring cameras 9 are installed at positions where the spectator seats, passages, entrances and the like can be imaged, and a plurality of areas where a large number of people can exist are set as the monitoring areas.
- the monitoring control device 10 detects, as a predetermined event, that a plurality of persons (crowds) have changed their states simultaneously in the monitoring area.
- the predetermined event is handled as an abnormal state.
- the monitoring and control apparatus 10 detects, as a predetermined event (abnormal state), that the crowd appearing in the image of the monitoring camera 9 imaging the spectator seats has suddenly started to run in all directions from a certain point.
- FIG. 15 is a diagram showing a specific example of a predetermined event.
- a state in which the person D1 is holding a weapon is shown.
- the monitoring control apparatus 10 detects, as a predetermined event, that a plurality of persons appearing in the image have started running outward, away from the person D1 at the center.
- the monitoring control device 10 (camera control unit 14) controls, all at once, all the monitoring cameras 9 that are installed in the vicinity of the monitoring area where the predetermined event (abnormal state) was detected and that are capable of imaging that monitoring area, so that they image the monitoring area.
- as a result, the cause of the abnormal state, for example a person who suddenly raised a weapon (see FIG. 15), can be imaged from various directions, which makes it easier to identify the criminal in subsequent verification.
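A simplified, illustrative way to test for the "crowd suddenly running outward from a point" event is to check whether most displacement vectors point away from a candidate center (here, person D1's position); real systems would estimate the motion from video, for example by optical flow, and the positions and agreement threshold below are invented for the sketch:

```python
def diverging_from(center, tracks, min_fraction=0.8):
    """tracks: list of ((x0, y0), (x1, y1)) person positions at two instants.
    True if at least `min_fraction` of the people moved away from `center`,
    i.e. their displacement has a positive dot product with the outward
    direction from the center to their starting position."""
    outward = 0
    for (x0, y0), (x1, y1) in tracks:
        dx, dy = x1 - x0, y1 - y0                 # displacement
        ox, oy = x0 - center[0], y0 - center[1]   # outward direction
        if dx * ox + dy * oy > 0:
            outward += 1
    return bool(tracks) and outward / len(tracks) >= min_fraction

center = (5.0, 5.0)   # e.g. the position of person D1
fleeing = [((6, 5), (8, 5)), ((5, 7), (5, 9)),
           ((3, 5), (1, 5)), ((5, 3), (5, 1))]
calm = [((6, 5), (6, 5.1)), ((4, 5), (4.5, 5)),
        ((5, 7), (5, 6)), ((5, 3), (5, 4))]
panic_event = diverging_from(center, fleeing)   # everyone runs outward
no_event = diverging_from(center, calm)
```

A positive result at some center point would be treated as the abnormal state that triggers the simultaneous camera control described above.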
- a plurality of areas where a large number of people can exist are set as detection target areas, and a plurality of monitoring target areas corresponding to the detection target areas are set.
- the vicinity of the spectator seats is set as the detection target area, and the plurality of entrances / exits are set as the monitoring target areas.
- the monitoring control device 10 detects, as a predetermined event (abnormal state), that a plurality of persons (crowds) have changed their states at once in the detection target region.
- the monitoring control device 10 detects, as a predetermined event (abnormal state), that the crowd shown in the image of the monitoring camera 9 imaging the spectator seats has started to run outward from a certain point all at once.
- the monitoring control device 10 (camera control unit 14) selects a plurality of monitoring cameras 9 capable of imaging the monitoring target area (entrance / exit) corresponding to the detection target area where the predetermined event (abnormal state) is detected.
- the monitoring control apparatus 10 controls all of the selected plurality of monitoring cameras 9 so as to image the monitoring target area (entrance / exit).
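For the stadium example, the mapping from a detection target area to the cameras covering its monitoring target areas (the entrances / exits) could look like this hypothetical sketch; the area names and the camera registry are invented for illustration:

```python
# Hypothetical sketch: when an abnormal state is detected near the spectator
# seats, select every camera that can image the corresponding exits.
AREA_MAP = {  # detection target area -> monitoring target areas (exits)
    "stand-north": ["gate-1", "gate-2"],
    "stand-south": ["gate-3"],
}
CAMERAS_BY_AREA = {  # monitoring target area -> cameras able to image it
    "gate-1": ["cam#5", "cam#6"],
    "gate-2": ["cam#7"],
    "gate-3": ["cam#8", "cam#9"],
}

def cameras_to_redirect(detection_area):
    """Select all cameras covering the exits linked to the area where the
    abnormal state was detected; they would then be controlled all at once."""
    selected = []
    for gate in AREA_MAP[detection_area]:
        selected.extend(CAMERAS_BY_AREA[gate])
    return selected

redirected = cameras_to_redirect("stand-north")
```

Fleeing persons must pass through an exit, so pointing every exit camera at the gates gives the later verification step frontal footage from multiple viewpoints.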
- 1. A video surveillance system comprising: detection means for detecting a predetermined event based on an image captured by a first imaging device; and control means for controlling a second imaging device so that the second imaging device images a predetermined position after the detection of the predetermined event.
- 2. The video surveillance system according to 1, wherein, after the detection of the predetermined event, the control means selects a movable imaging device capable of imaging the predetermined position from among a plurality of movable imaging devices whose imaging direction can be changed, controls the selected movable imaging device, and excludes the unselected movable imaging devices from the control targets at the time of detection of the predetermined event.
- 3. The video surveillance system according to 1 or 2, wherein the detection means detects the predetermined event at the predetermined position included in the image captured by the first imaging device.
- 4. The video surveillance system according to 1 or 2, wherein the detection means detects the predetermined event at another predetermined position, different from the predetermined position, included in the image captured by the first imaging device.
- 5. The video surveillance system according to 4, further comprising a correspondence storage unit that stores a plurality of pieces of correspondence information between a predetermined detection target position where the predetermined event is detected and the predetermined monitoring target position, among a plurality of predetermined monitoring target positions, corresponding to that detection target position, wherein, after the detection of the predetermined event, the control means identifies, by referring to the correspondence information, the second imaging device capable of imaging the predetermined monitoring target position corresponding to the predetermined detection target position where the predetermined event was detected.
- 6. The video surveillance system according to any one of 1 to 5, wherein the detection means detects, as the predetermined event, passage of an object through a video monitoring line or a predetermined situation in a monitoring area of an object.
- 7. The video surveillance system according to 6, wherein the detection means detects, as the predetermined event, that passage of an object through the video monitoring line or a predetermined situation in the monitoring area of the object has occurred for a plurality of objects within a predetermined time interval.
- 8. A video surveillance method executed by at least one computer, including: detecting a predetermined event based on an image captured by a first imaging device; and, after the detection of the predetermined event, controlling a second imaging device so that the second imaging device images a predetermined position.
- 9. The video surveillance method according to 8, further including: selecting a movable imaging device capable of imaging the predetermined position from among a plurality of movable imaging devices whose imaging direction can be changed; and excluding the unselected movable imaging devices from the control targets at the time of detection of the predetermined event, wherein the controlling of the second imaging device controls the selected movable imaging device as the second imaging device.
- 10. The video surveillance method according to 8 or 9, wherein the detecting of the predetermined event detects the predetermined event at the predetermined position included in the image captured by the first imaging device.
- 11. The video surveillance method according to 8 or 9, wherein the detecting of the predetermined event detects the predetermined event at another predetermined position, different from the predetermined position, included in the image captured by the first imaging device.
- 12. The video surveillance method according to 11, further including: referring to a correspondence storage unit that stores a plurality of pieces of correspondence information between a predetermined detection target position where the predetermined event is detected and the predetermined monitoring target position, among a plurality of predetermined monitoring target positions, corresponding to that detection target position; and identifying the second imaging device capable of imaging the predetermined monitoring target position corresponding to the predetermined detection target position where the predetermined event was detected.
- 13. The video surveillance method according to any one of 8 to 12, wherein the detecting of the predetermined event detects, as the predetermined event, passage of an object through a video monitoring line or a predetermined situation in a monitoring area of an object.
- 14. The video surveillance method according to 13, wherein the detecting of the predetermined event detects, as the predetermined event, that passage of an object through the video monitoring line or a predetermined situation in the monitoring area of the object has occurred for a plurality of objects within a predetermined time interval.
Abstract
Description
There are monitoring systems that, when one surveillance camera detects an object such as a person or a vehicle, control other surveillance cameras. With such a monitoring system, a plurality of surveillance cameras can cooperate with each other to monitor intruders and the like exhaustively. However, such systems are designed for prohibited areas that people are forbidden to enter in the first place, or for scenes where the appearance of people is limited to some extent, and therefore they start controlling other cameras simply upon detecting a person. Consequently, in scenes where many people come and go or crowds form, such as city streets, large commercial facilities, important facilities such as airports, terminal stations, and platforms, leisure facilities, sports facilities, and stadiums, the surveillance cameras would be controlled every time a person is detected, and monitoring work cannot be performed efficiently.
[System Configuration]
FIG. 1 is a diagram conceptually showing a hardware configuration example of the video surveillance system 1 (hereinafter sometimes abbreviated as the system 1) in the first embodiment. The system 1 includes a monitoring control device 10 and a plurality of surveillance cameras 9 (#1), 9 (#2) to 9 (#n), and the like.
FIG. 2 is a diagram conceptually showing a processing configuration example of the monitoring control device 10 in the first embodiment. As shown in FIG. 2, the monitoring control device 10 includes an acquisition unit 11, an image storage unit 12, a detection unit 13, a camera control unit 14, a correspondence storage unit 15, an output processing unit 16, an input unit 17, a calculation unit 18, and the like. Each of these processing modules is realized, for example, by the CPU 2 executing a program stored in the memory 3. The program may be installed from a portable recording medium such as a CD (Compact Disc) or a memory card, or from another computer on a network, via the input/output I/F 4 or the communication unit 5, and stored in the memory 3.
The video surveillance method in the first embodiment will now be described with reference to FIGS. 5 and 6. FIG. 5 is a flowchart showing an operation example of the monitoring control device 10 in the first embodiment. As shown in FIG. 5, the video surveillance method in the first embodiment is executed by at least one computer (CPU 2) such as the monitoring control device 10. Since each step corresponds to the processing of the above-described processing modules of the monitoring control device 10, the details of each step are omitted as appropriate.
As described above, in the first embodiment a predetermined event is detected at a monitoring position (for example, the position of a video monitoring line or of a monitoring area) included in an image captured by a certain surveillance camera 9. In response to this detection, surveillance cameras 9 other than the camera that captured the image in which the predetermined event was detected are controlled so as to image that position. As a result, the monitoring position where the predetermined event was detected is imaged by two or more surveillance cameras 9. Therefore, according to the first embodiment, the state of the monitoring position (for example, the position of a video monitoring line or of a monitoring area) when the predetermined event occurred, and the state of the predetermined event itself, can be monitored in detail from multiple directions.
In the first embodiment described above, when a predetermined event is detected at a monitoring position in the captured image of a certain surveillance camera 9, the other surveillance cameras 9 are controlled so as to image that monitoring position. In the second embodiment, when an event is detected in the captured image of a certain surveillance camera 9, the other surveillance cameras 9 are controlled so as to image a position different from the position where the event was detected. The video surveillance system 1 in the second embodiment will be described below, focusing on what differs from the first embodiment; contents similar to the first embodiment are omitted as appropriate.
The processing configuration of the monitoring control device 10 in the second embodiment is the same as in the first embodiment (see FIG. 2). The processing contents described below differ from the first embodiment.
The video surveillance method in the second embodiment will now be described with reference to FIGS. 9 and 10. FIG. 9 is a flowchart showing an operation example of the monitoring control device 10 in the second embodiment. As shown in FIG. 9, the video surveillance method in the second embodiment is executed by at least one computer (CPU 2) such as the monitoring control device 10. Since each step corresponds to the processing of the above-described processing modules of the monitoring control device 10, the details of each step are omitted as appropriate.
As described above, in the second embodiment a detection target position and a monitoring target position indicating different positions are acquired by user input, a predetermined event is detected at the detection target position in the captured image of a certain surveillance camera 9, and the other surveillance cameras 9 are controlled so as to image the monitoring target position corresponding to the detection target position where the predetermined event was detected.
In the first embodiment described above, camera parameters for imaging the monitoring position were automatically calculated (by the calculation unit 18), for each surveillance camera 9 capable of imaging that position, from the common coordinates of the monitoring position output by the input unit 17. The correspondence storage unit 15 then stored, for each monitoring position, identification information (ID), coordinate information in the common coordinate system, and camera parameters. In the first embodiment described above, however, the common coordinates of the monitoring position need not be used.
The video surveillance system and the video surveillance method in the third embodiment will now be described with reference to FIGS. 11, 12, and 13. The third embodiment may also be a program that causes at least one computer to execute this video surveillance method, or a recording medium readable by the at least one computer on which such a program is recorded.
Application examples of the video surveillance systems 1 and 100 (hereinafter collectively referred to by reference numeral 1) in the above-described embodiments are shown below. However, the application of the above-described embodiments is not limited to the following examples.
Claims (9)
- A video surveillance system comprising: detection means for detecting a predetermined event based on an image captured by a first imaging device; and control means for controlling a second imaging device so that the second imaging device images a predetermined position after the detection of the predetermined event.
- The video surveillance system according to claim 1, wherein, after the detection of the predetermined event, the control means selects a movable imaging device capable of imaging the predetermined position from among a plurality of movable imaging devices whose imaging direction can be changed, controls the selected movable imaging device, and excludes the unselected movable imaging devices from the control targets at the time of detection of the predetermined event.
- The video surveillance system according to claim 1 or 2, wherein the detection means detects the predetermined event at the predetermined position included in the image captured by the first imaging device.
- The video surveillance system according to claim 1 or 2, wherein the detection means detects the predetermined event at another predetermined position, different from the predetermined position, included in the image captured by the first imaging device.
- The video surveillance system according to claim 4, further comprising a correspondence storage unit that stores a plurality of pieces of correspondence information between a predetermined detection target position where the predetermined event is detected and the predetermined monitoring target position, among a plurality of predetermined monitoring target positions, corresponding to that detection target position, wherein, after the detection of the predetermined event, the control means identifies, by referring to the correspondence information, the second imaging device capable of imaging the predetermined monitoring target position corresponding to the predetermined detection target position where the predetermined event was detected.
- The video surveillance system according to any one of claims 1 to 5, wherein the detection means detects, as the predetermined event, passage of an object through a video monitoring line or a predetermined situation in a monitoring area of an object.
- The video surveillance system according to claim 6, wherein the detection means detects, as the predetermined event, that passage of an object through the video monitoring line or a predetermined situation in the monitoring area of the object has occurred for a plurality of objects within a predetermined time interval.
- A video surveillance method executed by at least one computer, including: detecting a predetermined event based on an image captured by a first imaging device; and, after the detection of the predetermined event, controlling a second imaging device so that the second imaging device images a predetermined position.
- A program causing at least one computer to execute the video surveillance method according to claim 8.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017508858A JP6631619B2 (ja) | 2015-03-27 | 2015-03-27 | 映像監視システム及び映像監視方法 |
PCT/JP2015/059737 WO2016157327A1 (ja) | 2015-03-27 | 2015-03-27 | 映像監視システム及び映像監視方法 |
US15/561,572 US11019268B2 (en) | 2015-03-27 | 2015-03-27 | Video surveillance system and video surveillance method |
US16/288,195 US11228715B2 (en) | 2015-03-27 | 2019-02-28 | Video surveillance system and video surveillance method |
US17/462,808 US20210400200A1 (en) | 2015-03-27 | 2021-08-31 | Video surveillance system and video surveillance method |
US18/201,989 US20230300466A1 (en) | 2015-03-27 | 2023-05-25 | Video surveillance system and video surveillance method |
US18/238,054 US20230403470A1 (en) | 2015-03-27 | 2023-08-25 | Video surveillance system and video surveillance method |
US18/239,626 US20230412925A1 (en) | 2015-03-27 | 2023-08-29 | Video surveillance system and video surveillance method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/059737 WO2016157327A1 (ja) | 2015-03-27 | 2015-03-27 | 映像監視システム及び映像監視方法 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/561,572 A-371-Of-International US11019268B2 (en) | 2015-03-27 | 2015-03-27 | Video surveillance system and video surveillance method |
US16/288,195 Continuation US11228715B2 (en) | 2015-03-27 | 2019-02-28 | Video surveillance system and video surveillance method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016157327A1 true WO2016157327A1 (ja) | 2016-10-06 |
Family
ID=57004862
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/059737 WO2016157327A1 (ja) | 2015-03-27 | 2015-03-27 | 映像監視システム及び映像監視方法 |
Country Status (3)
Country | Link |
---|---|
US (6) | US11019268B2 (ja) |
JP (1) | JP6631619B2 (ja) |
WO (1) | WO2016157327A1 (ja) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018160822A (ja) * | 2017-03-23 | 2018-10-11 | セコム株式会社 | 監視システム |
JP2018173501A (ja) * | 2017-03-31 | 2018-11-08 | サクサ株式会社 | 表示制御装置及び表示制御方法 |
JP2019050452A (ja) * | 2017-09-07 | 2019-03-28 | キヤノン株式会社 | 制御装置、制御方法、プログラム、及び監視システム |
WO2019065757A1 (ja) * | 2017-09-26 | 2019-04-04 | ソニーセミコンダクタソリューションズ株式会社 | 情報処理システム |
JP2019153986A (ja) * | 2018-03-06 | 2019-09-12 | キヤノン株式会社 | 監視システム、管理装置、監視方法、コンピュータプログラム、及び記憶媒体 |
JP2020184292A (ja) * | 2018-05-04 | 2020-11-12 | ゴリラ・テクノロジー・インコーポレイテッドGorilla Technology Inc. | 分散型対象追跡システム |
CN113068000A (zh) * | 2019-12-16 | 2021-07-02 | 杭州海康威视数字技术股份有限公司 | 视频目标的监控方法、装置、设备、***及存储介质 |
CN113469021A (zh) * | 2021-06-29 | 2021-10-01 | 深圳市商汤科技有限公司 | 视频处理及装置、电子设备及计算机可读存储介质 |
CN114619443A (zh) * | 2020-12-14 | 2022-06-14 | 苏州大学 | 机器人作业空间设定方法及机器人主动安全*** |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6701018B2 (ja) * | 2016-07-19 | 2020-05-27 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
US10785458B2 (en) * | 2017-03-24 | 2020-09-22 | Blackberry Limited | Method and system for distributed camera network |
US10438465B1 (en) * | 2017-03-28 | 2019-10-08 | Alarm.Com Incorporated | Camera enhanced with light detecting sensor |
US10250812B2 (en) * | 2017-05-17 | 2019-04-02 | Caterpillar Inc. | Display system for machine |
JP2019067813A (ja) * | 2017-09-28 | 2019-04-25 | 株式会社デンソー | 半導体モジュール |
CN108759834B (zh) * | 2018-04-28 | 2023-03-21 | 温州大学激光与光电智能制造研究院 | 一种基于全局视觉的定位方法 |
US11281909B2 (en) * | 2019-07-12 | 2022-03-22 | Timothy Kephart | System and method for analyzing graffiti and tracking graffiti vandals |
JP6780057B1 (ja) * | 2019-05-17 | 2020-11-04 | キヤノン株式会社 | 情報処理装置、情報処理方法、及びプログラム |
KR102040939B1 (ko) * | 2019-07-15 | 2019-11-27 | 한화테크윈 주식회사 | 감시 시스템 및 그 동작 방법 |
US20220343743A1 (en) * | 2019-08-22 | 2022-10-27 | Nec Corporation | Display control apparatus, display control method, and program |
US11386764B2 (en) * | 2019-10-01 | 2022-07-12 | Deere & Company | Detecting objects in a restricted zone |
TWI735201B (zh) * | 2020-04-09 | 2021-08-01 | 台灣高速鐵路股份有限公司 | 軌道車廂的車端牆顯示裝置 |
JP7440332B2 (ja) * | 2020-04-21 | 2024-02-28 | 株式会社日立製作所 | 事象解析システムおよび事象解析方法 |
US11368991B2 (en) | 2020-06-16 | 2022-06-21 | At&T Intellectual Property I, L.P. | Facilitation of prioritization of accessibility of media |
US11233979B2 (en) | 2020-06-18 | 2022-01-25 | At&T Intellectual Property I, L.P. | Facilitation of collaborative monitoring of an event |
US11411757B2 (en) | 2020-06-26 | 2022-08-09 | At&T Intellectual Property I, L.P. | Facilitation of predictive assisted access to content |
US11184517B1 (en) | 2020-06-26 | 2021-11-23 | At&T Intellectual Property I, L.P. | Facilitation of collaborative camera field of view mapping |
US11463739B2 (en) | 2020-06-29 | 2022-10-04 | Seagate Technology Llc | Parameter based load balancing in a distributed surveillance system |
US11343544B2 (en) | 2020-06-29 | 2022-05-24 | Seagate Technology Llc | Selective use of cameras in a distributed surveillance system |
US11503381B2 (en) | 2020-06-29 | 2022-11-15 | Seagate Technology Llc | Distributed surveillance system with abstracted functional layers |
US20210409817A1 (en) * | 2020-06-29 | 2021-12-30 | Seagate Technology Llc | Low latency browser based client interface for a distributed surveillance system |
US11356349B2 (en) | 2020-07-17 | 2022-06-07 | At&T Intellectual Property I, L.P. | Adaptive resource allocation to facilitate device mobility and management of uncertainty in communications |
US11768082B2 (en) | 2020-07-20 | 2023-09-26 | At&T Intellectual Property I, L.P. | Facilitation of predictive simulation of planned environment |
CN114900669A (zh) * | 2020-10-30 | 2022-08-12 | 深圳市商汤科技有限公司 | 场景监测方法、装置、电子设备及存储介质 |
CN113393523B (zh) * | 2021-06-04 | 2023-03-14 | 上海蓝色帛缔智能工程有限公司 | 一种自动化监控机房图像的方法、装置及电子设备 |
DE102021207641A1 (de) | 2021-07-16 | 2023-01-19 | Robert Bosch Gesellschaft mit beschränkter Haftung | Überwachungsvorrichtung mit einer Mehrzahl an Kameras, Verfahren und Computerprogramm zur Überwachung |
CN113824927B (zh) * | 2021-08-18 | 2023-07-25 | 浙江大华技术股份有限公司 | 云台自动巡航的控制方法、装置、电子装置和存储介质 |
DE102021213211A1 (de) * | 2021-11-24 | 2023-05-25 | Robert Bosch Gesellschaft mit beschränkter Haftung | Verfahren zur Analyse von potentiellen Ereignissen, Vorrichtung zur Analyse von potentiellen Ereignissen, Computerprogramm sowie Speichermedium |
CN115914575A (zh) * | 2022-11-11 | 2023-04-04 | 菲尼克斯(南京)智能制造技术工程有限公司 | 一种设备工况捕捉***及方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002092751A (ja) * | 2000-09-18 | 2002-03-29 | Matsushita Electric Ind Co Ltd | 監視システム |
JP2012212407A (ja) * | 2011-03-31 | 2012-11-01 | Sogo Keibi Hosho Co Ltd | 状態判定装置、状態判定方法およびプログラム |
JP2015002553A (ja) * | 2013-06-18 | 2015-01-05 | キヤノン株式会社 | 情報処理システムおよびその制御方法 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0816898A (ja) | 1994-06-24 | 1996-01-19 | Sanden Corp | 自動販売機 |
JPH11152034A (ja) | 1997-11-20 | 1999-06-08 | Fujitsu General Ltd | 列車監視システム |
JP2000272863A (ja) | 1999-03-24 | 2000-10-03 | Hitachi Ltd | 乗客コンベアの監視装置 |
JP4568009B2 (ja) * | 2003-04-22 | 2010-10-27 | パナソニック株式会社 | カメラ連携による監視装置 |
JP3800217B2 (ja) * | 2003-10-10 | 2006-07-26 | コニカミノルタホールディングス株式会社 | 監視システム |
JP2005333628A (ja) | 2004-04-23 | 2005-12-02 | Toa Corp | カメラ制御装置およびこれを用いた監視カメラシステム |
JP4685390B2 (ja) | 2004-09-07 | 2011-05-18 | 株式会社日立国際電気 | 監視カメラシステム |
WO2006106496A1 (en) * | 2005-04-03 | 2006-10-12 | Nice Systems Ltd. | Apparatus and methods for the semi-automatic tracking and examining of an object or an event in a monitored site |
US8199009B2 (en) | 2007-06-08 | 2012-06-12 | Bas Strategic Solutions, Inc. | Method and system for administering remote area monitoring system |
US8675074B2 (en) * | 2007-07-20 | 2014-03-18 | Honeywell International Inc. | Custom video composites for surveillance applications |
JP2009255654A (ja) | 2008-04-14 | 2009-11-05 | Toshiba Corp | 列車運行監視システム |
JP2010128727A (ja) | 2008-11-27 | 2010-06-10 | Hitachi Kokusai Electric Inc | 画像処理装置 |
US8254633B1 (en) * | 2009-04-21 | 2012-08-28 | Videomining Corporation | Method and system for finding correspondence between face camera views and behavior camera views |
CN102754435A (zh) * | 2010-03-15 | 2012-10-24 | 欧姆龙株式会社 | 监视摄像机终端 |
JP6149357B2 (ja) | 2012-07-25 | 2017-06-21 | 日本電気株式会社 | 防護発報無線システム |
JP5935617B2 (ja) | 2012-09-14 | 2016-06-15 | オムロン株式会社 | 画像処理装置、移動体の状態判定方法、および移動体の状態判定プログラム |
WO2014122879A1 (ja) * | 2013-02-05 | 2014-08-14 | 日本電気株式会社 | 解析処理システム |
US9762865B2 (en) * | 2013-03-15 | 2017-09-12 | James Carey | Video identification and analytical recognition system |
CA2910492C (en) * | 2013-05-17 | 2021-03-30 | International Electronic Machines Corporation | Operations monitoring in an area |
US20140362225A1 (en) * | 2013-06-11 | 2014-12-11 | Honeywell International Inc. | Video Tagging for Dynamic Tracking |
JP5866499B2 (ja) * | 2014-02-24 | 2016-02-17 | パナソニックIpマネジメント株式会社 | 監視カメラシステム及び監視カメラシステムの制御方法 |
US20150334299A1 (en) * | 2014-05-14 | 2015-11-19 | Panasonic Intellectual Property Management Co., Ltd. | Monitoring system |
KR102174839B1 (ko) * | 2014-12-26 | 2020-11-05 | 삼성전자주식회사 | 보안 시스템 및 그 운영 방법 및 장치 |
-
2015
- 2015-03-27 JP JP2017508858A patent/JP6631619B2/ja active Active
- 2015-03-27 US US15/561,572 patent/US11019268B2/en active Active
- 2015-03-27 WO PCT/JP2015/059737 patent/WO2016157327A1/ja active Application Filing
-
2019
- 2019-02-28 US US16/288,195 patent/US11228715B2/en active Active
-
2021
- 2021-08-31 US US17/462,808 patent/US20210400200A1/en active Pending
-
2023
- 2023-05-25 US US18/201,989 patent/US20230300466A1/en active Pending
- 2023-08-25 US US18/238,054 patent/US20230403470A1/en active Pending
- 2023-08-29 US US18/239,626 patent/US20230412925A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002092751A (ja) * | 2000-09-18 | 2002-03-29 | Matsushita Electric Ind Co Ltd | Monitoring system |
JP2012212407A (ja) * | 2011-03-31 | 2012-11-01 | Sogo Keibi Hosho Co Ltd | State determination device, state determination method, and program |
JP2015002553A (ja) * | 2013-06-18 | 2015-01-05 | Canon Inc | Information processing system and control method therefor |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018160822A (ja) * | 2017-03-23 | 2018-10-11 | Secom Co., Ltd. | Monitoring system |
JP2018173501A (ja) * | 2017-03-31 | 2018-11-08 | Saxa Inc. | Display control device and display control method |
JP7062879B2 (ja) | 2017-03-31 | 2022-05-09 | Saxa Inc. | Display control device and display control method |
JP2019050452A (ja) * | 2017-09-07 | 2019-03-28 | Canon Inc | Control device, control method, program, and monitoring system |
CN111034171B (zh) * | 2017-09-26 | 2022-05-17 | Sony Semiconductor Solutions Corporation | Information processing system |
WO2019065757A1 (ja) * | 2017-09-26 | 2019-04-04 | Sony Semiconductor Solutions Corporation | Information processing system |
CN111034171A (zh) * | 2017-09-26 | 2020-04-17 | Sony Semiconductor Solutions Corporation | Information processing system |
JPWO2019065757A1 (ja) * | 2017-09-26 | 2020-11-19 | Sony Semiconductor Solutions Corporation | Information processing system |
JP7369623B2 (ja) | 2017-09-26 | 2023-10-26 | Sony Semiconductor Solutions Corporation | Information processing system and information processing method |
JP2019153986A (ja) * | 2018-03-06 | 2019-09-12 | Canon Inc | Monitoring system, management device, monitoring method, computer program, and storage medium |
JP7146416B2 (ja) | 2018-03-06 | 2022-10-04 | Canon Inc | Information processing device, information processing system, information processing method, and program |
JP2020184292A (ja) * | 2018-05-04 | 2020-11-12 | Gorilla Technology Inc. | Distributed object tracking system |
CN113068000A (zh) * | 2019-12-16 | 2021-07-02 | Hangzhou Hikvision Digital Technology Co., Ltd. | Video target monitoring method, apparatus, device, system, and storage medium |
CN114619443A (zh) * | 2020-12-14 | 2022-06-14 | Soochow University | Robot workspace setting method and robot active safety system |
CN113469021A (zh) * | 2021-06-29 | 2021-10-01 | Shenzhen SenseTime Technology Co., Ltd. | Video processing method and apparatus, electronic device, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20190199932A1 (en) | 2019-06-27 |
US20230403470A1 (en) | 2023-12-14 |
US20230300466A1 (en) | 2023-09-21 |
US20210400200A1 (en) | 2021-12-23 |
US11228715B2 (en) | 2022-01-18 |
US20180091741A1 (en) | 2018-03-29 |
US20230412925A1 (en) | 2023-12-21 |
JP6631619B2 (ja) | 2020-01-15 |
US11019268B2 (en) | 2021-05-25 |
JPWO2016157327A1 (ja) | 2018-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016157327A1 (ja) | Video monitoring system and video monitoring method | |
JP7173196B2 (ja) | Image processing device, image processing method, and program | |
US6690374B2 (en) | Security camera system for tracking moving objects in both forward and reverse directions | |
US7436887B2 (en) | Method and apparatus for video frame sequence-based object tracking | |
KR101530255B1 (ko) | CCTV system equipped with an automatic object tracking device | |
JP2006523043A (ja) | Method and system for performing surveillance | |
JP6403687B2 (ja) | Monitoring system | |
JP2023126352A (ja) | Program, video monitoring method, and video monitoring system | |
JP2009027393A (ja) | Video search system and person search method | |
JP5771039B2 (ja) | Left-behind person detection device | |
CN105592301A (zh) | Image pickup apparatus, control method therefor, and surveillance camera system | |
JPWO2008035411A1 (ja) | Moving object information detection device, moving object information detection method, and moving object information detection program | |
KR101656642B1 (ko) | Method for analyzing group behavior using video | |
JP6253950B2 (ja) | Image monitoring system | |
CN112131915A (zh) | Face attendance system, camera, and code stream device | |
TWI514890B (zh) | Monitoring method of panoramic surveillance video | |
JP2005284652A (ja) | Video monitoring method and apparatus using motion vectors | |
KR20230152410A (ko) | Video analysis apparatus using multiple cameras and a moving camera | |
KR20220114819A (ko) | System and method for real-time tracking of objects in dynamic camera video | |
WO2005120070A2 (en) | Method and system for performing surveillance | |
JP7101080B2 (ja) | Image processing device | |
EP4280187A1 (en) | Methods and systems for reducing redundant alarm notifications in a security system | |
KR101623331A (ko) | Method for detecting moving objects and capturing close-up zoom images using video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15887480; Country of ref document: EP; Kind code of ref document: A1 |
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
ENP | Entry into the national phase | Ref document number: 2017508858; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 15561572; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 15887480; Country of ref document: EP; Kind code of ref document: A1 |