WO2016185738A1 - Image analysis device, image analysis method, and image analysis program - Google Patents

Image analysis device, image analysis method, and image analysis program

Info

Publication number
WO2016185738A1
WO2016185738A1 (PCT/JP2016/053347)
Authority
WO
WIPO (PCT)
Prior art keywords
bed
person
watched
image
behavior
Prior art date
Application number
PCT/JP2016/053347
Other languages
English (en)
Japanese (ja)
Inventor
Shuichi Matsumoto (松本 修一)
Original Assignee
Noritsu Precision Co., Ltd. (ノーリツプレシジョン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Noritsu Precision Co., Ltd. (ノーリツプレシジョン株式会社)
Priority to JP2017518775A (JP6607253B2)
Publication of WO2016185738A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G: TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G12/00: Accommodation for nursing, e.g. in hospitals, not covered by groups A61G1/00 - A61G11/00, e.g. trolleys for transport of medicaments or food; Prescription lists
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium
    • G08B25/04: Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium, using a single signalling line, e.g. in a closed loop
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to an image analysis apparatus, an image analysis method, and an image analysis program.
  • Patent Document 1 proposes a method of detecting the behavior of the watching target person based on whether or not at least a part of the body of the watching target has deviated from a preset monitoring area.
  • a rectangular monitoring region is set in a predetermined range on the bed surface, and a hemispherical monitoring region is set in a range covering the entire bed.
  • Patent Document 2 proposes a method of extracting a foreground area from a captured image including the depth of each pixel, and estimating the behavior of the person being watched over based on the positional relationship between the extracted foreground area and the bed or the like.
  • Each estimation condition is set on the assumption that the extracted foreground area relates to the behavior of the person being watched over; by determining whether or not the positional relationship between the extracted foreground area and the bed satisfies each estimation condition, the behavior of the person being watched over can be detected.
  • Rising, the end sitting position, going over the fence, falling, and getting out of bed of the person being watched over are exemplified as the behaviors to be detected.
  • Slipping down refers to an action in which the person being watched over changes from being on the bed to having fallen below the bed, taking, for example, a period of about several minutes.
  • For example, an elderly person whose muscle strength has declined so much that getting up unaided is difficult may, when trying to leave the bed alone, lower their feet off the bed and try to get up while clinging to the bed fence. At this time, the elderly person may be unable to raise their upper body and, still holding the bed fence, may gradually slide out of the bed starting from the lower body. Such a series of movements, in which the person gradually slides out of the bed, corresponds to the slipping-down behavior of the person being watched over.
  • The present invention has been made in consideration of such points, and an object of the present invention is to provide a technique for detecting that the person being watched over has slipped down from the bed.
  • the present invention adopts the following configuration in order to solve the above-described problems.
  • An image analysis device according to one aspect of the present invention includes: an image acquisition unit that acquires a captured image captured by an imaging device installed to watch over the behavior of the person being watched over in bed; a foreground extraction unit that extracts a foreground region of the acquired captured image by calculating the difference between the acquired captured image and a background image set as the background of the captured image; and a behavior detection unit that detects the behavior of the person being watched over in bed by analyzing the behavior of the person being watched over based on the position of the extracted foreground region.
  • The behavior detection unit determines whether or not a foreground region of a predetermined size or larger continuously appears, for a predetermined time or longer, in a slip-down detection area that is set to detect the slipping down of the person being watched over from the bed and that lies within a predetermined height range from the bed surface at the bedside; when it determines that such a foreground region has continuously appeared for the predetermined time or longer, it detects the slipping down of the person being watched over from the bed.
  • According to the above configuration, a captured image captured by an imaging device installed to watch over the behavior of the person being watched over in bed is acquired, and the difference between the acquired captured image and the background image set as the background of the captured image is calculated.
  • a foreground region is extracted from the acquired captured image (so-called background subtraction method).
  • That is, the foreground region is extracted at places where a change from the background image has occurred. Therefore, when the person being watched over performs some action, a foreground region corresponding to the person being watched over is extracted at the location to which the person has moved by that action. Accordingly, the behavior of the person being watched over can be analyzed based on the position of the extracted foreground region.
  • As described above, slipping down is an action in which the person being watched over gradually changes, for example over a period of several minutes, from lying on the bed to lying on the floor below the bed. Therefore, when the person being watched over slips down from the bed, a foreground region of a predetermined size or larger corresponding to the person being watched over can continuously appear within a predetermined height range from the bed surface at the bedside.
  • Under the above configuration, the slip-down detection area is set within a predetermined height range from the bed surface at the bedside. It is then determined whether or not a foreground region of a predetermined size or larger continuously appears in the slip-down detection area for a predetermined time or longer. When it is determined that such a foreground region has continuously appeared for the predetermined time or longer, the slipping down of the person being watched over from the bed is detected. Therefore, according to the above configuration, a technique for detecting the slipping down of the person being watched over from the bed can be provided.
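The timing condition described above can be summarised as a small state check. The following is a minimal sketch in Python, assuming the size of the foreground region found inside the slip-down detection area is supplied once per captured image; the class and parameter names are illustrative and do not come from the publication.

```python
import time

class SlipDownDetector:
    """Detect slipping down: a foreground region of at least `min_area`
    must appear continuously in the slip-down detection area for at
    least `min_duration` seconds."""

    def __init__(self, min_area, min_duration):
        self.min_area = min_area          # minimum foreground size (e.g. real-space area)
        self.min_duration = min_duration  # required continuous duration in seconds
        self._first_seen = None           # time the condition first became true

    def update(self, foreground_area_in_dd, now=None):
        """Call once per captured image; returns True when slipping down is detected."""
        now = time.monotonic() if now is None else now
        if foreground_area_in_dd >= self.min_area:
            if self._first_seen is None:
                self._first_seen = now
            # detected only after the region has appeared continuously long enough
            return (now - self._first_seen) >= self.min_duration
        # the region disappeared, so the continuity is broken and the timer resets
        self._first_seen = None
        return False
```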
  • In the image analysis device according to the above aspect, the behavior detection unit may determine, based on the analysis of the extracted foreground region, whether or not a first condition, that a person other than the person being watched over is present at the bedside, and a second condition, that the person being watched over is performing a behavior other than sleeping on the bed, are satisfied. When the behavior detection unit determines that at least one of the first condition and the second condition is satisfied, it may stop the execution of the determination process for detecting the slipping down of the person being watched over from the bed; when it determines that neither of the two conditions is satisfied, it may start the execution of that determination process.
  • As described above, the slip-down detection area for detecting the slipping down of the person being watched over from the bed is set at the bedside. Therefore, if a person other than the person being watched over is present at the bedside, the foreground region corresponding to that other person may be mistaken for the foreground region corresponding to the person being watched over, and the slipping down may be erroneously detected.
  • Also, when the person being watched over is in a state other than sleeping on the bed (for example, when the person being watched over is getting up on the bed, has left the bed, and so on), a foreground region may appear in the slip-down detection area through an action other than slipping down. Therefore, when the person being watched over is in a state other than sleeping on the bed, an action other than slipping down may be erroneously detected as slipping down. Furthermore, in such a case there is almost no possibility that the person being watched over will slip down from the bed, so the need to detect slipping down is extremely low.
  • Under the above configuration, it is determined whether or not the first condition, that a person other than the person being watched over is present at the bedside, and the second condition, that the person being watched over is performing an action other than sleeping on the bed, are satisfied. When at least one of the first condition and the second condition is satisfied, the execution of the determination process for detecting the slipping down of the person being watched over from the bed is stopped. On the other hand, when neither the first condition nor the second condition is satisfied, the execution of that determination process is started. The erroneous detection of slipping down that can arise in the cases described above can thereby be prevented.
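The gating of the determination process by the two conditions can be expressed roughly as follows, assuming the two condition tests are produced elsewhere by the foreground analysis; the mode names echo the state transition of FIG. 15 but the identifiers are illustrative.

```python
from enum import Enum

class SlipDownMode(Enum):
    RUNNING = "running"      # determination process for slip-down detection is executed
    SUSPENDED = "suspended"  # determination process is stopped

def next_mode(third_party_at_bedside: bool, target_not_sleeping: bool) -> SlipDownMode:
    """First condition : a person other than the watching target is at the bedside.
    Second condition: the watching target performs a behavior other than sleeping.
    The determination process runs only when neither condition is satisfied."""
    if third_party_at_bedside or target_not_sleeping:
        return SlipDownMode.SUSPENDED
    return SlipDownMode.RUNNING
```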
  • The image analysis device according to the above aspect may further include a background image update unit that, when it is determined that neither the first condition nor the second condition is satisfied and the execution of the determination process for detecting the slipping down of the person being watched over from the bed is started, updates the background image by setting the captured image acquired at that time as the background image.
  • While the execution of the determination process is stopped, the state of the background may change in the bedside area. For example, a case where a person other than the person being watched over places a wheelchair, a drip stand, or the like at the bedside corresponds to such a change of the background state in the bedside area. If the determination process were resumed with the old background image in this situation, a foreground region would appear in the slip-down detection area set at the bedside, and the slipping down of the person being watched over from the bed could be erroneously detected.
  • Under the above configuration, when the execution of the determination process for detecting the slipping down of the person being watched over is started, the captured image acquired at that time is set as the background image used for extracting the foreground region; the background image is thereby updated. Therefore, even if the background state changes in the bedside area for the reasons described above, the background image is replaced with a captured image obtained after that change, and the changed portion of the background is prevented from being extracted as a foreground region. The erroneous detection of slipping down that can arise in the cases described above can thus be prevented.
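A sketch of the background refresh that accompanies the resumption of the determination process, assuming depth images are held as NumPy arrays (an assumption, since the publication does not name a data structure):

```python
import numpy as np

def refresh_background_on_resume(previous_mode: str, new_mode: str,
                                 current_depth_frame: np.ndarray,
                                 background_depth: np.ndarray) -> np.ndarray:
    """When the determination process changes from suspended to running, adopt
    the captured image acquired at that moment as the new background image;
    otherwise keep the existing background."""
    if previous_mode == "suspended" and new_mode == "running":
        return current_depth_frame.copy()
    return background_depth
```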
  • The slip-down detection area may be set at a position separated from the bed by at least the thickness of the futon.
  • As described above, the slip-down detection area for detecting the slipping down of the person being watched over from the bed is set at the bedside, so when the futon protrudes from the bed a foreground region corresponding to the futon may appear there. Under the above configuration, the slip-down detection area is set at a position separated from the bed by at least the thickness of the futon; therefore, erroneous detection of slipping down caused by the futon protruding from the bed can be prevented.
  • the image acquisition unit may acquire a captured image including depth data indicating the depth of each pixel in the captured image.
  • the foreground extraction unit may extract the foreground region of the acquired captured image by calculating a difference between the background image including depth data and the acquired captured image.
  • The behavior detection unit may detect the behavior of the person being watched over in bed by analyzing the behavior of the person being watched over based on the position in real space of the foreground region, which is specified by referring to the depth of each pixel included in the extracted foreground region.
  • the captured image and the background image each include depth data indicating the depth of each pixel.
  • The depth of each pixel expresses the depth of the subject captured in that pixel, so the position of the subject in real space can be specified from the depth data. Under the above configuration, the positions in real space of the objects present in the watched space, such as the person being watched over and the bed, can therefore be specified without depending on the installation location or viewing direction of the imaging device. The behavior of the person being watched over can thus be detected accurately, without being restricted by the installation conditions of the imaging device.
  • The image analysis device according to the above aspect may further include a notification unit that, when the slipping down of the person being watched over from the bed is detected, issues a notification indicating that the person being watched over is in a state of having slipped down from the bed. According to this configuration, it can be notified that the slipping down of the person being watched over from the bed has been detected.
  • the notification destination and the notification method can be appropriately selected according to the embodiment.
  • the notification may be made to the watcher.
  • a watcher is a person who watches over the behavior of the person being watched over. When the person to be watched is an inpatient, a facility resident, a care recipient, etc., the watcher is, for example, a nurse, a facility staff, a caregiver, or the like.
  • As other aspects of the present invention, the above configurations may be realized as an information processing system, an information processing method, or a program, or as a storage medium that records such a program and that can be read by a computer or by another device, machine, or the like.
  • the computer-readable recording medium is a medium that stores information such as programs by electrical, magnetic, optical, mechanical, or chemical action.
  • the information processing system may be realized by one or a plurality of information processing devices.
  • An image analysis method according to one aspect of the present invention is an information processing method in which a computer executes: a step of acquiring a captured image captured by an imaging device installed to watch over the behavior of the person being watched over in bed; a step of extracting a foreground region of the acquired captured image by calculating the difference between the acquired captured image and a background image set as the background of the captured image; and a step of detecting the behavior of the person being watched over in bed by analyzing the behavior of the person being watched over based on the position of the extracted foreground region. In the step of detecting the behavior, the slipping down of the person being watched over from the bed is detected by determining whether or not a foreground region of a predetermined size or larger continuously appears, for a predetermined time or longer, in a slip-down detection area that is set to detect that slipping down and that lies within a predetermined height range from the bed surface at the bedside.
  • An image analysis program according to one aspect of the present invention is a program for causing a computer to execute: a step of acquiring a captured image captured by an imaging device installed to watch over the behavior of the person being watched over in bed; a step of extracting a foreground region of the acquired captured image by calculating the difference between the acquired captured image and a background image set as the background of the captured image; and a step of detecting the behavior of the person being watched over in bed by analyzing the behavior of the person being watched over based on the position of the extracted foreground region. In the step of detecting the behavior, the slipping down of the person being watched over from the bed is detected by determining whether or not a foreground region of a predetermined size or larger continuously appears, for a predetermined time or longer, in a slip-down detection area set within a predetermined height range from the bed surface at the bedside.
  • FIG. 1 schematically illustrates an example of a scene to which the present invention is applied.
  • FIG. 2 illustrates a hardware configuration of the image analysis apparatus according to the embodiment.
  • FIG. 3 illustrates the relationship between the depth acquired by the camera according to the embodiment and the subject.
  • FIG. 4 illustrates a functional configuration of the image analysis apparatus according to the embodiment.
  • FIG. 5 exemplifies a processing procedure related to behavior analysis of the watching target person in the image analysis apparatus according to the embodiment.
  • FIG. 6 illustrates a captured image acquired by the camera according to the embodiment.
  • FIG. 7 illustrates the coordinate relationship in the captured image according to the embodiment.
  • FIG. 8 illustrates the positional relationship between an arbitrary point (pixel) of the captured image and the camera in the real space according to the embodiment.
  • FIG. 9 exemplifies a three-dimensional distribution of the subject within the shooting range specified based on the depth data included in the shot image.
  • FIG. 10 illustrates a three-dimensional distribution of the foreground region extracted from the captured image.
  • FIG. 11A schematically illustrates a scene in which the rising of the person being watched over on the bed is detected.
  • FIG. 11B schematically illustrates a scene in which the rising of the person being watched over on the bed is detected.
  • FIG. 12A schematically illustrates a scene in which an end sitting position in the bed of the person being watched over is detected.
  • FIG. 12B schematically illustrates a scene in which the end sitting position of the person being watched over in the bed is detected.
  • FIG. 13A schematically illustrates a scene in which a person being watched is detected getting out of bed.
  • FIG. 13B schematically illustrates a scene in which the person being watched over is detected getting out of bed.
  • FIG. 14A schematically illustrates a scene in which the slipping down of the person being watched over from the bed is detected.
  • FIG. 14B schematically illustrates a scene in which the slipping down of the person being watched over from the bed is detected.
  • FIG. 15 is a state transition diagram illustrating a change in the determination processing mode for slip detection.
  • FIG. 16 exemplifies a processing procedure related to the control of the determination processing mode for slip detection.
  • FIG. 17A illustrates a scene where a third party exists on the bedside.
  • FIG. 17B illustrates a scene where a third party exists on the bedside.
  • this embodiment will be described with reference to the drawings.
  • this embodiment described below is only an illustration of the present invention in all respects. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in implementing the present invention, a specific configuration according to the embodiment may be adopted as appropriate.
  • Although the data appearing in the present embodiment are described in natural language, more specifically they are specified in a pseudo-language, commands, parameters, machine language, or the like that can be recognized by a computer.
  • FIG. 1 schematically illustrates a scene where the image analysis apparatus 1 according to the present embodiment is used.
  • The image analysis apparatus 1 according to the present embodiment is an information processing apparatus that detects the behavior of the person being watched over by photographing the person being watched over with the camera 2 and analyzing the captured image 3 obtained thereby. Therefore, the image analysis apparatus 1 according to the present embodiment can be widely used in scenes where a person being watched over is watched.
  • the image analysis apparatus 1 photographs a person to be watched by a camera 2.
  • the camera 2 corresponds to the “photographing device” of the present invention.
  • the person to be watched over is a person to be watched over, for example, an inpatient, a facility resident, or a care recipient.
  • the person to be watched is sleeping on the bed, for example, and the camera 2 is installed to photograph the action of the person to be watched in such a state.
  • the camera 2 may be arranged at any location as long as it can shoot the range in which the person to be watched is watched.
  • The image analysis apparatus 1 acquires the captured image 3 captured by such a camera 2. The image analysis apparatus 1 then extracts the foreground region of the acquired captured image 3 by calculating the difference between the acquired captured image 3 and a background image (the background image 4 described later) set as the background of the captured image 3. That is, the image analysis apparatus 1 extracts the foreground region of the captured image 3 based on the so-called background subtraction method.
  • the foreground area is extracted at a place where there is a difference between the photographed image 3 and the background image, in other words, a place where a change has occurred from the background image. Therefore, when the watching target person performs some action, a foreground region corresponding to the watching target person is extracted at a location where the watching target person has moved due to the action. For example, as illustrated in FIG. 1, when the watching target person gets up on the bed, the foreground area appears in a predetermined height range on the bed surface. Therefore, the image analysis device 1 detects the behavior of the watching target person in the bed by analyzing the behavior of the watching target person based on the extracted position of the foreground region.
  • As described above, slipping down is an action in which the person being watched over gradually changes, over a relatively long time, from lying on the bed to lying below the bed. Therefore, when the person being watched over slips down from the bed, their body gradually shifts at the bedside from the top of the bed toward the floor below. Consequently, along the path the body of the person being watched over travels, in other words within a predetermined height range from the bed surface at the bedside, a foreground region of a predetermined size or larger corresponding to a body part of the person being watched over can continuously appear.
  • Therefore, in the present embodiment, a slip-down detection area (the slip-down detection area DD described later) is set within a predetermined height range from the bed surface at the bedside. The image analysis apparatus 1 determines whether or not a foreground region of a predetermined size or larger continuously appears in the slip-down detection area for a predetermined time or longer. When it determines that such a foreground region has continuously appeared for the predetermined time or longer, the image analysis apparatus 1 detects the slipping down of the person being watched over from the bed.
  • the arrangement location of the image analysis apparatus 1 can be appropriately determined according to the embodiment as long as the captured image 3 can be acquired from the camera 2.
  • the image analysis apparatus 1 may be disposed so as to be close to the camera 2 as illustrated in FIG.
  • the image analysis apparatus 1 may be connected to the camera 2 via a network, or may be disposed at a place completely different from the camera 2.
  • FIG. 2 illustrates the hardware configuration of the image analysis apparatus 1 according to the present embodiment.
  • As illustrated in FIG. 2, the image analysis apparatus 1 is a computer in which a control unit 11 including a CPU, a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, a storage unit 12 that stores the program 5 executed by the control unit 11 and other data, a touch panel display 13 for displaying and inputting images, a speaker 14 for outputting sound, an external interface 15 for connecting to external devices, a communication interface 16 for communicating via a network, and a drive 17 for reading a program stored in a storage medium 6 are electrically connected.
  • the communication interface and the external interface are described as “communication I / F” and “external I / F”, respectively.
  • the components can be omitted, replaced, and added as appropriate according to the embodiment.
  • the control unit 11 may include a plurality of processors.
  • the touch panel display 13 may be replaced with an input device and a display device that are separately connected independently.
  • the speaker 14 may be omitted.
  • the speaker 14 may be connected to the image analysis device 1 as an external device instead of as an internal device of the image analysis device 1.
  • the image analysis apparatus 1 may incorporate a camera 2.
  • the image analysis device 1 may include a plurality of external interfaces 15 and may be connected to a plurality of external devices.
  • the camera 2 is connected to the image analysis apparatus 1 via the external interface 15.
  • This camera 2 corresponds to the photographing apparatus of the present invention.
  • the camera 2 according to the present embodiment includes a depth sensor 21 for measuring the depth of the subject.
  • the type and measurement method of the depth sensor 21 may be appropriately selected according to the embodiment.
  • The depth sensor 21 may be a sensor of the TOF (Time of Flight) method or the like.
  • the configuration of the camera 2 does not have to be limited to such an example, and can be appropriately selected according to the embodiment.
  • the camera 2 may be a stereo camera. Since the stereo camera shoots the subject within the shooting range from a plurality of different directions, the depth of the subject can be recorded.
  • the camera 2 may be replaced with the depth sensor 21 alone.
  • the depth sensor 21 may be an infrared depth sensor that measures the depth based on infrared irradiation so that the depth can be acquired without being affected by the brightness of the shooting location.
  • Relatively inexpensive imaging devices including such an infrared depth sensor include Kinect from Microsoft, Xtion from ASUS, and Structure Sensor from Occipital.
  • FIG. 3 shows an example of a distance that can be handled as the depth according to the present embodiment.
  • the depth represents the depth of the subject.
  • As illustrated in FIG. 3, the depth of the subject may be expressed, for example, as the straight-line distance A between the camera 2 and the object, or as the perpendicular distance B from the horizontal axis of the camera 2 to the subject.
  • the depth according to the present embodiment may be the distance A or the distance B.
  • the distance B is treated as the depth.
  • The distance A and the distance B can be converted into each other by using, for example, the Pythagorean theorem. Therefore, the following description using the distance B can be applied to the distance A as it is.
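Since A is the hypotenuse and B one leg of a right triangle, the conversion mentioned above can be written as follows, where r denotes the lateral offset of the subject from the camera's optical axis (r is introduced here only for illustration and is not named in the text):

```latex
A = \sqrt{B^{2} + r^{2}}, \qquad B = \sqrt{A^{2} - r^{2}}
```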
  • the image analysis apparatus 1 according to the present embodiment can specify the position of the subject in the real space.
  • This camera 2 is installed so that the area around the bed can be photographed in order to watch the behavior of the person being watched on in the bed.
  • the camera 2 is arranged so as to photograph the person being watched over from the footboard side of the bed.
  • the camera 2 is configured to be able to measure the depth of the subject.
  • the position of the subject in the real space can be specified without depending on the installation location and the shooting direction of the camera 2. Therefore, the installation location and the shooting direction of the camera 2 do not have to be limited to such an example, and may be appropriately selected according to the embodiment as long as the area around the bed can be shot.
  • the storage unit 12 stores the program 5.
  • the program 5 is a program for causing the image analysis apparatus 1 to execute each process related to setting of a background image, which will be described later, and corresponds to the “image analysis program” of the present invention.
  • the program 5 may be recorded on the storage medium 6.
  • The storage medium 6 is a medium that accumulates information such as programs by electrical, magnetic, optical, mechanical, or chemical action so that the recorded information, such as programs, can be read by a computer or by another device or machine.
  • the storage medium 6 corresponds to the “storage medium” of the present invention.
  • FIG. 2 illustrates a disk-type storage medium such as a CD (Compact Disk) or a DVD (Digital Versatile Disk) as an example of the storage medium 6.
  • the type of the storage medium 6 is not limited to the disk type and may be other than the disk type. Examples of the storage medium other than the disk type include a semiconductor memory such as a flash memory.
  • an image analysis apparatus 1 may be, for example, an apparatus designed exclusively for a service to be provided, or a general-purpose apparatus such as a PC (Personal Computer) or a tablet terminal. Furthermore, the image analysis apparatus 1 may be implemented by one or a plurality of computers.
  • FIG. 4 illustrates a functional configuration of the image analysis apparatus 1 according to the present embodiment.
  • the control unit 11 of the image analysis device 1 expands the program 5 stored in the storage unit 12 in the RAM.
  • The control unit 11 then interprets and executes the program 5 expanded in the RAM, thereby controlling each component.
  • the image analysis apparatus 1 functions as a computer including the image acquisition unit 111, the foreground extraction unit 112, the behavior detection unit 113, the background update unit 114, and the notification unit 115.
  • the image acquisition unit 111 acquires a captured image 3 captured by the camera 2. Since the camera 2 includes the depth sensor 21, the acquired captured image 3 includes depth data indicating the depth of each pixel in the captured image 3.
  • the foreground extraction unit 112 extracts the foreground region of the acquired captured image 3 by calculating the difference between the acquired captured image 3 and the background image 4 set as the background of the captured image 3. That is, the foreground extraction unit 112 extracts a foreground region that is a region changed from the background in the captured image 3 based on a so-called background subtraction method.
  • the behavior detection unit 113 detects the behavior of the person being watched over based on the analysis of the extracted foreground region.
  • As described above, the foreground region is extracted at places where a change from the background image 4 has occurred. Therefore, when the person being watched over performs some action, a foreground region corresponding to the person being watched over is extracted at the location to which the person has moved by that action. Accordingly, the behavior detection unit 113 detects the behavior of the person being watched over in bed by analyzing the behavior of the person being watched over based on the position of the extracted foreground region.
  • the behavior detection unit 113 is configured to be able to detect a fall of the watching target person from the bed. Specifically, in order to detect a fall of the person being watched over from the bed, a detection area for the fall is set within a predetermined height range from the bed surface on the bedside. This slip-down detection area may be set by the user or may be set in advance on the system.
  • the behavior detection unit 113 determines whether or not a foreground region having a predetermined size or more continuously appears in the detection region for slipping down for a predetermined time or more. When the behavior detection unit 113 determines that a foreground area of a predetermined size or more has continuously appeared in the detection area for sliding down for a predetermined time or longer, the behavior detecting unit 113 detects the falling of the watching target person from the bed.
  • the background update unit 114 updates the background image 4 used by the foreground extraction unit 112 with the acquired captured image 3.
  • Specifically, when starting the execution of the determination process for detecting the slipping down of the person being watched over from the bed, the background update unit 114 sets the captured image 3 acquired at that time as the background image 4, whereby the background image 4 held in the image analysis apparatus 1 is updated.
  • When the slipping down of the person being watched over from the bed is detected, the notification unit 115 issues a notification indicating that the person being watched over is in a state of having slipped down from the bed.
  • FIG. 5 illustrates a processing procedure related to the behavior analysis of the person being watched over by the image analysis apparatus 1.
  • the processing procedure related to the behavior analysis of the person to be watched described below corresponds to the “image analysis method” of the present invention.
  • the processing procedure related to the behavior analysis of the watching target person described below is merely an example, and each process may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
  • Step S101: In step S101, the control unit 11 functions as the image acquisition unit 111 and acquires the captured image 3 captured by the camera 2. When the captured image 3 has been acquired, the control unit 11 advances the process to the next step S102.
  • the camera 2 includes the depth sensor 21. Therefore, the captured image 3 acquired in this step S101 includes depth data indicating the depth of each pixel.
  • the control unit 11 acquires the captured image 3 illustrated in FIG. 6 as the captured image 3 including the depth data.
  • FIG. 6 shows an example of the captured image 3 including depth data.
  • the captured image 3 illustrated in FIG. 6 is an image in which the gray value of each pixel is determined according to the depth of each pixel.
  • a black pixel is closer to the camera 2.
  • a white pixel is farther from the camera 2.
  • This captured image 3 may be referred to as a depth image.
  • the control unit 11 can specify the position of each pixel in the real space. That is, the control unit 11 can specify the position in the three-dimensional space (real space) of the subject captured in each pixel from the coordinates (two-dimensional information) and the depth of each pixel in the captured image 3.
  • FIG. 7 illustrates the coordinate relationship in the captured image 3.
  • FIG. 8 illustrates the positional relationship in real space between an arbitrary pixel (point s) of the captured image 3 and the camera 2. The left-right direction of the paper surface of FIG. 7 corresponds to the direction perpendicular to the paper surface of FIG. 8. That is, the length of the captured image 3 that appears in FIG. 8 corresponds to the length in the vertical direction (H pixels) illustrated in FIG. 7, and the length in the horizontal direction (W pixels) illustrated in FIG. 7 corresponds to the length of the captured image 3 that does not appear in FIG. 8.
  • Assume that the coordinates of an arbitrary pixel (point s) of the captured image 3 are (x_s, y_s), the horizontal angle of view of the camera 2 is V_x, and the vertical angle of view is V_y. Further, assume that the number of pixels of the captured image 3 in the horizontal direction is W, the number of pixels in the vertical direction is H, and the coordinates of the center point (pixel) of the captured image 3 are (0, 0).
  • In the present embodiment, the control unit 11 can acquire information indicating the angle of view (V_x, V_y) of the camera 2 from the camera 2.
  • However, the method of acquiring the information indicating the angle of view (V_x, V_y) of the camera 2 is not limited to such an example; the control unit 11 may acquire this information based on user input, or it may be acquired as a preset setting value.
  • the control unit 11 can acquire the coordinates (x_s, y_s) of the point s and the number of pixels (W × H) of the captured image 3 from the captured image 3.
  • the control unit 11 can acquire the depth Ds of the point s by referring to the depth data included in the captured image 3.
  • The control unit 11 can specify the position of each pixel (point s) in the real space by using these pieces of information. For example, the control unit 11 can calculate the vector S = (S_x, S_y, S_z, 1) from the camera 2 to the point s in the camera coordinate system illustrated in FIG. 8. Thereby, the position of the point s in the two-dimensional coordinate system of the captured image 3 and the position of the point s in the camera coordinate system can be converted into each other.
  • the vector S is a vector of a three-dimensional coordinate system centered on the camera 2.
  • The camera 2 may be tilted with respect to the horizontal direction. That is, the camera coordinate system may be tilted from the world coordinate system in the three-dimensional space (real space). Therefore, the control unit 11 may convert the vector S in the camera coordinate system into a vector in the world coordinate system by applying to the vector S a projective transformation using the roll angle, the pitch angle (illustrated in FIG. 8), and the yaw angle of the camera 2, and may thereby calculate the position of the point s in the world coordinate system.
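The relational expressions themselves are not reproduced in this text; the sketch below therefore uses the standard pinhole-camera relationship between pixel coordinates, angle of view, and depth, which is an assumption consistent with the description above, followed by the rotation into the world coordinate system.

```python
import numpy as np

def pixel_to_camera_coords(x_s, y_s, depth, W, H, Vx, Vy):
    """Convert a pixel (x_s, y_s), measured from the image centre, with depth
    `depth` into homogeneous camera coordinates. W x H is the image size in
    pixels and (Vx, Vy) are the horizontal/vertical angles of view in radians.
    Pinhole-camera assumption, not the publication's own expressions."""
    s_x = depth * np.tan(Vx / 2) * (x_s / (W / 2))
    s_y = depth * np.tan(Vy / 2) * (y_s / (H / 2))
    return np.array([s_x, s_y, depth, 1.0])

def camera_to_world(point_cam, roll, pitch, yaw):
    """Rotate a camera-coordinate point into the world coordinate system using
    the camera's roll, pitch and yaw angles (the projective transformation
    mentioned above, sketched here as plain rotations)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about the x-axis
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about the y-axis
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about the z-axis
    return Ry @ Rx @ Rz @ point_cam[:3]
```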
  • the data format of the captured image 3 including the depth data may not be limited to such an example, and may be appropriately selected according to the embodiment.
  • the captured image 3 may be data (for example, a depth map) in which the depth of the subject within the imaging range is two-dimensionally distributed.
  • the captured image 3 may include an RGB image together with the depth data.
  • the captured image 3 may be a moving image or one or a plurality of still images.
  • the control unit 11 may acquire such a captured image 3 in synchronization with the video signal of the camera 2. Then, the control unit 11 may immediately execute processing from steps S102 to S105 described later on the captured image 3 acquired in synchronization with the camera 2.
  • The image analysis apparatus 1 can perform real-time image processing by executing such an operation continuously, and can watch over, in real time, the person being watched over who is present within the shooting range of the camera 2. In the following, an example in which the person being watched over is watched by means of the captured images 3 continuously acquired in this way will be described.
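The per-frame processing can be organised as a simple loop; the helper names below are placeholders standing in for steps S101 to S105 and are not taken from the publication.

```python
def watch_loop(read_frame, extract_foreground, analyze_behavior, notify):
    """Process each captured image as soon as it is acquired so that the person
    within the camera's shooting range is watched over in real time."""
    while True:
        frame = read_frame()                      # step S101: acquire captured image 3
        foreground = extract_foreground(frame)    # step S102: background subtraction
        behavior = analyze_behavior(foreground)   # step S103: behavior analysis
        if behavior == "slip_down":               # step S104: slipping down detected?
            notify()                              # step S105: notification
```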
  • Step S102: Returning to FIG. 5, in the next step S102, the control unit 11 functions as the foreground extraction unit 112 and extracts the foreground region of the captured image 3 by calculating the difference between the captured image 3 acquired in step S101 and the background image 4 set as the background of the captured image 3.
  • the background image 4 is data used to extract a foreground region in the captured image 3 based on the background difference method.
  • the control unit 11 can create the background image 4 by using the captured image 3 acquired at an arbitrary timing, for example, when watching of the watching target person is started.
  • the method for creating the background image 4 from the captured image 3 may be selected as appropriate according to the embodiment.
  • the control unit 11 may set the captured image 3 for one frame obtained by the camera 2 as the background image 4 as it is.
  • the control unit 11 may create the background image 4 by calculating the average of the captured images 3 for several frames obtained by the camera 2. Thereby, the background image 4 including the depth data indicating the depth of each pixel can be generated.
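A minimal sketch of creating the background image 4 by averaging a few frames, assuming each captured image's depth data is available as a NumPy array (an assumption about the data structure):

```python
import numpy as np

def build_background(depth_frames):
    """Create the background image 4 from several captured images: the
    per-pixel average of the depth data smooths out sensor noise. A single
    frame can also be used as-is, as noted above."""
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in depth_frames])
    return frames.mean(axis=0)   # per-pixel average depth
```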
  • In this step S102, the control unit 11 extracts the foreground region of the captured image 3 acquired in step S101 by calculating the difference between the background image 4 created in this way and the captured image 3 acquired in step S101.
  • the control unit 11 can extract the foreground region in the captured image 3 as illustrated in FIGS. 9 and 10.
  • FIG. 9 illustrates a three-dimensional distribution of each pixel (subject) within the shooting range specified based on the depth data included in the shot image 3.
  • FIG. 10 illustrates a three-dimensional distribution of the foreground region extracted from the captured image 3 among the subjects illustrated in FIG.
  • the position of each pixel of the captured image 3 in the three-dimensional space (real space) can be specified based on the depth data. Therefore, by plotting each pixel in the three-dimensional space according to the position and depth in the captured image 3, a three-dimensional distribution as illustrated in FIG. 9 can be created.
  • the background image 4 also includes depth data, similar to the captured image 3. Therefore, also for the background image 4, the three-dimensional distribution of each pixel as exemplified in FIG. 9 can be specified.
  • The control unit 11 compares the depths of the corresponding pixels of the captured image 3 and the background image 4. Then, the control unit 11 extracts the pixels whose depths differ between the captured image 3 and the background image 4 as the foreground region. Thereby, the control unit 11 can extract the foreground region as illustrated in FIG. 10.
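A sketch of this pixel-wise depth comparison, assuming depth images as NumPy arrays; the tolerance used to decide that two depths differ is an illustrative parameter, since no threshold value is given in this text.

```python
import numpy as np

def extract_foreground(depth_frame: np.ndarray, background_depth: np.ndarray,
                       tolerance: float = 50.0) -> np.ndarray:
    """Background subtraction on depth data: a pixel belongs to the foreground
    region when its depth differs from the background depth by more than
    `tolerance` (same units as the depth data, e.g. millimetres)."""
    return np.abs(depth_frame - background_depth) > tolerance
```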
  • the control unit 11 advances the processing to the next step S103.
  • Step S103: In the next step S103, the control unit 11 functions as the behavior detection unit 113 and analyzes the behavior of the person being watched over based on the position of the foreground region extracted in step S102.
  • As described above, the captured image 3 includes depth data. Therefore, the control unit 11 specifies the position in real space of the foreground region, as illustrated in FIG. 10, by referring to the depth of each pixel included in the foreground region. Then, the control unit 11 analyzes the behavior of the person being watched over based on the position in real space of the foreground region. Thereby, the control unit 11 detects the behavior of the person being watched over in bed.
  • the method of analyzing the behavior of the person being watched over based on the position of the foreground region can be set as appropriate according to the embodiment. That is, the method for detecting the behavior of the person being watched over, the predetermined condition for detecting each behavior, and the behavior to be detected may be appropriately selected according to the embodiment.
  • In the following, as an example of the method for detecting the behavior of the person being watched over, a method for detecting the rising, end sitting position, getting out of bed, and slipping down of the person being watched over based on the positional relationship between the foreground region and the bed will be described.
  • FIG. 11A schematically illustrates a state in which a watching target person gets up on the bed as seen from the bedside side.
  • FIG. 11B schematically illustrates a state where the watching target person gets up on the bed as viewed from the footboard side.
  • In the following description, the direction connecting the headboard and the footboard of the bed is referred to as the “front-rear direction”, the direction connecting the side frames of the bed is referred to as the “left-right direction”, and the height direction of the bed is referred to as the “vertical direction”.
  • the rising detection area DA may be set at a position having a predetermined height HA from the bed surface SU. Then, the control unit 11 may detect rising on the bed of the person being watched over when a foreground area exceeding a predetermined size appears in the detection area DA for rising.
  • the range of the bed surface SU may be set in advance on the system, or may be set as appropriate by the user.
  • the captured image 3 includes depth data. Therefore, the bed surface SU may be appropriately set on the three-dimensional space of the shooting range of the camera 2 illustrated in FIGS.
  • the rising detection area DA may be set in advance on the system or may be set as appropriate by the user. Further, the rising detection area DA may be set as appropriate on the three-dimensional space of the shooting range of the camera 2.
  • the predetermined height HA that defines the range of the detection area DA for rising and the ranges in the front-rear direction, the left-right direction, and the vertical direction of the detection area DA for rising are foreground areas that are extracted when the watching target person gets up on the bed. Is set as appropriate.
  • the threshold value of the size of the foreground area which is a reference for detecting rising, may be appropriately set according to the embodiment.
  • the size of the foreground area may be given by the number of pixels included in the foreground area.
  • Since the depth of the subject appearing in the captured image 3 is acquired with respect to the surface of the subject, the area in real space of the surface portion of the subject corresponding to each pixel of the captured image 3 does not always match between pixels.
  • the control unit 11 may calculate the area of the extracted foreground region in the real space using the depth of each pixel in step S103 in order to exclude the influence of the subject's perspective.
  • The area of the foreground region in real space can be calculated, for example, as follows. That is, the control unit 11 first calculates, on the basis of relational expressions using the depth Ds and the angle of view, the length w in the horizontal direction and/or the length h in the vertical direction in real space of the arbitrary point s (one pixel) illustrated in FIGS. 7 and 8.
  • Then, the control unit 11 calculates the area in real space of one pixel at the depth Ds as the square of w thus calculated, the square of h, or the product of w and h. Further, the control unit 11 calculates the area in real space of the foreground region by summing the areas in real space of the pixels included in the foreground region.
  • Thereby, the size of the foreground region can be obtained while excluding the influence of the distance of the subject. The same applies to the size of the foreground region calculated when detecting actions other than rising.
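The relational expressions for w and h are not reproduced in this text; under a pinhole-camera assumption they reduce to the sketch below, which then sums the per-pixel areas over the foreground region.

```python
import numpy as np

def pixel_area_in_real_space(depth, W, H, Vx, Vy):
    """Approximate real-space area covered by one pixel at the given depth,
    assuming a pinhole camera with angles of view Vx, Vy (radians) and an
    image of W x H pixels."""
    w = 2 * depth * np.tan(Vx / 2) / W   # horizontal extent of one pixel
    h = 2 * depth * np.tan(Vy / 2) / H   # vertical extent of one pixel
    return w * h

def foreground_area(foreground_pixel_depths, W, H, Vx, Vy):
    """Sum the per-pixel areas over the foreground region, which removes the
    influence of the subject's distance from the camera."""
    return sum(pixel_area_in_real_space(d, W, H, Vx, Vy)
               for d in foreground_pixel_depths)
```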
  • Note that the control unit 11 may use the average of the areas over several frames for such a calculation.
  • Further, when the area in real space of a region to be processed deviates greatly from an expected range, the control unit 11 may exclude the corresponding region from the processing target.
  • The control unit 11 may also estimate that the subject appearing in the foreground region of a predetermined size that appears in the rising detection area DA is the person being watched over. Then, when the processes of the behavior analysis are subsequently repeated, the control unit 11 may track the foreground region estimated to be the region in which the person being watched over appears, and may thereby identify, among the foreground regions extracted from each captured image 3, the foreground region related to the person being watched over.
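One simple way to keep tracking the region estimated to show the person being watched over is a nearest-centroid match between frames; this rule is an illustrative choice, not one prescribed by the publication.

```python
import numpy as np

def track_target_region(previous_centroid, candidate_regions):
    """Among the foreground regions of the current frame, pick the one whose
    centroid is closest to the region tracked as the watching target in the
    previous frame. `candidate_regions` is a list of (centroid, region) pairs."""
    if not candidate_regions:
        return None   # the target was lost in this frame
    return min(candidate_regions,
               key=lambda cr: np.linalg.norm(np.asarray(cr[0]) -
                                             np.asarray(previous_centroid)))
```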
  • FIG. 12A schematically illustrates a state in which the person being watched is viewing the end sitting position on the bed from the bedside side.
  • FIG. 12B schematically illustrates a state in which a watching target person is performing an end sitting position on the bed as viewed from the footboard side.
  • the end sitting position detection area DB may be set at a position near the side frame on the bed surface SU.
  • the control unit 11 may detect the end sitting position on the bed of the person being watched over when a foreground area exceeding a predetermined size appears in the end sitting position detection area DB.
  • the control unit 11 may detect the end sitting position on the bed of the person being watched over when the foreground area being tracked enters the end sitting position detection area DB after detecting the rising.
  • the edge sitting position detection area DB may be set in advance on the system, or may be set as appropriate by the user. Further, the end-sitting position detection area DB may be set as appropriate on a three-dimensional space of the imaging range of the camera 2. The position of the edge sitting position detection area DB and the range in each direction are appropriately set so as to include the foreground area extracted when the person to be watched performs the edge sitting position. Furthermore, the threshold value of the size of the foreground region that is a reference for detecting the end sitting position may be appropriately set according to the embodiment. This threshold value may be different from the threshold value when detecting the rising.
  • the end sitting position detection area DB is set above the bed surface SU.
  • the setting range of the end sitting position detection area DB may not be limited to such an example, and the end sitting position detection area DB may include an area below the bed surface SU.
  • the end-sitting position detection areas DB are set on both sides of the bed.
  • the end sitting position detection region DB may be set only on one of the sides.
  • FIG. 13A is a perspective view schematically illustrating a scene where the watching target person gets out of the bed.
  • FIG. 13B schematically illustrates a state in which the watching target person leaves the bed as seen from the footboard side.
  • In the present embodiment, a bed-leaving detection distance HC may be set in order to detect the getting out of bed of the person being watched over based on a foreground region appearing at such a position.
  • That is, the control unit 11 may detect the getting out of bed of the person being watched over when a foreground region exceeding a predetermined size appears at a position separated from the side of the bed surface SU by the detection distance HC or more in the left-right direction.
  • the value of the detection distance HC for getting out of the bed may be set in advance on the system, or may be set as appropriate by the user.
  • The value of the bed-leaving detection distance HC is set as appropriate so that the getting out of bed of the person being watched over can be detected.
  • the distance in the left-right direction between the side of the bed surface SU and the foreground area can be calculated as appropriate.
  • For example, the control unit 11 may calculate the distance in the left-right direction between the side of the bed surface SU and the foreground region by calculating the distance in real space between an arbitrary point included in the foreground region and the side of the bed surface SU.
  • an arbitrary point for calculating the distance can be set as appropriate.
  • the arbitrary point may be the center of gravity of the foreground area, or may be the pixel closest to the side of the bed surface SU in the foreground area.
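A sketch of the bed-leaving check using the detection distance HC; here the centre of gravity of the foreground region is used as the arbitrary point, and the bed side is assumed to lie at a known left-right coordinate (both are illustrative simplifications).

```python
import numpy as np

def detect_bed_leaving(foreground_x_coords, region_area, bed_side_x, hc, min_area):
    """Detect getting out of bed: a foreground region of sufficient size whose
    centre of gravity is at least the detection distance HC away from the side
    of the bed surface SU in the left-right direction."""
    if region_area < min_area:
        return False
    centroid_x = float(np.mean(foreground_x_coords))
    return abs(centroid_x - bed_side_x) >= hc
```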
  • the image analysis apparatus 1 is configured to be able to detect the person being watched from the bed on both sides of the bed.
  • However, the image analysis apparatus 1 may be configured to detect the getting out of bed of the person being watched over on only one side of the bed. That is, the control unit 11 may omit the bed-leaving detection process on either one of the sides of the bed.
  • a detection area for getting out of the bed may be set in place of the detection distance HC for getting out of the bed as in the case of the above-described rising and end-sitting position.
  • the control unit 11 can detect getting out of the bed of the person being watched over in the same manner as the above-described rising and end sitting positions.
  • the threshold value of the size of the foreground area that serves as a reference for detecting bed leaving may be set as appropriate according to the embodiment. This threshold value may be different from each threshold value when detecting the rising and the end sitting position.
  • FIG. 14A schematically illustrates a state in which the person being watched over slides down from the bed as viewed from the bedside side.
  • FIG. 14B schematically illustrates a state in which the watching target person is seen from the footboard side while sliding off the bed.
  • The bedside is the area extending away from the bed from each end (side frame) of the bed in the left-right direction. Therefore, as illustrated in FIGS. 14A and 14B, when the person being watched over slips down from the bed, it is assumed that a foreground region of a predetermined size or larger corresponding to a body part of the person being watched over continuously appears within a predetermined height range from the bed surface SU at the bedside.
  • Therefore, in the present embodiment, the slip-down detection area DD is set within a predetermined height range from the bed surface SU at the bedside. The control unit 11 determines whether or not a foreground region exceeding a predetermined size continuously appears in the slip-down detection area DD for a predetermined time or longer. When it determines that such a foreground region has continuously appeared for the predetermined time or longer, the control unit 11 detects the slipping down of the person being watched over from the bed.
  • the slip-down detection area DD may be set in advance on the system, or may be set as appropriate by the user. Further, the slip-down detection area DD may be set as appropriate on the three-dimensional space of the shooting range of the camera 2. The position and the range in each direction of the slip-down detection area DD are appropriately set so as to include a foreground area that is extracted when the watching target person slips off the bed.
  • the length HD3 in the vertical direction from the bed surface SU of the slip-down detection area DD may be set to 300 mm.
  • the range in the vertical direction of the slip-down detection area DD may be set based on the height of the bed surface SU as described above, or may be set without using the height of the bed surface SU as a reference.
  • the length HD2 in the left-right direction of the slip-down detection area DD may be set to 450 mm.
  • the length HD4 in the front-rear direction of the slip-down detection area DD may be set corresponding to the length of each side of the bed surface SU.
  • If the slip-down detection area DD is set so as to be in contact with or close to the side of the bed surface SU, a foreground region corresponding to the futon appears in the slip-down detection area DD when the futon protrudes from the bed.
  • the slip-down detection area DD is set at a position away from the bed at least by the thickness of the futon.
  • the slip-down detection area DD is set to be separated from the bed by a distance HD1 in the left-right direction. This distance HD1 may be appropriately set according to the embodiment.
  • the distance HD1 is set to 50 mm, for example. As a result, the accuracy of slip-off detection can be increased.
  • the threshold value of the size of the foreground area that is a reference for detecting slipping may be set as appropriate according to the embodiment. This threshold value may be different from each threshold value when detecting the rising, the end sitting position, and getting out of bed.
  • the slip-down detection areas DD are set on both sides of the bed.
  • the sliding detection area DD may be set only on one of the sides.
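The dimensions above can be collected into a small configuration object; HD1 to HD3 use the example values given in the text, while the HD4 value below is only a placeholder since it depends on the length of the bed.

```python
from dataclasses import dataclass

@dataclass
class SlipDownArea:
    """Dimensions of the slip-down detection area DD, in millimetres."""
    hd1_offset_from_bed: float  # left-right gap from the bed side (at least the futon thickness)
    hd2_width: float            # left-right length of DD
    hd3_height: float           # vertical length measured from the bed surface SU
    hd4_length: float           # front-rear length, matching the side of the bed surface SU

# Example values from the description above; hd4_length is a placeholder.
dd = SlipDownArea(hd1_offset_from_bed=50.0, hd2_width=450.0,
                  hd3_height=300.0, hd4_length=2000.0)
```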
  • The control unit 11 detects each behavior of the person being watched over by analyzing the behavior of the person being watched over based on the position of the foreground region as described above. That is, when the control unit 11 determines that the condition set for a behavior is satisfied, it detects that behavior; when it determines that none of the conditions is satisfied, it may determine that the person being watched over is in a state in which no behavior is detected (for example, a state of sleeping on the bed).
  • The method of detecting the behavior of the person being watched over is not limited to the above method and may be selected as appropriate according to the embodiment.
  • For example, the control unit 11 may calculate the average position of the foreground region by averaging the position and depth, within the captured image 3, of each pixel included in the foreground region. The control unit 11 may then detect each behavior of the person being watched over by determining whether or not this average position is included in the detection region set in real space as the condition for detecting that behavior.
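  • As a sketch under assumptions (a boolean foreground mask, a per-pixel depth map, and an axis-aligned detection region are illustrative simplifications not specified in the text), the average-position approach could look like the following.

```python
import numpy as np

def average_foreground_position(foreground: np.ndarray, depth: np.ndarray):
    """Average pixel position (u, v) and depth of the foreground region, or None
    if the foreground mask is empty."""
    vs, us = np.nonzero(foreground)          # pixel coordinates of foreground pixels
    if vs.size == 0:
        return None
    return float(us.mean()), float(vs.mean()), float(depth[vs, us].mean())

def position_in_region(pos, region) -> bool:
    """region = ((u_min, u_max), (v_min, v_max), (d_min, d_max)); True when the
    average position falls inside the detection region set for a behavior."""
    return pos is not None and all(lo <= p <= hi for p, (lo, hi) in zip(pos, region))
```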
  • Further, the control unit 11 may identify the body part appearing in the foreground region based on the shape of the foreground region.
  • The foreground region indicates a change from the background image. Therefore, the body part appearing in the foreground region corresponds to the moving part of the person being watched over.
  • The control unit 11 may detect the behavior of the person being watched over based on the positional relationship between the identified body part (moving part) and the bed surface SU. Similarly, the control unit 11 may detect each behavior by determining whether or not the body part appearing in the foreground region within the detection area of that behavior is a predetermined body part.
  • Step S104: The control unit 11 determines whether or not it was detected in step S103 that the person being watched over has slipped off the bed. When the slip-off was detected in step S103, the control unit 11 proceeds to the next step S105. On the other hand, when the slip-off was not detected in step S103, the control unit 11 omits the process of step S105 and ends the processing according to this operation example.
  • Step S105: In the next step S105, the control unit 11 functions as the notification unit 115 and, when it was detected in step S103 that the person being watched over has slipped off the bed, issues a notification indicating that the person being watched over is in a state of having slipped off the bed. The processing according to this operation example is then completed.
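  • A minimal sketch of the per-frame flow of steps S101 to S105 is shown below; the injected callables are hypothetical stand-ins for the acquisition, foreground-extraction, determination, and notification processing described in this document, not the embodiment's actual interfaces.

```python
from typing import Callable

def watch_one_frame(
    capture: Callable[[], object],                  # S101: acquire captured image 3
    extract_foreground: Callable[[object], object], # S102: background subtraction
    detect_slip_down: Callable[[object], bool],     # S103: slip-down determination
    notify: Callable[[str], None],                  # S105: notification unit 115
) -> bool:
    """One pass of the operation example: returns True when a slip-down is detected."""
    image = capture()
    foreground = extract_foreground(image)
    slipped = detect_slip_down(foreground)
    if slipped:                                     # S104: branch on the detection result
        notify("The person being watched over has slipped off the bed.")
    return slipped
```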
  • The control unit 11 may direct this notification to the watcher who watches over the behavior of the person being watched over, or to the person being watched over himself or herself.
  • The watcher is, for example, a nurse, a facility staff member, a caregiver, or the like.
  • When used in a facility such as a hospital, the image analysis apparatus 1 can be connected to equipment such as a nurse call system via the external interface 15.
  • The control unit 11 may issue the slip-off detection notification in cooperation with the nurse call system or the like. That is, the control unit 11 may control the nurse call system via the external interface 15 and place a call through the nurse call system as the notification of slip-off detection. Accordingly, it is possible to appropriately inform a nurse or other watcher that the person being watched over is slipping off the bed.
  • For example, as the notification of slip-off detection, the control unit 11 may display on the touch panel display 13 a screen indicating that the person being watched over has been detected slipping off the bed. Further, for example, the control unit 11 may issue the notification by outputting a predetermined sound from the speaker 14 connected to the image analysis apparatus 1. By installing the touch panel display 13 and the speaker 14 in the watcher's room, the watcher can be appropriately informed that the person being watched over has slipped off the bed.
  • The control unit 11 may also issue the slip-off detection notification using e-mail, a short message service, a push notification, or the like.
  • In this case, the e-mail address, telephone number, and the like of the user terminal serving as the notification destination may be registered in advance in the storage unit 12, and the control unit 11 may issue the slip-off detection notification using this pre-registered contact information.
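  • As an illustrative sketch only, a notification using a pre-registered e-mail address might be issued as follows; the contact data, sender address, and SMTP host are placeholders for whatever is actually registered in the storage unit 12, not values given by the text.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical pre-registered contact information standing in for the storage unit 12.
REGISTERED_CONTACTS = {"email": "watcher@example.com"}

def notify_by_email(body: str, smtp_host: str = "localhost") -> None:
    """Send the slip-off detection notification to the pre-registered address."""
    msg = EmailMessage()
    msg["Subject"] = "Watching notification: slip-down from the bed detected"
    msg["From"] = "image-analysis@example.com"   # placeholder sender
    msg["To"] = REGISTERED_CONTACTS["email"]
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as server:      # send via a pre-configured mail server
        server.send_message(msg)
```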
  • The control unit 11 may also issue a notification when a behavior other than slipping off the bed is detected, to indicate that the behavior has been detected.
  • In that case, the control unit 11 determines whether a behavior subject to notification was detected in step S103. If it determines that such a behavior was detected, the control unit 11 issues a notification in step S105 indicating that the behavior has been detected.
  • Thus, the image analysis apparatus 1 can notify the watcher and others that a behavior has been detected even when the detected behavior is something other than the person being watched over slipping off the bed.
  • The above step S104 and this step S105 may be omitted according to the embodiment.
  • As described above, the slip-down detection area DD for detecting that the person being watched over has slipped off the bed is set at the bedside. Therefore, if a person other than the person being watched over (hereinafter, a "third party") is present at the bedside, the foreground region corresponding to this third party may be mistaken for the foreground region of the person being watched over, and slipping off the bed may be erroneously detected.
  • Therefore, the image analysis apparatus 1 may be configured to execute the determination process for slip-off detection in step S103 only in the following case. That is, when at least one of a first condition, in which a third party other than the person being watched over is present at the bedside, and a second condition, in which the person being watched over is performing a behavior other than sleeping on the bed, is satisfied, the image analysis apparatus 1 may stop executing the slip-off detection determination process in step S103. On the other hand, when neither the first condition nor the second condition is satisfied, the image analysis apparatus 1 may be configured to execute (start) the slip-off detection determination process in step S103.
  • Next, an example of control of the determination processing mode, which determines whether or not to execute the slip-off detection determination process, will be described with reference to FIGS. 15 and 16.
  • FIG. 15 illustrates state transitions in the determination processing mode according to the present embodiment.
  • The execution mode is a mode in which the determination process for detecting that the person being watched over has slipped off the bed is executed in step S103.
  • The stop mode is a mode in which this determination process is omitted in step S103.
  • Information indicating the current determination processing mode is held, for example, in the RAM.
  • the control unit 11 switches the determination processing mode by updating the information.
  • the control unit 11 sets the determination processing mode to the execution mode at the time when watching is started.
  • Based on the analysis of the foreground region extracted in step S102, the control unit 11 determines whether or not the first condition, in which a third party other than the person being watched over is present at the bedside, and the second condition, in which the person being watched over is performing a behavior other than sleeping on the bed, are satisfied. When it determines that at least one of the first condition and the second condition is satisfied, the control unit 11 switches the determination processing mode from the execution mode to the stop mode. Thereafter, when neither the first condition nor the second condition is satisfied, the control unit 11 switches the determination processing mode from the stop mode back to the execution mode. In this way, the control unit 11 controls the determination processing mode for slip-off detection.
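  • The mode switching described above amounts to a small two-state machine. The following sketch is illustrative only; the class name and boolean inputs are assumptions, with the two booleans standing for the first and second conditions.

```python
EXECUTION, STOP = "execution", "stop"

class SlipDownModeController:
    """Mirrors the state transitions of FIG. 15 between execution and stop modes."""

    def __init__(self) -> None:
        self.mode = EXECUTION          # execution mode at the time watching is started

    def update(self, third_party_at_bedside: bool, awake_behaviour: bool) -> str:
        """third_party_at_bedside: first condition; awake_behaviour: second condition."""
        if third_party_at_bedside or awake_behaviour:
            self.mode = STOP           # at least one condition satisfied -> stop mode
        else:
            self.mode = EXECUTION      # neither condition satisfied -> execution mode
        return self.mode
```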
  • FIG. 16 illustrates a processing procedure relating to control of the determination processing mode according to the present embodiment.
  • The processing procedure relating to control of the determination processing mode for slip-off detection described below is merely an example, and each process may be modified to the extent possible. Further, in the processing procedure described below, steps may be omitted, replaced, or added as appropriate according to the embodiment.
  • Step S201: The control unit 11 functions as the behavior detection unit 113 and, based on the analysis of the foreground region extracted in step S102, determines whether or not the first condition, in which a third party other than the person being watched over is present at the bedside, and the second condition, in which the person being watched over is performing a behavior other than sleeping on the bed, are satisfied.
  • Step S202: When the control unit 11 determines in step S201 that at least one of the first condition and the second condition is satisfied, the process proceeds to step S203. On the other hand, when the control unit 11 determines in step S201 that neither the first condition nor the second condition is satisfied, the process proceeds to step S204. Whether the first condition and the second condition are satisfied can be determined as follows.
  • FIG. 17A is a perspective view schematically illustrating a scene in which a third party other than the watching target person exists on the bedside.
  • FIG. 17B schematically illustrates a state in which a third party other than the watching target person is present on the bedside as viewed from the footboard side.
  • In the present embodiment, a third-party detection range HE may be set in order to detect the presence of a third party based on a foreground region appearing at the bedside.
  • When a foreground region exceeding a predetermined size appears in the third-party detection range HE, which extends rightward from the right edge of the bed surface SU or leftward from the left edge of the bed surface SU, it may be determined that the first condition is satisfied. On the other hand, when no foreground region exceeding the predetermined size appears in the third-party detection range HE, it may be determined that the first condition is not satisfied.
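  • A sketch of this first-condition check is given below, assuming bed-aligned coordinates in millimetres measured outward from each side edge of the bed surface SU; the HE width and the point-count threshold are illustrative values not taken from the text.

```python
import numpy as np

HE_WIDTH_MM = 1000.0   # assumed outward extent of the third-party detection range HE
SIZE_THRESHOLD = 800   # assumed minimum number of foreground points treated as a person

def first_condition_satisfied(points_right_mm: np.ndarray,
                              points_left_mm: np.ndarray) -> bool:
    """points_*_mm: foreground points expressed relative to the right/left edge of the
    bed surface SU, with positive x pointing away from the bed."""
    def count_in_range(points: np.ndarray) -> int:
        if points.size == 0:
            return 0
        x = points[:, 0]
        return int(((x > 0.0) & (x <= HE_WIDTH_MM)).sum())

    return (count_in_range(points_right_mm) >= SIZE_THRESHOLD or
            count_in_range(points_left_mm) >= SIZE_THRESHOLD)
```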
  • the third party detection range HE may be set in advance on the system or may be set as appropriate by the user. Moreover, the length of each direction of the third-party detection range HE may be set as appropriate on the three-dimensional space of the shooting range of the camera 2. This third party detection range HE is appropriately set so that the presence of a third party can be detected at the bedside.
  • the image analysis apparatus 1 is configured to be able to detect the presence of a third party on both sides of the bed.
  • However, the image analysis apparatus 1 may be configured to detect the presence of a third party on only one side of the bed. That is, the third-party detection range HE may be set on only one side of the bed, and the control unit 11 may omit the determination of the first condition for the other side.
  • Note that the control unit 11 may mistake the person being watched over, who has left the bed and is returning to it, for a third party. To deal with such misrecognition, the control unit 11 may identify the person being watched over as follows.
  • When a person who enters the third-party detection range HE is the person being watched over, it is assumed that this person subsequently moves onto the bed. On the other hand, when the person who enters the third-party detection range HE is a third party other than the person being watched over, it is assumed that this person does not move onto the bed.
  • Therefore, the control unit 11 may provisionally treat a foreground region appearing within the third-party detection range HE as a foreground region corresponding to a third party and track that foreground region in subsequent processing. When the tracked foreground region enters the rising detection area DA, the control unit 11 may then determine that the foreground region corresponds not to a third party but to the person being watched over. Accordingly, the control unit 11 can be prevented from confusing the foreground region corresponding to the person being watched over with the foreground region corresponding to a third party.
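  • The provisional labelling and re-labelling could be sketched as follows; the class and the membership flags for the third-party detection range HE and the rising detection area DA are hypothetical simplifications of the tracking described above.

```python
class TrackedRegion:
    """A foreground region tracked across frames and labelled provisionally."""

    def __init__(self, region_id: int) -> None:
        self.region_id = region_id
        self.label = "unknown"

    def update(self, in_he: bool, in_da: bool) -> str:
        """in_he / in_da: whether the region currently lies inside HE / DA."""
        if self.label == "unknown" and in_he:
            self.label = "third_party"       # provisional estimate while inside HE
        if in_da:
            self.label = "watched_person"    # entered the rising detection area DA
        return self.label
```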
  • the control unit 11 determines that the second condition is satisfied when it can be estimated that the watching target person is performing an action other than sleeping. On the other hand, the control unit 11 determines that the second condition is not satisfied when it can be estimated that the watching target person is sleeping.
  • For example, when rising, sitting on the edge of the bed, or leaving the bed is detected in the same manner as in step S103, the control unit 11 may estimate that the person being watched over is performing a behavior other than sleeping and determine that the second condition is satisfied. On the other hand, when none of rising, sitting on the edge of the bed, or leaving the bed can be detected, the control unit 11 may estimate that the person being watched over is asleep and determine that the second condition is not satisfied.
  • Further, the control unit 11 may estimate that the person being watched over is sleeping in the following case. That is, when the person being watched over is sleeping on the bed, it is assumed that the foreground region corresponding to the person being watched over appears above the bed surface SU and below the rising detection area DA.
  • Therefore, the control unit 11 may estimate that the person being watched over is sleeping on the bed in that case. For example, when no foreground region exceeding a predetermined size appears in the rising detection area DA and a foreground region exceeding a predetermined size appears between the rising detection area DA and the bed surface SU, the control unit 11 may estimate that the person being watched over is sleeping on the bed. Based on these estimations, the control unit 11 may determine that the second condition is not satisfied.
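  • A sketch of this second-condition estimate, using illustrative point counts for the rising detection area DA and for the space between the bed surface SU and DA, might look like the following; the threshold value is an assumption.

```python
def second_condition_satisfied(points_in_da: int,
                               points_above_bed_below_da: int,
                               size_threshold: int = 800) -> bool:
    """Return True when the person is estimated to be doing something other than
    sleeping (second condition); sleeping is estimated when the foreground sits
    between the bed surface SU and the rising detection area DA but not inside DA."""
    sleeping = (points_in_da < size_threshold and
                points_above_bed_below_da >= size_threshold)
    return not sleeping
```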
  • Step S203: The control unit 11 sets the determination processing mode for slip-off detection to the stop mode and ends the processing procedure according to this operation example. Specifically, if the determination processing mode is already set to the stop mode, the control unit 11 keeps it in the stop mode; if it is set to the execution mode, the control unit 11 switches it from the execution mode to the stop mode. As a result, in step S103 executed thereafter, the control unit 11 omits the slip-off detection determination process.
  • Step S204: The control unit 11 sets the determination processing mode for slip-off detection to the execution mode and proceeds to the next step S205. Specifically, if the determination processing mode is set to the stop mode, the control unit 11 switches it from the stop mode to the execution mode; if it is already set to the execution mode, the control unit 11 keeps it in the execution mode. As a result, in step S103 executed thereafter, the control unit 11 executes (resumes) the slip-off detection determination process.
  • Step S205: The control unit 11 functions as the background update unit 114, updates the background image 4 with the captured image 3 acquired in the most recent step S101, and ends the processing procedure according to this operation example. Specifically, when the control unit 11 determines in steps S201 and S202 that neither the first condition nor the second condition is satisfied, it starts executing the determination process for detecting that the person being watched over has slipped off the bed in step S204. At that point, the control unit 11 sets the captured image 3 acquired in the most recent step S101 as the background image 4, thereby updating the background image 4 used in step S102 executed thereafter. This step S205 may be executed before step S204. Further, when the background image 4 is not to be updated, step S205 may be omitted.
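  • A sketch of the background update of step S205 and its effect on subsequent foreground extraction is shown below; the simple absolute-difference extraction and the threshold value are illustrative assumptions, not the embodiment's actual background subtraction.

```python
import numpy as np

class BackgroundUpdater:
    """Holds the background image 4 and extracts the foreground by differencing."""

    def __init__(self, initial_background: np.ndarray) -> None:
        self.background = initial_background.astype(np.float32)

    def restart_determination(self, latest_capture: np.ndarray) -> None:
        # Step S205: adopt the most recently captured image as the new background,
        # so objects newly left at the bedside stop appearing as foreground.
        self.background = latest_capture.astype(np.float32)

    def extract_foreground(self, capture: np.ndarray, diff_threshold: float = 30.0):
        # Step S102 (simplified): pixels differing from the background beyond a threshold.
        return np.abs(capture.astype(np.float32) - self.background) > diff_threshold
```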
  • Each process relating to control of the determination processing mode for slip-off detection may be executed at an arbitrary timing; for example, it may be performed after the process of step S102 and before the slip-off detection determination process is performed in step S103.
  • The image analysis apparatus 1 need not be configured to stop execution of the slip-off detection determination process in the above case, and may instead be configured to always perform the slip-off detection determination process in step S103.
  • the image analysis apparatus 1 extracts the foreground region from the captured image 3 captured by the camera 2 based on the background difference method in steps S101 and S102.
  • the image analysis apparatus 1 detects the behavior of the person being watched over based on the extracted position of the foreground region.
  • In particular, the slip-down detection area DD is set within a predetermined height range from the bed surface SU at the bedside.
  • The control unit 11 determines whether or not a foreground region of at least a predetermined size continues to appear in the slip-down detection area DD for at least a predetermined time, and when it determines that such a foreground region has continuously appeared, it detects that the person being watched over has slipped off the bed.
  • Further, in step S201, the control unit 11 determines whether or not the first condition, in which a third party other than the person being watched over is present at the bedside, and the second condition, in which the person being watched over is performing a behavior other than sleeping on the bed, are satisfied. When it is determined that at least one of the first condition and the second condition is satisfied, the control unit 11 stops execution of the slip-off detection determination process of step S103 through the process of step S203. On the other hand, when it is determined that neither the first condition nor the second condition is satisfied, the control unit 11 starts execution of the slip-off detection determination process of step S103 through the process of step S204.
  • There are cases where the state of the background in the bedside area is changed by the action of a third party or of the person being watched over.
  • For example, a case where a third party other than the person being watched over places a wheelchair, a drip stand, or the like at the bedside corresponds to such a change of the background state in the bedside region.
  • In such a case, a foreground region may continuously appear in the slip-down detection area DD set at the bedside, and slipping off the bed by the person being watched over may be erroneously detected.
  • In contrast, in the present embodiment, in step S205 the control unit 11 updates the background image 4 by setting, as the background image 4, the captured image 3 acquired in the most recent step S101 when it starts executing the determination process for detecting that the person being watched over has slipped off the bed. For this reason, even if the background state in the bedside area changes for the reason described above, the background image 4 is updated with a captured image 3 acquired after the change has occurred, so the foreground region is no longer extracted in the area where the background changed. Therefore, according to the present embodiment, erroneous slip-off detection in the above case can be prevented.
  • the control unit 11 acquires the captured image 3 including depth data.
  • However, the captured image used for analyzing the behavior of the person being watched over is not limited to such an example and need not include depth data. That is, the image analysis apparatus 1 may be configured to analyze the behavior of the person being watched over based on a two-dimensional image.
  • The two-dimensional image may be, for example, a two-dimensional thermographic image or an infrared image captured by an infrared camera, an infrared sensor, or the like.
  • In the above embodiment, when it is detected that the person being watched over has slipped off the bed, the image analysis apparatus 1 issues a notification to that effect.
  • The image analysis apparatus 1 may also be configured to perform an operation for preventing an accident.
  • For example, when the bed used by the person being watched over is an electric bed that can be controlled by a computer, the electric bed may be connected to the image analysis apparatus 1 via the external interface 15.
  • In this case, when it is detected that the person being watched over has slipped off the bed, the control unit 11 may control the electric bed via the external interface 15 so as to lower the height of the electric bed. This makes it possible to prevent the person being watched over from falling from a high position and to prevent accidents caused by falling from the bed.
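  • Purely as an illustrative sketch, lowering an electric bed through an external interface might look like the following; the serial link and the command string are entirely hypothetical, since the text does not specify how the electric bed is controlled.

```python
import serial  # pyserial, assuming a serial link as the external interface 15

def lower_electric_bed(port: str = "/dev/ttyUSB0") -> None:
    """Send a (hypothetical) 'lower to minimum height' command after a slip-down
    from the bed has been detected."""
    with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
        link.write(b"BED:HEIGHT:MIN\n")   # placeholder command, not a real protocol
```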

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nursing (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Invalid Beds And Related Equipment (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a technique for detecting slipping down by a person to be watched over. In order to detect slipping down from a bed by the person to be watched over, the image analysis device according to one aspect of the present invention determines whether a foreground region of a predetermined size or larger appears continuously for at least a predetermined time in a slip-down detection region set within a predetermined height range from a bed surface at the bedside, and slipping down from the bed by the person to be watched over is detected when it is determined that the foreground region of the predetermined size or larger appears for at least the predetermined time in the slip-down detection region.
PCT/JP2016/053347 2015-05-20 2016-02-04 Dispositif d'analyse d'image, procédé d'analyse d'image, et programme d'analyse d'image WO2016185738A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017518775A JP6607253B2 (ja) 2015-05-20 2016-02-04 画像解析装置、画像解析方法、及び、画像解析プログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015102465 2015-05-20
JP2015-102465 2015-05-20

Publications (1)

Publication Number Publication Date
WO2016185738A1 true WO2016185738A1 (fr) 2016-11-24

Family

ID=57319748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/053347 WO2016185738A1 (fr) 2015-05-20 2016-02-04 Dispositif d'analyse d'image, procédé d'analyse d'image, et programme d'analyse d'image

Country Status (2)

Country Link
JP (1) JP6607253B2 (fr)
WO (1) WO2016185738A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019220735A (ja) * 2018-06-15 2019-12-26 エイアイビューライフ株式会社 情報処理装置
JPWO2020003952A1 (ja) * 2018-06-26 2021-08-02 コニカミノルタ株式会社 コンピューターで実行されるプログラム、情報処理装置、および、コンピューターで実行される方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008206868A (ja) * 2007-02-27 2008-09-11 Paramount Bed Co Ltd 寝台装置
JP2011086286A (ja) * 2009-09-17 2011-04-28 Shimizu Corp ベッド上及び室内の見守りシステム
JP2012005745A (ja) * 2010-06-28 2012-01-12 Tateyama System Laboratory Ltd 寝床看護システム
JP2012071004A (ja) * 2010-09-29 2012-04-12 Omron Healthcare Co Ltd 安全看護システム、および、安全看護システムの制御方法
JP2012094140A (ja) * 2010-10-28 2012-05-17 General Electric Co <Ge> 人及び物体の位置を監視するシステム及び方法
US20140092247A1 (en) * 2012-09-28 2014-04-03 Careview Communications, Inc. System and method for monitoring a fall state of a patient while minimizing false alarms
JP2014174627A (ja) * 2013-03-06 2014-09-22 Nk Works Co Ltd 情報処理装置、情報処理方法、及び、プログラム
JP2014236896A (ja) * 2013-06-10 2014-12-18 Nkワークス株式会社 情報処理装置、情報処理方法、及び、プログラム

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019220735A (ja) * 2018-06-15 2019-12-26 エイアイビューライフ株式会社 情報処理装置
JP7232497B2 (ja) 2018-06-15 2023-03-03 エイアイビューライフ株式会社 情報処理装置
JPWO2020003952A1 (ja) * 2018-06-26 2021-08-02 コニカミノルタ株式会社 コンピューターで実行されるプログラム、情報処理装置、および、コンピューターで実行される方法
JP7327397B2 (ja) 2018-06-26 2023-08-16 コニカミノルタ株式会社 コンピューターで実行されるプログラム、情報処理システム、および、コンピューターで実行される方法

Also Published As

Publication number Publication date
JP6607253B2 (ja) 2019-11-20
JPWO2016185738A1 (ja) 2018-03-08

Similar Documents

Publication Publication Date Title
JP6115335B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6500785B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6167563B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6780641B2 (ja) 画像解析装置、画像解析方法、及び、画像解析プログラム
JP6432592B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6171415B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6504156B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
WO2016151966A1 (fr) Dispositif de surveillance de nourrisson, procédé de surveillance de nourrisson et programme de surveillance de nourrisson
JP6489117B2 (ja) 情報処理装置、情報処理方法、及び、プログラム
JP6638723B2 (ja) 画像解析装置、画像解析方法、及び、画像解析プログラム
JP2011198161A (ja) 対象物認識システム及び該システムを利用する監視システム、見守りシステム
JP2011209794A (ja) 対象物認識システム及び該システムを利用する監視システム、見守りシステム
JP6607253B2 (ja) 画像解析装置、画像解析方法、及び、画像解析プログラム
JP2019008515A (ja) 見守り支援システム及びその制御方法
JP6737262B2 (ja) 異常状態検知装置、異常状態検知方法、及び、異常状態検知プログラム
JP6645503B2 (ja) 画像解析装置、画像解析方法、及び、画像解析プログラム
JP6606912B2 (ja) 浴室異常検知装置、浴室異常検知方法、及び浴室異常検知プログラム
JP6565468B2 (ja) 呼吸検知装置、呼吸検知方法、及び呼吸検知プログラム
JP6780639B2 (ja) 画像解析装置、画像解析方法、及び、画像解析プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16796133

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017518775

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16796133

Country of ref document: EP

Kind code of ref document: A1