SG188111A1 - Condition detection methods and condition detection devices - Google Patents

Condition detection methods and condition detection devices

Info

Publication number
SG188111A1
Authority
SG
Singapore
Application number
SG2013008602A
Inventor
Xinguo Yu
Kittipanya-Ngam Panachit
How Lung Eng
Liyuan Li
Original Assignee
Agency Science Tech & Res
Application filed by Agency Science Tech & Res
Priority to SG2013008602A
Publication of SG188111A1

Classifications

    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/1117 Fall detection
    • G06T 1/00 General purpose image data processing
    • G08B 21/043 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons, based on behaviour analysis detecting an emergency event, e.g. a fall
    • G08B 21/0476 Cameras to detect unsafe condition, e.g. video cameras
    • A61B 5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems


Abstract

A condition detection method may include: acquiring an image including a person; detecting a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; determining a first geometrical shape that fits the first region according to a pre-determined first matching criterion; determining a second geometrical shape that fits the second region according to a pre-determined second matching criterion; and determining a condition of the person on the image based on the first geometrical shape and the second geometrical shape.

FIG. 2

Description

CONDITION DETECTION METHODS AND CONDITION DETECTION DEVICES
Technical Field
[0001] Embodiments relate to condition detection methods and condition detection devices.
Background
[0002] Accidental falls among elderly people are one of the major health issues. It is estimated that over one third of elderly people aged 65 and older fall each year. Fall detection may be very important because immediate treatment may be critical in saving lives and reducing the injuries of fallers.
[0003] Many research efforts have been made in fall detection due to the big potential market value of fall detection technology. A series of technologies have already been developed in recent years. According to what sensors are used and how they are used, they can be divided into three approaches: wearable sensor, ambience sensor, and vision-based. Generally speaking, a camera is one kind of sensor; however, it may be differentiated between a camera and other sensors. The wearable sensor approach may be to use sensors that are worn or held by users to detect the motion of the body of the wearer and use classifiers to identify suspicious events including falls [3, 8, 13, 17]. However, it may not discern whether the detected suspicious event is benign or harmful. Using wearable sensors to detect falls may be inaccurate because the assumption that the devices and the person keep a certain spatial relative relation may be frequently broken. Another big disadvantage of the wearable sensor is that it may be intrusive to users. The general comment from practicing doctors is that most patients have little will to wear a device for fall detection because they feel well before a fall occurrence. However, the advantage of the wearable sensors, except sensor-armed garments, may be that they are cheap. The ambience sensor approach may be to use many ambience sensors which collect data related to a person when the person is close to them [1]. This approach may be good for some scenarios; e.g., some sensors may be installed on a bed, a chair, and on a wall to detect falls in confined areas. The advantages of this approach may be low cost and non-intrusion; however, it may also suffer from inaccuracy and limited coverage. These sensors may not find out who is in the monitored space. Another shared demerit of sensor approaches may be that they may not be visually verified and there may be no video record for post check and analysis. Cameras are increasingly included in in-home assistive systems because they may have multiple advantages over sensor approaches and the price of cameras decreases rapidly. First, a camera-based approach may be able to detect multiple events simultaneously. Second, cameras may be less intrusive because they are installed on the building. Finally, the recorded video may be used for remote and post verification and analysis. There have been many findings and algorithms in camera-based fall detection, but there are still challenges to be overcome.
[0004] There are scores of camera-based fall detection algorithms in the literature. Generally speaking, each algorithm uses some of the characteristics of falls. According to the principles used relating to the characteristics of falls, they can be divided into four categories: inactivity detection, (body) shape change analysis, 3D head motion analysis, and temporal contrast vision.
[0005] In inactivity detection algorithms, the principle that a fall will end with an inactivity period on the floor may be used. Nait-Charif and McKenna [10] use an omni-camera in their system. The algorithm tracks the person from overhead to obtain the motion traces of the person. Then it classifies the activities based on the motion traces and context information. Inactivity may be one of the classes, and an inactivity will be said to be a fall if it occurs in a certain context. Jansen and Deklerck [7] use a stereo camera for fall detection. They use the stereo camera to acquire a depth image (called a 3D image in their paper). Then they identify the body area and find the body's orientation. Finally, they use the orientation change of the body to detect inactivity; a fall is detected if inactivity occurs in a certain context.
[0006] In shape change analysis algorithms, the principle that the shape of a falling person will change from standing to lying may be used. B. U. Töreyin et al. [16] presented an HMM-based fall detection algorithm. In this paper, an HMM uses video features to distinguish falls from walking. The features are wavelet coefficients of the ratio of height to width of the bounding box of the body shape. Another HMM uses audio features to distinguish falling sounds from talking. D. Anderson et al. [2] use an HMM-based algorithm to detect falls. The HMMs use multiple features extracted from the silhouette: height of the bounding box, magnitude of the motion vector, determinant of the covariance matrix, and ratio of width to height of the bounding box of the person. The HMMs are trained to distinguish walking, kneeling, getting up, and falling. Thome and Miguet [14] use an HHMM-based algorithm to detect falls. The single feature of the HHMM is the orientation of the blob of the body. The state level of the HHMM is the postures of the body. The other two levels of the HHMM represent the behavior pattern and the global motion pattern, respectively. S. G. Miaou et al. [9] use a rule-based algorithm to detect falls. The rules infer the fall occurrence based on the ratio of width to height of the bounding box of the body in the image. Other points are that it uses the omni-camera and it also uses the context information in deciding falls. R. Cucchiara et al. [4] use the 3D shape of the body to detect falls. The 3D body shape is obtained by multiple cameras that are calibrated beforehand. Thome et al. [15] fused the fall assessment results of two views to form a multiview fall detection system, which achieves much better performance than a one-view system.
[0007] In 3D head motion analysis algorithms, the principle that vertical motion is faster than horizontal motion in a fall may be used. Rougier et al. [11, 12] develop an approach to detect falls using monocular 3D head tracking. The tracking component first locates the head, next estimates the head pose using particle filters, and then obtains the 3D position of the head. The fall detection component computes the vertical and horizontal velocity of the head and then uses two appropriate thresholds to distinguish falling from walking.
[0008] In the temporal contrast vision algorithm [6], the principle that a fall will form certain patterns of temporal contrast change in a vision sequence may be used. In this algorithm, the camera outputs the address-events of temporal contrast change. Then it uses a classification algorithm based on address-events to identify various activities such as fall, crouch, get-up, and walk.
[0009] In body feature extraction for fall detection, the existing methods [5, 15] use an ellipse to fit the whole body shape. Then the parameters of the fitted ellipse are used as features for fall assessment.
[0010] Nowadays, there are many solutions proposed to detect falls, such as ambience devices, wearable devices, and camera-based solutions [19]. Many techniques of human body detection and human body posture estimation have been proposed [20, 21, 22]. However, none of them seem to be suitable for fall detection with a monocular camera because of their limitations, such as assuming prior knowledge of appearance (pose and color), expecting accurate initialization and calibration, and using information from multiple cameras for 3D reconstruction.
[0011] Many works assume an upright pose of the human as they apply trained classifiers to locate the body and head, such as classifiers trained on Haarlets features and HOG [21]. Obviously, this assumption is not applicable to fall detection because those features are not orientation-invariant.
[0012] In contrast, some simple features are used for detecting falls, as they are simple and well connected to fall incidents, such as the head position, the ratio between height and width of the bounding box, and the angle of the object to the ground. Based on the way of classifying falls, vision-based detectors may be categorized into rule-based and machine learning approaches. Rule-based techniques detect falls by measuring the key features and using assumed rules to classify falls. This approach may be faster and simpler in the process of decision making. Huang et al. [23] suggested measuring the big change of features from the bounding box, including width, height, and the ratio of width and height. A change greater than fixed thresholds may trigger the alarm. Rougier et al. [12] detected the position of the head in three-dimensional space from a single camera using three particle filters. A big velocity of the head height moving downward is considered a fall. Machine learning techniques focus on fall/non-fall classification. These methods focus on classification of posture rather than only fall and non-fall, by training classifiers with visual attributes such as positions of foreground pixels, Fourier and PCA descriptors, and texture inside the foreground. Wang [18] applied ellipse fitting to the foreground object and extracted three key features from the silhouette, including inner-distance shape context (IDSC), Fitted Ellipse (FE), and Projection Histogram (PH). Next, Procrustes shape analysis was applied to model all features and measure the similarity between reference and target postures. Juang and Chang [24] applied a Neural Fuzzy Network to learn and classify postures based on the Discrete Fourier Transform (DFT) descriptor of the X and Y projection histograms of the foreground silhouette. Foroughi et al. [5] exploited a Support Vector Machine (SVM) to classify postures based on three key features: an approximated ellipse covering the foreground object, the DFT of the projection histogram of the foreground object, and the head position (the very top of the estimated ellipse). Thome et al. [15] proposed to use a Hidden Markov Model (HMM) to learn the state of human behavior using the orientation of the object in three-dimensional space. All of the above works [18, 24, 5, 15, 23] obtained the foreground object using background subtraction techniques.
[0013] Document [1] is M. Alwan, P. J. Rajendran, S. Kell, D. Mack, S. Dalal, M. Wolfe, and R. Felder, A smart and passive floor-vibration based fall detector for elderly, ICTTA '06 (2nd Information and Communication Technologies), Vol. 1, pp. 1003-1007, 24-28 April 2006.
[0014] Document [2] is D. Anderson, J. M. Keller, M. Skubic, X. Chen, and Z. He, Recognizing falls from silhouettes, EMBS 2006 (28th Int'l Conf. of the IEEE Eng. in Medicine and Biology Society), pp. 6388-6391, Aug. 2006.
[0015] Document [3] is J. Chen, K. Kwong, D. Chang, J. Luk, and R. Bajcsy, Wearable sensors for reliable fall detection, EMBS, pp. 3551-3554, 2005.
[0016] Document [4] is R. Cucchiara, A. Prati, and R. Vezzani, A multi-camera vision system for fall detection and alarm creation, Expert Systems Journal, vol. 24, no. 5, pp. 334-345, 2007.
[0017] Document [5] is H. Foroughi, A. Rezvanian, and A. Paziraee, Robust fall detection using human shape and multi-class support vector machine, Computer Vision, Graphics and Image Processing, 2008 (ICVGIP '08), Sixth Indian Conference on, pp. 413-420, Dec. 2008.
[0018] Document [6] is Z. Fu, E. Culurciello, P. Lichtsteiner, and T. Delbruck, Fall detection using an address-event temporal contrast vision sensor, IEEE Int'l Symposium on Circuits and Systems 2008 (ISCAS 2008), pp. 424-427, 18-21 May 2008.
[0019] Document [7] is B. Jansen and R. Deklerck, Context aware inactivity recognition for visual fall detection, Pervasive Health Conference and Workshops 2006, pp. 1-4, Nov. 29-Dec. 1, 2006.
[0020] Document [8] is S. Luo and Q. Hu, A dynamic motion pattern analysis approach to fall detection, ISCAS 2004, Vol. 1, pp. 5-8, 1-3 Dec. 2004.
[0021] Document [9] is S. G. Miaou, P. H. Sung, and C. Y. Huang, A customized human fall detection system using omni-camera images and personal information, D2H2 2006 (1st Transdisciplinary Conf. on Distributed Diagnosis and Home Healthcare), pp. 39-42, 2006.
[0022] Document [10] is H. Nait-Charif and S. J. McKenna, Activity summarisation and fall detection in a supportive home environment, ICPR 2004.
[0023] Document [11] is C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, Fall detection from human shape and motion history using video surveillance, 21st Int'l Conf. on Advanced Information Networking and Applications Workshops, Vol. 2, pp. 875-880, 2007.
[0024] Document [12] is C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, Monocular 3D head tracking to detect falls of elderly people, EMBS 2006, pp. 6384-6387, Aug. 2006.
[0025] Document [13] is A. Sixsmith and N. Johnson, A smart sensor to detect the falls of the elderly, IEEE Pervasive Computing, pp. 42-47, No. 2, 2004.
[0026] Document [14] is N. Thome and S. Miguet, A HHMM-based approach for robust fall detection, ICARCV '06 (9th Int'l Conf. on Control, Automation, Robotics and Vision), pp. 1-8, 5-8 Dec. 2006.
[0027] Document [15] is N. Thome, S. Miguet, and S. Ambellouis, A real-time, multiview fall detection system: A LHMM-based approach, IEEE Trans. Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1522-1532, Nov. 2008.
[0028] Document [16] is B. U. Töreyin, Y. Dedeoglu, and A. E. Cetin, HMM based falling person detection using both audio and video, IEEE 14th Signal Processing and Communications Applications, 17-19 April 2006.
[0029] Document [17] is T. Zhang, J. Wang, L. Xu, and P. Liu, Using wearable sensor and NMF algorithm to realize ambulatory fall detection, LNCS, Vol. 4222/2006, pp. 488-491.
[0030] Document [18] is L. Wang, From blob metrics to posture classification to activity profiling, Pattern Recognition, 2006 (ICPR 2006), 18th Int'l Conf. on, vol. 4, pp. 736-739, 2006.
[0031] Document [19] is Xinguo Yu, Approaches and principles of fall detection for elderly and patient, in IEEE HealthCom, pp. 42-47, July 2008.
[0032] Document [20] is M. Van den Bergh, E. Koller-Meier, and L. Van Gool, Fast body posture estimation using volumetric features, in IEEE WMVC, Jan. 2008, pp. 1-8.
[0033] Document [21] is K. Onishi, T. Takiguchi, and Y. Ariki, 3D human posture estimation using the HOG features from monocular image, in IEEE ICPR, Dec. 2008, pp. 1-4.
[0034] Document [22] is MunWai Lee and R. Nevatia, Human pose tracking in monocular sequence using multilevel structured models, in IEEE PAMI, vol. 31, no. 1, pp. 27-38, Jan. 2009.
[0035] Document [23] is Bin Huang, Guohui Tian, and Xiaolei Li, A method for fast fall detection, CICA, pp. 3619-3623, June 2008.
[0036] Document [24] is Chia-Feng Juang and Chia-Ming Chang, Human body posture classification by a neural fuzzy network and home care system application, in IEEE Transactions on SMC, Part A, vol. 37, no. 6, pp. 984-994, Nov. 2007.
Summary
[0037] In various embodiments, a condition detection method may be provided. The condition detection method may include: acquiring a two-dimensional image including a person; computing a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; determining whether the computed three-dimensional position fulfills a pre-determined criterion; and determining a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled.
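For illustration only, the computation in [0037] can be sketched for a calibrated pinhole camera: the pre-determined feature is a detected head pixel, and the pre-determined component is the height, fixed for example to a standing height. The projection matrix P, the pixel coordinates, and the helper name backproject_fixed_height are hypothetical placeholders; the embodiments leave the calibration model open.

```python
import numpy as np

def backproject_fixed_height(P, u, v, height):
    """Recover (X, Y, Z) from pixel (u, v), assuming the vertical component
    Z of the three-dimensional position equals a pre-determined value.

    P is a 3x4 camera projection matrix. The projection equation
    s * [u, v, 1]^T = P @ [X, Y, Z, 1]^T yields, after eliminating the
    scale s, two linear equations in the remaining unknowns X and Y.
    """
    P = np.asarray(P, dtype=float)
    A = np.array([
        P[0, :2] - u * P[2, :2],
        P[1, :2] - v * P[2, :2],
    ])
    rhs = np.array([
        -(P[0, 2] - u * P[2, 2]) * height - (P[0, 3] - u * P[2, 3]),
        -(P[1, 2] - v * P[2, 2]) * height - (P[1, 3] - v * P[2, 3]),
    ])
    X, Y = np.linalg.solve(A, rhs)
    return np.array([X, Y, height])

# Toy usage with a made-up projection matrix and head pixel:
P = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
head_3d = backproject_fixed_height(P, u=0.2, v=0.1, height=1.7)
```

The same routine could be run twice, once with a standing-height assumption (e.g. 1.5m to 2m) and once with a lying-height assumption (e.g. 0m to 0.2m), and the criterion of [0037] applied to whichever hypothesis is more consistent with the observation.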
[0038] In various embodiments, a condition detection method may be provided. The condition detection method may include: acquiring an image including a person; detecting a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; determining a first geometrical shape that fits the first region according to a pre-determined first matching criterion; determining a second geometrical shape that fits the second region according to a pre-determined second matching criterion; and determining a condition of the person on the image based on the first geometrical shape and based on the second geometrical shape.
[0039] In various embodiments, a condition detection method may be provided. The condition detection method may include: acquiring an image including a person; detecting a region of the image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; providing a sampling area template; providing a plurality of sampling areas of the image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; determining, for each of the sampling areas, the area of the region in the sampling area; and determining a condition of the person on the image based on the determined area.
[0040] In various embodiments, a condition detection method may be provided. The condition detection method may include: acquiring a two-dimensional image including a person; computing a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; determining whether the computed three-dimensional position fulfills a pre-determined criterion; detecting a first region of the two-dimensional image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the two-dimensional image including the person to the area of the two-dimensional image not including the person; removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; determining a first geometrical shape that fits the first region according to a pre-determined first matching criterion; determining a second geometrical shape that fits the second region according to a pre-determined second matching criterion; providing a sampling area template; providing a plurality of sampling areas of the two-dimensional image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; determining, for each of the sampling areas, the area of the first region in the sampling area; and determining a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled, based on the first geometrical shape, based on the second geometrical shape, and based on the determined area.
[0041] In various embodiments, a condition detection device may be provided. The condition detection device may include: a two-dimensional image acquirer configured to acquire a two-dimensional image including a person; a computing circuit configured to compute a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; a criterion determiner configured to determine whether the computed three-dimensional position fulfills a pre-determined criterion; and a condition determiner configured to determine a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled.
[0042] In various embodiments, a condition detection device may be provided. The condition detection device may include: an image acquirer configured to acquire an image including a person; a detector configured to detect a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; a remover configured to remove from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; a first geometrical shape determiner configured to determine a first geometrical shape that fits the first region according to a pre-determined first matching criterion; a second geometrical shape determiner configured to determine a second geometrical shape that fits the second region according to a pre-determined second matching criterion; and a condition determiner configured to determine a condition of the person on the image based on the first geometrical shape and based on the second geometrical shape.
[0043] In various embodiments, a condition detection device may be provided. The condition detection device may include: an image acquirer configured to acquire an image including a person; a region detector configured to detect a region of the image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; a sampling area template provider configured to provide a sampling area template; a sampling areas provider configured to provide a plurality of sampling areas of the image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; an area determiner configured to determine, for each of the sampling areas, the area of the region in the sampling area; and a condition determiner configured to determine a condition of the person on the image based on the determined area.
[0044] In various embodiments, a condition detection device may be provided. The condition detection device may include: a two-dimensional image acquirer configured to acquire a two-dimensional image including a person; a computing circuit configured to compute a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; a criterion determiner configured to determine whether the computed three-dimensional position fulfills a pre-determined criterion; a first region detector configured to detect a first region of the two-dimensional image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the two-dimensional image including the person to the area of the two-dimensional image not including the person; a remover configured to remove from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; a first geometrical shape determiner configured to determine a first geometrical shape that fits the first region according to a pre-determined first matching criterion; a second geometrical shape determiner configured to determine a second geometrical shape that fits the second region according to a pre-determined second matching criterion; a sampling area template provider configured to provide a sampling area template; a sampling areas provider configured to provide a plurality of sampling areas of the two-dimensional image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; an area determiner configured to determine, for each of the sampling areas, the area of the first region in the sampling area; and a condition determiner configured to determine a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled, based on the first geometrical shape, based on the second geometrical shape, and based on the determined area.
Brief Description of the Drawings
[0045] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of various embodiments. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:
FIG. 1 shows a flow diagram illustrating a condition detection method in accordance with an embodiment;
FIG. 2 shows a flow diagram illustrating a condition detection method in accordance with an embodiment;
FIG. 3 shows a flow diagram illustrating a condition detection method in accordance with an embodiment;
FIG. 4 shows a flow diagram illustrating a condition detection method in accordance with an embodiment;
FIG. 5 shows a condition detection device in accordance with an embodiment;
FIG. 6 shows a condition detection device in accordance with an embodiment;
FIG. 7 shows a condition detection device in accordance with an embodiment;
FIG. 8 shows a condition detection device in accordance with an embodiment;
FIG. 9 shows an illustration of sliding windows for condition detection in accordance with an embodiment;
FIG. 10 shows an illustration of a position detection method in accordance with an embodiment;
FIG. 11 shows examples of results of position detection methods in accordance with an embodiment;
FIG. 12 shows a block diagram of a condition detection system in accordance with an embodiment;
FIG. 13 shows a block diagram of a condition detection system in accordance with an embodiment;
FIG. 14 shows a block diagram illustrating a method for creating a lookup table in accordance with an embodiment;
FIG. 15 shows an illustration of a method of acquiring corresponding pairs for camera calibration in accordance with an embodiment;
FIG. 16 shows a flowchart of a condition detection method in accordance with an embodiment;
FIG. 17 shows a flow diagram illustrating a body shape feature extraction method in accordance with an embodiment;
FIG. 18 shows a flowchart of a condition detection method in accordance with an embodiment;
FIG. 19 shows an illustration of a use case in accordance with an embodiment;
FIG. 20 shows an illustration of a use case in accordance with an embodiment;
FIG. 21 shows an illustration of obtaining body trunk, head top and foot bottom points in accordance with an embodiment;
FIG. 22 shows examples of results of position detection methods in accordance with an embodiment;
FIGS. 23A and 23B show examples of results of position detection methods in accordance with an embodiment;
FIGS. 24A and 24B show an example of a normalized directional distribution histogram in accordance with an embodiment;
FIGS. 25A and 25B show an example of a normalized directional distribution histogram in accordance with an embodiment;
FIGS. 26A and 26B show sampling areas in accordance with an embodiment;
FIG. 27 shows a framework of a condition detection device in accordance with an embodiment;
FIG. 28 shows a diagram in accordance with an embodiment;
FIGS. 29A and 29B show examples of results of position detection methods in accordance with an embodiment; and
FIG. 30 shows various postures in accordance with an embodiment.

Description
[0046] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
[0047] The condition determination device may include a memory which is for example used in the processing carried out by the condition determination device. A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory), or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), an EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).
[0048] In an embodiment, a "circuit" may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, in an embodiment, a "circuit" may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor, e.g. a microprocessor (e.g. a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). A "circuit" may also be a processor executing software, e.g. any kind of computer program, e.g. a computer program using a virtual machine code such as e.g. Java. Any other kind of implementation of the respective functions which will be described in more detail below may also be understood as a "circuit" in accordance with an alternative embodiment.
[0049] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
[0050] FIG. 1 shows a flow diagram 100 illustrating a condition detection method in accordance with an embodiment. In 102, a two-dimensional image including (or showing) a person may be acquired. In 104, a three-dimensional position of a pre-determined feature of the person may be computed from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value. In 106, it may be determined whether the computed three-dimensional position fulfills a pre-determined criterion. In 108, a condition of the person on the two-dimensional image may be determined based on whether the pre-determined criterion is fulfilled.
[0051] According to various embodiments, the two-dimensional image may include or may be at least one of a digital color image and a digital black-and-white image.
[0052] According to various embodiments, the pre-determined feature of the person may include or may be at least one of the position of the head of the person, the position of at least one foot of the person, and the position of at least one hand of the person.
[0053] According to various embodiments, the pre-determined value may include or may be at least one of the height of the person when standing and the height of the person when lying on the floor. For example, the pre-determined value may be in the range of 1.5m to 2m, or the pre-determined value may be in the range of 0m to 0.2m.
[0054] According to various embodiments, computing the three-dimensional position of the pre-determined feature of the person may include solving an optimization problem.
[0055] According to various embodiments, solving the optimization problem may include minimizing the difference between a value of a pixel of the two-dimensional image and a projected three-dimensional position.
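Read as a least-squares problem, the minimization in [0055] can be sketched as a search over the free components of the three-dimensional position, with the pre-determined component held fixed, so that the projected position lands on the observed pixel. The pinhole projection and the use of scipy.optimize.least_squares below are illustrative assumptions, not a prescribed formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, point_3d):
    # Pinhole projection of a 3-D point with a 3x4 matrix P.
    p = P @ np.append(point_3d, 1.0)
    return p[:2] / p[2]

def solve_position(P, pixel, fixed_height, xy_init=(0.0, 0.0)):
    """Find (X, Y) such that projecting (X, Y, fixed_height) minimizes the
    difference to the observed pixel position."""
    def residual(xy):
        return project(P, np.array([xy[0], xy[1], fixed_height])) - np.asarray(pixel)
    result = least_squares(residual, x0=np.asarray(xy_init, dtype=float))
    return np.array([result.x[0], result.x[1], fixed_height])
```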
[0056] According to various embodiments, computing the three-dimensional position of the pre-determined feature of the person may include evaluating a calibration model.
[0057] According to various embodiments, the pre-determined criterion may include a criterion based on a motion model.
[0058] According to various embodiments, the condition of the person may include at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[0059] FIG. 2 shows a flow diagram 200 illustrating a condition detection method in accordance with an embodiment. In 202, an image including (or showing) a person may be acquired. In 204, a first region of the image may be detected, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person. In 206, a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold may be removed from the first region, to obtain a second region. In 208, a first geometrical shape that fits the first region according to a pre-determined first matching criterion may be determined. In 210, a second geometrical shape that fits the second region according to a pre-determined second matching criterion may be determined. In 212, a condition of the person on the image may be determined based on the first geometrical shape and based on the second geometrical shape.
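A minimal sketch of the flow of FIG. 2, assuming OpenCV background subtraction supplies the first region (one option mentioned in [0062]), ellipses serve as both geometrical shapes, and a simple orientation rule stands in for the condition decision of 212. The border fraction and the angle threshold are illustrative choices, and the OpenCV angle convention is glossed over here.

```python
import cv2
import numpy as np

def detect_condition(frame, back_sub, border_fraction=0.5):
    # 202/204: acquire the image and detect the first region as a
    # foreground mask.
    fg = back_sub.apply(frame)
    first_region = np.where(fg > 0, 255, 0).astype(np.uint8)

    # 206: remove pixels whose distance to the region border falls below
    # a threshold, here a fraction of the distance-transform maximum.
    dist = cv2.distanceTransform(first_region, cv2.DIST_L2, 3)
    second_region = np.where(dist >= border_fraction * dist.max(),
                             255, 0).astype(np.uint8)

    # 208/210: fit one ellipse per region; a least-squares ellipse fit to
    # the largest contour stands in for the matching criteria.
    def fit_largest_ellipse(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        return cv2.fitEllipse(largest) if len(largest) >= 5 else None

    first_ellipse = fit_largest_ellipse(first_region)
    second_ellipse = fit_largest_ellipse(second_region)
    if first_ellipse is None or second_ellipse is None:
        return None

    # 212: a stand-in decision rule based on the trunk orientation.
    angle = second_ellipse[2]
    condition = "fallen" if abs(angle - 90.0) < 30.0 else "not fallen"
    return first_ellipse, second_ellipse, condition

# back_sub = cv2.createBackgroundSubtractorMOG2() would be created once
# and fed consecutive frames.
```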
[0060] According to various embodiments, a geometrical shape that fits a region may be understood as a geometrical shape with a pre-determined set of parameters, wherein the values of the parameters may be set in a way that every change in the values of the parameters may lead to a geometrical shape that is further "away" from the region than the geometrical shape with the set values of the parameters. According to various embodiments, the measure of distance for determining how far "away" the region is from the geometrical shape may be any of the commonly used measures, for example the amount of the area of the difference area, for example in any norm, for example the one-norm or for example the two-norm. According to various embodiments, the matching criterion may be an optimization criterion, for example a minimization problem, based on any one of the commonly used measures of distance, for example the criterion of minimizing the amount of the area of the difference area, for example in any norm, for example the one-norm or for example the two-norm.
[0061] According to various embodiments, the image may include or may be at least one of a digital color image and a digital black-and-white image.
[0062] According to various embodiments, detecting the first region of the image may include extracting foreground with a background subtracting method.
[0063] According to various embodiments, detecting the first region of the image may include image segmentation.
[0064] According to various embodiments, image segmentation may include a region growing method.
[0065] According to various embodiments, image segmentation may include an edge detection method.
[0066] According to various embodiments, image segmentation may include a level set method.
[0067] According to various embodiments, removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold may include: performing a distance transform of the first region and removing from the first region the sub-region of the first region with a value of the distance transformed image below a pre-determined removal threshold.
[0068] According to various embodiments, a distance transform may be a method for transforming a region of an image into a region, wherein the value of a pixel in the transformed region may indicate the distance to the border of the region. According to various embodiments, for example, when starting with a region, each value of the transformed region may be set to infinity. According to various embodiments, then, each pixel in the transformed region located next to the border of the region may be set to a pre-determined value, for example to 1. According to various embodiments, then, in an iterative way, each pixel is set to the minimum of its current value and the values of its neighboring pixels plus 1. According to various embodiments, this iterative setting of pixel values may be repeated until there is no further change in the pixel values.
[0069] According to various embodiments, removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold may further include determining a maximum value in the distance transformed first region, and the pre-determined removal threshold may be based on the maximum value in the distance transformed first region.
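The iterative distance transform of [0068] and the max-based removal threshold of [0069] can be written out directly as below; in practice a library routine such as cv2.distanceTransform would replace the explicit loops, and the 50% fraction is only one example of a threshold based on the maximum value.

```python
import numpy as np

def distance_transform(region):
    """Distance transform of a boolean mask (True = region), following the
    iterative description of [0068]: all values start at infinity, region
    pixels next to the border are seeded with 1, and every pixel is
    repeatedly relaxed to min(current, neighbour + 1) until stable."""
    h, w = region.shape
    dist = np.full((h, w), np.inf)
    neighbours = ((-1, 0), (1, 0), (0, -1), (0, 1))
    for y in range(h):
        for x in range(w):
            if region[y, x] and any(
                not (0 <= y + dy < h and 0 <= x + dx < w and region[y + dy, x + dx])
                for dy, dx in neighbours
            ):
                dist[y, x] = 1.0  # pixel located next to the border
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if not region[y, x]:
                    continue
                for dy, dx in neighbours:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] + 1.0 < dist[y, x]:
                        dist[y, x] = dist[ny, nx] + 1.0
                        changed = True
    return dist

def trim_region(region, fraction=0.5):
    """Remove pixels whose border distance is below a removal threshold
    derived from the maximum of the distance transform ([0069])."""
    dist = distance_transform(region)
    return region & (dist >= fraction * dist[region].max())
```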
[0070] According to various embodiments, the first geometrical shape may include an ellipse.
[0071] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of correlating the first geometrical shape and the first region.
[0072] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the first geometrical shape and the first region.
[0073] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of the interior of the first geometrical shape including the first region.
[0074] According to various embodiments, the second geometrical shape may include or may be an ellipse.
[0075] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of correlating the second geometrical shape and the second region.
[0076] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the second geometrical shape and the second region.
[0077] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of the interior of the second geometrical shape including the second region.
[0078] According to various embodiments, determining a condition of the person may include: determining a third geometrical shape based on the first geometrical shape and based on the second geometrical shape.
[0079] According to various embodiments, the third geometrical shape may include or may be an ellipse.
[0080] According to various embodiments, determining the third geometrical shape may include: determining at least one geometrical parameter of the first geometrical shape; determining at least one geometrical parameter of the second geometrical shape; and determining the third geometrical shape based on the at least one geometrical parameter of the first geometrical shape and on the at least one geometrical parameter of the second geometrical shape.
[0081] According to various embodiments, the at least one geometrical parameter of the first geometrical shape may include at least one of a center point of the first geometrical shape, an orientation of the first geometrical shape, a horizontal size of the first geometrical shape, and a vertical size of the first geometrical shape.
[0082] According to various embodiments, the at least one geometrical parameter of the second geometrical shape may include at least one of a center point of the second geometrical shape, an orientation of the second geometrical shape, a horizontal size of the second geometrical shape, and a vertical size of the second geometrical shape.
[0083] According to various embodiments, the first geometrical shape may include or may be a first ellipse, and the at least one geometrical parameter of the first ellipse may include at least one of a center point of the first ellipse, an orientation of the first ellipse, a semi-major axis of the first ellipse, and a semi-minor axis of the first ellipse.
[0084] According to various embodiments, the second geometrical shape may include or may be a second ellipse, and the at least one geometrical parameter of the second ellipse may include at least one of a center point of the second ellipse, an orientation of the second ellipse, a semi-major axis of the second ellipse, and a semi-minor axis of the second ellipse.
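To make the ellipse parameters of [0083] and [0084] concrete, the sketch below reads them from an OpenCV ellipse fit and forms a third ellipse from parameters of both. The particular combination rule (extent from the first ellipse, orientation from the second) is purely an assumed example, since the embodiments do not fix one; the intuition would be that the second, border-trimmed region tracks the body trunk rather than the limbs.

```python
import cv2

def ellipse_parameters(mask):
    """Fit an ellipse to the largest contour of a binary mask and return
    its geometrical parameters: center point, orientation, semi-axes."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    semi_minor, semi_major = sorted((d1 / 2.0, d2 / 2.0))
    return {"center": (cx, cy), "semi_major": semi_major,
            "semi_minor": semi_minor, "angle_deg": angle}

def third_ellipse(first, second):
    """One assumed combination of the two parameter sets."""
    return {"center": first["center"], "semi_major": first["semi_major"],
            "semi_minor": first["semi_minor"], "angle_deg": second["angle_deg"]}
```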
[0085] According to various embodiments, the condition of the person may include at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[0086] FIG. 3 shows a flow diagram 300 illustrating a condition detection method in accordance with an embodiment. In 302, an image including (or showing) a person may be acquired. In 304, a region of the image may be detected, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person. In 306, a sampling area template may be provided. In 308, a plurality of sampling areas of the image may be provided, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template. In 310, for each of the sampling areas, the area (or the size of the area) of the region in the sampling area may be determined. In 312, a condition of the person on the image may be determined based on the determined area.
[0087] According to various embodiments, the area may be determined by counting the pixels inside the area.
[0088] According to various embodiments, the image may include or may be a digital color image or a digital black-and-white image.
[0089] According to various embodiments, detecting the region of the image may include extracting foreground with a background subtracting method.
[0090] According to various embodiments, detecting the region of the image may include image segmentation.
[0091] According to various embodiments, image segmentation may include a region growing method.
[0092] According to various embodiments, image segmentation may include an edge detection method.
[0093] According to various embodiments, image segmentation may include a level set method.
[0094] According to various embodiments, the condition detection method may further include determining a geometrical shape that fits the region according to a pre-determined matching criterion, and providing the sampling template may include providing the sampling template based on the determined geometrical shape.
[0095] According to various embodiments, the geometrical shape may include or may be an ellipse.
[0096] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of correlating the geometrical shape and the region.
[0097] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the geometrical shape and the region.
[0098] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of the interior of the geometrical shape including the region.
[0099] According to various embodiments, each of the sampling areas of the plurality of sampling areas may be congruent to the sampling area template.
[00100] According to various embodiments, each of the sampling areas of the plurality of sampling areas may be rotated by a pre-determined angle with respect to the sampling area template.
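Under the reading of [0099] and [00100], the sampling areas are congruent copies of the template, one per orientation, and each area is obtained by counting pixels as in [0087]. The sketch below assumes a rectangular strip as the template, rotated about a given center (which could come from the fitted geometrical shape of [0094]) over a boolean foreground mask; the template size and the angle step are hypothetical.

```python
import cv2
import numpy as np

def directional_areas(region, center, size=(200, 40), angle_step_deg=15):
    """Count region pixels inside congruent sampling areas, one per
    orientation of a rectangular template rotated about `center`."""
    h, w = region.shape
    areas = []
    for angle in np.arange(0.0, 180.0, angle_step_deg):
        # Build the sampling area as a filled rotated-rectangle mask.
        template = np.zeros((h, w), dtype=np.uint8)
        corners = cv2.boxPoints((center, size, float(angle))).astype(np.int32)
        cv2.fillConvexPoly(template, corners, 1)
        # Area of the region in this sampling area = pixel count.
        areas.append(int(np.count_nonzero(region & (template > 0))))
    return areas
```

Normalizing the per-orientation counts by their sum would yield a directional distribution histogram of the kind illustrated in FIGS. 24A to 25B.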
[00101] According to various embodiments, the condition of the person may include at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[00102] FIG. 4 shows a flow diagram 400 illustrating a condition detection method in accordance with an embodiment. In 402, a two-dimensional image including (or showing) a person may be acquired. In 404, a three-dimensional position of a pre-determined feature of the person may be computed from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value. In 406, it may be determined whether the computed three-dimensional position fulfills a pre-determined criterion. In 408, a first region of the two-dimensional image may be detected, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the two-dimensional image including the person to the area of the two-dimensional image not including the person. In 410, a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold may be removed from the first region, to obtain a second region. In 412, a first geometrical shape that fits the first region according to a pre-determined first matching criterion may be determined. In 414, a second geometrical shape that fits the second region according to a pre-determined second matching criterion may be determined. In 416, a sampling area template may be provided. In 418, a plurality of sampling areas of the two-dimensional image may be provided, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template. In 420, for each of the sampling areas, the area of the first region in the sampling area may be determined. In 422, a condition of the person on the two-dimensional image may be determined based on whether the pre-determined criterion is fulfilled, based on the first geometrical shape, based on the second geometrical shape, and based on the determined area.
[00103] According to various embodiments, a computer program configured to, when run on a computer, execute one of the methods explained above and below, may be provided.
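As a closing illustration of the combined method of FIG. 4, a decision stage fusing the three cues might look like the sketch below. The thresholds and the conjunctive rule are illustrative assumptions rather than a prescribed rule, and the ellipse argument reuses the parameter dictionary of the earlier ellipse sketch.

```python
def decide_condition(head_criterion_fulfilled, second_ellipse, directional_areas_list):
    """Fuse the cues of 406, 414 and 420: the 3-D position criterion, the
    fitted trunk shape, and the per-orientation areas."""
    # Shape cue: trunk ellipse tilted far from vertical (illustrative;
    # the OpenCV angle convention is glossed over here).
    trunk_tilted = abs(second_ellipse["angle_deg"] - 90.0) < 30.0
    # Directional cue: foreground mass concentrated along one direction,
    # suggesting an elongated (lying) pose.
    peak = max(directional_areas_list)
    concentration = peak / max(1, sum(directional_areas_list))
    elongated = concentration > 0.3
    if head_criterion_fulfilled and trunk_tilted and elongated:
        return "fallen"
    return "not fallen"
```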
[00104] FIG. 5 shows a condition detection device 500 in accordance with an embodiment. The condition detection device 500 may include a two-dimensional image acquirer 502 configured to acquire a two-dimensional image including (or showing) a person; a computing circuit 504 configured to compute a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; a criterion determiner 506 configured to determine whether the computed three-dimensional position fulfills a pre-determined criterion; and a condition determiner 508 configured to determine a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled. The two-dimensional image acquirer 502, the computing circuit 504, the criterion determiner 506, and the condition determiner 508 may be coupled with each other, e.g. via an electrical connection 510 such as e.g. a cable or a computer bus or via any other suitable electrical connection to exchange electrical signals.
[00105] According to various embodiments, the two-dimensional image may include or may be a digital color image and/or a digital black-and-white image.
[00106] According to various embodiments, the pre-determined feature of the person may include at least one of the position of the head of the person, the position of at least one foot of the person, and the position of at least one hand of the person.
[00107] According to various embodiments, the pre-determined value may include or may be the height of the person when standing and/or the height of the person when lying on the floor. For example, the pre-determined value may be in the range of 1.5m to 2m, or the pre-determined value may be in the range of 0m to 0.2m.
[00108] According to various embodiments, the computing circuit may further be configured to compute the three-dimensional position of the pre-determined feature of the person based on solving an optimization problem.
[00109] According to various embodiments, solving the optimization problem may include minimizing the difference between a value of a pixel of the two-dimensional image and a projected three-dimensional position.
[00110] According to various embodiments, computing the three-dimensional position of the pre-determined feature of the person may include evaluating a calibration model.
[00111] According to various embodiments, the pre-determined criterion may include or may be a criterion based on a motion model.
[00112] According to various embodiments, the condition of the person may include at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[00113] FIG. 6 shows a condition detection device 600 in accordance with an embodiment. The condition detection device 600 may include: an image acquirer 602 configured to acquire an image including (or showing) a person; a detector 604 configured to detect a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; a remover 606 configured to remove from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; a first geometrical shape determiner 608 configured to determine a first geometrical shape that fits the first region according to a pre-determined first matching criterion; a second geometrical shape determiner 610 configured to determine a second geometrical shape that fits the second region according to a pre-determined second matching criterion; and a condition determiner 612 configured to determine a condition of the person on the image based on the first geometrical shape and based on the second geometrical shape. The image acquirer 602, the detector 604, the remover 606, the first geometrical shape determiner 608, the second geometrical shape determiner 610, and the condition determiner 612 may be coupled with each other, e.g. via an electrical connection 614 such as e.g. a cable or a computer bus or via any other suitable electrical connection to exchange electrical signals.
[00114] According to various embodiments, the image may include or may be a digital color image and/or a digital black-and-white image.
[00115] According to various embodiments, the detector 604 may further be configured to extract a foreground with a background subtracting method.
[00116] According to various embodiments, the detector 604 may further be configured to perform image segmentation.
[00117] According to various embodiments, image segmentation may include a region growing method.
[00118] According to various embodiments, image segmentation may include an edge detection method.
[00119] According to various embodiments, image segmentation may include a level set method.
[00120] According to various embodiments, the remover 606 may further be configured to: perform a distance transform of the first region; and remove from the first region the sub-region of the first region with a value of the distance transformed image below a pre-determined removal threshold.
[00121] According to various embodiments, the remover 606 may further be configured to determine a maximum value in the distance transformed first region; and the pre-determined removal threshold may be based on the maximum value in the distance transformed first region.
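For illustration, the removal by distance transform may be sketched as follows in Python with OpenCV; the function name and the 0.4 fraction are assumptions for this example (the fraction echoes the 0.4 × maximum-distance binarisation used for SKEF further below), not values prescribed above.

    import cv2
    import numpy as np

    def remove_border_subregion(first_region, fraction=0.4):
        """first_region: uint8 mask, 255 inside the first region, 0 outside."""
        dist = cv2.distanceTransform(first_region, cv2.DIST_L2, 5)
        removal_threshold = fraction * dist.max()  # based on the maximum value
        # keep only pixels sufficiently far from the border of the first region
        second_region = np.where(dist >= removal_threshold, 255, 0).astype(np.uint8)
        return second_region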
[00122] According to various embodiments, the first geometrical shape may include or may be an ellipse.
[00123] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of correlating the first geometrical shape and the first region.
[00124] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the first geometrical shape and the first region.
[00125] According to various embodiments, the pre-determined first matching criterion may include or may be a criterion of the interior of the first geometrical shape including the first region.
[00126] According to various embodiments, the second geometrical shape may include or may be an ellipse.
[00127] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of correlating the second geometrical shape and the second region.
[00128] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the second geometrical shape and the second region.
[00129] According to various embodiments, the pre-determined second matching criterion may include or may be a criterion of the interior of the second geometrical shape including the second region.
[00130] According to various embodiments, the condition determiner 612 may further be configured to determine a third geometrical shape based on the first geometrical shape and based on the second geometrical shape.
[00131] According to various embodiments, the third geometrical shape may include or may be an ellipse.
[00132] According to various embodiments, the condition determiner 612 may further be configured to: determine at least one geometrical parameter of the first geometrical shape; determine at least one geometrical parameter of the second geometrical shape; and determine the third geometrical shape based on the at least one geometrical parameter of the first geometrical shape and on the at least one geometrical parameter of the second geometrical shape.
[00133] According to various embodiments, the at least one geometrical parameter of the first geometrical shape may include at least one of a center point of the first geometrical shape; an orientation of the first geometrical shape; a horizontal size of the first geometrical shape; and a vertical size of the first geometrical shape.
[00134] According to various embodiments, the at least one geometrical parameter of the second geometrical shape may include at least one of a center point of the second geometrical shape; an orientation of the second geometrical shape; a horizontal size of the second geometrical shape; and a vertical size of the second geometrical shape.
[00135] According to various embodiments, the first geometrical shape may include or may be a first ellipse; and the at least one geometrical parameter of the first ellipse may include at least one of: a center point of the first ellipse; an orientation of the first ellipse; a semi-major axis of the first ellipse; and a semi-minor axis of the first ellipse.
[00136] According to various embodiments, the second geometrical shape may include or may be a second ellipse; and the at least one geometrical parameter of the second ellipse may include at least one of: a center point of the second ellipse; an orientation of the second ellipse; a semi-major axis of the second ellipse; and a semi-minor axis of the second ellipse.
[00137] According to various embodiments, the condition of the person may include or may be at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[00138] FIG. 7 shows a condition detection device 700 in accordance with an embodiment. The condition detection device 700 may include: an image acquirer 702 configured to acquire an image including (or showing) a person; a region detector 704 configured to detect a region of the image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; a sampling area template provider 706 configured to provide a sampling area template; a sampling areas provider 708 configured to provide a plurality of sampling areas of the image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; an area determiner 710 configured to determine, for each of the sampling areas, the area (or the size of the area) of the region in the sampling area; and a condition determiner 712 configured to determine a condition of the person on the image based on the determined area. The image acquirer 702, the region detector 704, the sampling area template provider 706, the sampling areas provider 708, the area determiner 710, and the condition determiner 712 may be coupled with each other, e.g. via an electrical connection 714 such as e.g. a cable or a computer bus or via any other suitable electrical connection to exchange electrical signals.
[00139] According to various embodiments, the area determiner 710 may further be configured to determine the area by counting the pixels inside the area.
[00140] According to various embodiments, the image may include or may be a digital color image and/or a digital black-and-white image.
[00141] According to various embodiments, the region detector may be configured to detect the region of the image based on extracting a foreground with a background subtracting method.
[00142] According to various embodiments, detecting the region of the image may include image segmentation.
[00143] According to various embodiments, image segmentation may include a region growing method.
[00144] According to various embodiments, image segmentation may include an edge detection method.
[00145] According to various embodiments, image segmentation may include a level set method.
[00146] According to various embodiments, the condition detection device 700 may further include a geometrical shape determiner (not shown) configured to determine a geometrical shape that fits the region according to a pre-determined matching criterion; and the sampling area template provider 706 may further be configured to provide the sampling area template based on the determined geometrical shape.
[00147] According to various embodiments, the geometrical shape may include or may be an ellipse.
[00148] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of correlating the geometrical shape and the region.
[00149] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of minimizing the area of the difference between the interior of the geometrical shape and the region.
[00150] According to various embodiments, the pre-determined matching criterion may include or may be a criterion of the interior of the geometrical shape including the region.
[00151] According to various embodiments, each of the sampling areas of the plurality of sampling areas may be congruent to the sampling area template.
[00152] According to various embodiments, each of the sampling areas of the plurality of sampling areas may be rotated by a pre-determined angle with respect to the sampling area template.
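For illustration, the evaluation of the sampling areas may be sketched as follows: congruent copies of the template are placed at several pre-determined orientations, and the area of the detected region inside each copy is found by counting pixels, as described above. All names in this sketch are illustrative, not taken from the embodiments.

    import cv2
    import numpy as np

    def sampled_region_areas(region, template, center, angles_deg):
        """region, template: uint8 masks of the same size; center: (x, y)."""
        h, w = template.shape
        areas = []
        for angle in angles_deg:
            rot = cv2.getRotationMatrix2D(center, angle, 1.0)  # scale 1: congruent
            placed = cv2.warpAffine(template, rot, (w, h), flags=cv2.INTER_NEAREST)
            # area of the region inside this sampling area, by pixel counting
            areas.append(int(np.count_nonzero(region & placed)))
        return areas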
[00153] According to various embodiments, the condition of the person may include or may be at least one of a condition of whether the person has fallen, of whether the person is standing, or of whether the person is sitting.
[00154] FIG. 8 shows a condition detection device 800 in accordance with an embodiment. The condition detection device 800 may include: a two-dimensional image acquirer 802 configured to acquire a two-dimensional image including (or showing) a person; a computing circuit 804 configured to compute a three-dimensional position of a pre-determined feature of the person from the two-dimensional image based on the assumption that a pre-determined component of the three-dimensional position has a pre-determined value; a criterion determiner 806 configured to determine whether the computed three-dimensional position fulfills a pre-determined criterion; a first region detector 808 configured to detect a first region of the two-dimensional image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the two-dimensional image including the person to the area of the two-dimensional image not including the person; a remover 810 configured to remove from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; a first geometrical shape determiner 812 configured to determine a first geometrical shape that fits the first region according to a pre-determined first matching criterion; a second geometrical shape determiner 814 configured to determine a second geometrical shape that fits the second region according to a pre-determined second matching criterion; a sampling area template provider 816 configured to provide a sampling area template; a sampling areas provider 818 configured to provide a plurality of sampling areas of the two-dimensional image, wherein each sampling area may correspond to the sampling area template, and wherein each sampling area may correspond to an orientation of the sampling area template; an area determiner 820 configured to determine, for each of the sampling areas, the area of the first region in the sampling area; and a condition determiner 822 configured to determine a condition of the person on the two-dimensional image based on whether the pre-determined criterion is fulfilled, based on the first geometrical shape, based on the second geometrical shape, and based on the determined area. The two-dimensional image acquirer 802, the computing circuit 804, the criterion determiner 806, the first region detector 808, the remover 810, the first geometrical shape determiner 812, the second geometrical shape determiner 814, the sampling area template provider 816, the sampling areas provider 818, the area determiner 820, and the condition determiner 822 may be coupled with each other, e.g. via an electrical connection 824 such as e.g. a cable or a computer bus or via any other suitable electrical connection to exchange electrical signals.
[00155] According to various embodiments, various combinations of the devices and methods explained above may be used. For example, the output (or result) of one of these devices or methods may be used as the input for another one of these devices or methods, as will be explained in more detail below.
[00156] According to various embodiments, systems and methods of fall detection for the elderly and patients may be provided.
[00157] According to various embodiments, systems may be provided including the following: a data acquisition circuit, a feature extraction circuit, a fall assessment circuit, and a fall alert circuit, as will be explained in more detail below. According to various embodiments, methods for extracting the features of body shape and methods for assessing a fall from walking or standing on the floor in normal lighting may be provided. According to various embodiments, methods for fall detection from or around a bed at night may be provided. According to various embodiments, for body shape feature extraction, the body trunk may be obtained via skeleton analysis. According to various embodiments, the body shape features, which may include the body trunk ellipse, the head top point, and the foot bottom point, may be obtained. According to various embodiments, for detecting a fall from walking and standing, a quasi-3D (shortened as Q-3D) position calculation method may be provided, which may calculate Q-3D positions from 2D positions of the human head. According to various embodiments, fall detection methods using the calculated Q-3D positions of the head combined with the 2D body shape may be provided. According to various embodiments, direct 3D head tracking, which may be a hard task, may be avoided, and events, for example fall events, may be detected and alerted accurately. According to various embodiments, to realize the Q-3D position calculation, a 2D-3D lookup table may be provided and used.
[00158] According to various embodiments, devices, for example systems, and methods for fall detection and alert, for example for healthcare for the elderly and patients, that create an alert when a fall occurs, may be provided.
[00159] According to various embodiments, devices and methods may be provided for solving the fall detection problems of various embodiments, and thus devices and methods for fall detection based on both cameras and sensors other than cameras may be provided. According to various embodiments, devices, for example systems, and methods for fall detection for improving care for the elderly and patients may be provided. According to various embodiments, a system may include the following: a data acquisition circuit, a feature extraction circuit, a fall assessment circuit, and a fall alert circuit, as will be explained in more detail below. According to various embodiments, a system may be used in an embodiment and its components may be customized according to the concrete embodiment. For example, a data acquisition component may include only one electrical sensor, or multiple sensors and multiple cameras, according to the coverage and the kind of fall that is to be detected. According to various embodiments, fall detection devices and methods may be provided that combine the strengths of both cameras and sensors other than cameras. According to various embodiments, devices and methods for detecting and alerting fall occurrences immediately may be provided, since the immediate treatment of a person injured by a fall is critical.
[00160] For example, a fall, which may be common among the elderly and patients, may cause serious consequences, for example serious injury, and quick responses may be important; for example, victims of a fall may need immediate treatment to minimize the injury. For example, among the elderly and patients, falls may occur frequently due to various factors. According to various embodiments, fall detection may be provided for detection of a fall from a bed, a chair, or walking, with single or multiple persons and so on, at a hospital, a nursery, or at home. According to various embodiments, success in detecting a fall may provide quick responses to the injuries in order to give treatment as soon as possible and reduce the seriousness of the consequences.
[00161] According to various embodiments, devices and methods for fall detection and prevention systems for various embodiments with various fall detection methods and devices may be provided. According to various embodiments, robot-based fall detection and prevention may be provided. According to various embodiments, a direct vision-based approach may be provided.
[00162] According to various embodiments, quasi-3D head position acquisition, a quasi-3D fall detection method based on the 2D torso ellipse, 2D-3D lookup table creation, and torso ellipse extraction may be provided, as will be explained in more detail below.
[00163] According to various embodiments, fall detection may be performed based on wearable devices, on cameras, or on ambience devices. According to various embodiments, wearable devices may include posture devices and motion devices.
According to various embodiments, camera-based detection may include body shape change analysis, inactivity detection, and 3D head motion analysis. According to various embodiments, ambience devices may include presence devices and posture devices.
[00164] In the following table, properties of various methods are described, wherein “intru” may stand for intrusive, “acc'y” for accuracy, and “R/V/V” for remote and visual verification; the remaining column of the original table is not recoverable and is marked “–”.

Category | Device/method | Cost | Intru | Acc'y | – | Setup | R/V/V
wearable device | posture device | cheap | yes | no | yes | easy | no
wearable device | motion device | cheap | yes | no | yes | easy | no
ambience device | presence device | cheap/medium | no | no | yes | easy | no
ambience device | posture device | cheap/medium | no | no | yes | easy | no
camera-based | inactivity detection | medium | no | depend | yes | medium | yes
camera-based | shape change analysis | medium | no | depend | depend | depend | yes
camera-based | 3D head motion analysis | – | – | depend | depend | – | yes

[00165] According to various embodiments, quasi-3D location may be used to perform shape state classification, quasi-3D location may be used to calculate measurements for fall detection, and fall assessment may be performed based on the detected shape state and the calculated measurements, as will be explained in more detail below.
[00166] According to various embodiments, for various mock-up videos, detection rates of more than 90% may be provided, with at most one false alarm a day and with an easy setup. According to various embodiments, real-time processing may be provided on a 2 GHz PC with 1 GB of memory.
[00167] According to various embodiments, an effective method of quasi-3D fall detection may be provided, as will be explained below.
[00168] According to various embodiments, methods and devices for robust body detection for fast fall detection may be provided.
[00169] According to various embodiments, a robust quasi-3D fall detection method based on 2D head position and body location and orientation may be provided, as will be explained in more detail below. According to various embodiments, a fall detection design for falls from or around a bed at night (for example sensor and camera and lighting control) may be provided, as will be explained in more detail below. According to various embodiments, a fast way to prepare a 2D-3D lookup table may be provided (or acquired), as will be explained in more detail below.
[00170] According to various embodiments, synergy of sensor, camera, and lighting control for detecting falls from or around a bed at night may be provided.
[00171] According to various embodiments, devices and methods may be provided that may be used, for example in a fall detection system and activity monitoring, at home, in a nursery, and/or in a hospital, for example for detecting falls of the elderly and patients.
[00172] According to various embodiments, the following techniques may be provided, as will be explained in more detail below: body trunk based feature extraction, a 2D-3D lookup method, a fall detection method, and semi-automatic lookup table creation.
[00173] According to various embodiments, condition detection (for example fall detection) may be performed over time intervals according to sliding time windows.
[00174] FIG. 9 shows an illustration 900 of sliding windows for condition detection (for example for fall detection) in accordance with an embodiment, as will be explained in more detail below. A time axis 902 is shown which may include a plurality of time intervals, the boundaries of which may be indicated by horizontal lines. A first time window 904 and a second time window 906 are shown. Further time windows may be present as indicated by dots 908; for example, a further time window 910 may be present.
[00175] According to various embodiments, a quasi-3D (Q-3D) position calculation method may be provided.
[00176] FIG. 10 shows an illustration 1000 of a position detection method in accordance with an embodiment.
[00177] According to various embodiments, assuming that a projection matrix P of a camera has been obtained, the image point of any 3D point may be computed when a fixed camera is used. According to various embodiments, for the explanation of the creation of the lookup table, a coordinate system 1004 may be set as follows: let the floor be the XY plane 1002 and the upward direction from the floor the Z axis. The real-world point w 1012 may be represented by a homogeneous 4-vector (X, Y, Z, 1)^T, m 1010 may be the image point in the image plane 1006 (which may be acquired by a camera C 1008) represented by a homogeneous 3-vector (u, v, 1)^T, and P may be the 3x4 camera projection matrix. Then, for a basic pinhole camera, the mapping between the 3D world and the 2D image may be written compactly as

m ≅ P w,    (1)

where ≅ may mean that the two sides can differ by an arbitrary scale (or scaling) factor.
[00178] According to various embodiments, it may be assumed that a 3D point (X, Y, Z)^T is given; then (U, V, W)^T may be obtained if (X, Y, Z, 1)^T is put into the right side of equation (1). Then, an image point may be obtained as follows:

u = U/W and v = V/W.    (2)
[00179] In other words, there may be the following two functions:

u = f(X, Y, Z) and v = g(X, Y, Z).    (3)
[00180] Let E be the set of all possible (X, Y). According to various embodiments, the corresponding 3D point of an image point (r, c) may be looked up, assuming that the value of Z (= Z0) is known, as follows:

ρ(X, Y, r, c) = (r − f(X, Y, Z0))² + (c − g(X, Y, Z0))²,
(X, Y, Z0) = arg min {ρ(X, Y, r, c) | (X, Y) ∈ E},    (4)
[00181] where E may be the set of all (X, Y) inside the camera coverage. Thus, according to various embodiments, a table may be obtained that may be used to look up (X, Y) from (u, v), provided (in other words: under the assumption) that Z is at Z0. According to various embodiments, for any fixed Z0, X and Y may be computed; for example, (X, Y) may be computed as the argument of the minimization.
[00182] The acquired 3D position (X, Y, Z0)^T may be called a quasi-3D position because it may differ from the real 3D position of the image point. Though the Q-3D position may not be the real 3D position of the concerned image point, it may be useful for fall detection.
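For illustration, the construction of the 2D-3D lookup table of equation (4) may be sketched as follows in Python. This is a brute-force sketch computed once offline; the function and variable names are illustrative, and xy_candidates stands in for the set E of floor points inside the camera coverage.

    import numpy as np

    def build_q3d_lookup(P, z0, xy_candidates, width, height):
        """For a fixed Z = z0 (e.g. standing head height), assign to every
        image pixel the candidate floor point (X, Y) whose projection lands
        nearest to that pixel (equation (4))."""
        best = np.full((height, width), np.inf)
        table = np.zeros((height, width, 2))
        vv, uu = np.mgrid[0:height, 0:width]          # pixel grid
        for X, Y in xy_candidates:
            U, V, W = P @ np.array([X, Y, z0, 1.0])   # m ~ P w, equation (1)
            u, v = U / W, V / W                       # equation (2)
            rho = (uu - u) ** 2 + (vv - v) ** 2       # rho of equation (4)
            closer = rho < best
            best[closer] = rho[closer]
            table[closer] = (X, Y)
        return table   # table[v, u] = quasi-3D (X, Y) at height z0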
[00183] The line 1014 between the camera C 1008 and the real-world point 1012 may intersect the image plane 1006 at the image point m 1010.
[00184] According to various embodiments, a method may be provided that may judge whether a time window, as illustrated above, of surveillance video includes a fall, based on detected micro-actions, as will be explained below, and on measures calculated in this window. According to various embodiments, the micro-action detections and measure calculations may be based on the Q-3D positions of the head as explained above and on the detected body shape in the 2D image as will be explained below. According to various embodiments, this method may tolerate the inaccuracy of head location and micro-action detection to some degree. According to various embodiments, micro-action detection and fall detection methods may be provided.
[00185] For example, a natural human fall may last a pre-determined time duration (statistics may show that it may last at most one and a half seconds, normally about one second). In other words, a fall may occur within at least one such time window. Thus, fall detection may be transformed into judging whether a time window encloses a fall. A time window may be a set of consecutive frames, as has been explained above. According to various embodiments, the window may be slid by one frame each time, so that each frame may correspond to one time window as illustrated above.
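For illustration, the sliding windows may be sketched as follows; the window size of 40 frames is an assumption for this example, chosen so that a window covers the at-most-1.5 s duration of a fall at 25 frames per second, and sliding by one frame ensures every fall is enclosed by at least one window.

    def sliding_windows(frames, window_size=40):
        # yield overlapping windows slid by one frame
        for start in range(len(frames) - window_size + 1):
            yield frames[start:start + window_size]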
[00186] According to various embodiments, devices and methods for body shape feature extraction may be provided to extract appropriate features through analyzing the body shape, to provide the input of the fall assessment method. According to various embodiments, the features to be obtained may include or may be the ellipse of the body trunk, the head top point, and the foot bottom point. According to various embodiments, pre-determined change patterns of these values may indicate fall accidents. According to various embodiments, a difference from commonly used body shape feature extraction methods may be that the arms may be discarded and that features may be extracted from the remaining body trunk. For example, arms may mislead the feature extraction procedure. According to various embodiments, two techniques may be provided for obtaining the accurate features: skeleton-based ellipse fitting (SKEF) and top point from ellipse (TopE), as will be explained in more detail below. According to various embodiments, SKEF may use skeleton pixels to constrain an ellipse fitting to the human body trunk, and TopE may be used to estimate the head and feet positions given the fitted ellipse.
[00187] According to various embodiments, SKEF may include skeletonisation, binarisation (for example by 0.4 × the maximum distance), and getting a geometrical shape (for example an ellipse) covering the skeleton to constrain commonly used ellipse fitting, as will be explained in more detail below.
[00188] According to various embodiments, TopE may include head direction estimation and top point search, as will be explained in more detail below.
[00189] According to various embodiments, SKEF and TopE may be simple, robust, and accurate in the estimation of body and head position. According to various embodiments, SKEF and TopE may provide features that may provide good performance for fall detection.
[00190] According to various embodiments, devices and methods for robust and accurate estimation of body width and height, for robust and accurate estimation of the head in 2D, and simple and fast techniques may be provided.
[00191] FIG. 11 shows examples 1100 of results of position detection methods in accordance with an embodiment, and a comparison of a body shape ellipse and a body trunk ellipse. In image 1102, which may be an input image, a person 1104 is included (or shown). In image 1114, an ellipse 1116 fitted to the person 1104 is shown. The ellipse 1116 may be referred to as a body shape ellipse. For illustration purposes, a line 1118 indicating the longitudinal axis of the person 1104 and a circle 1120 indicating the head of the person 1104 are shown. In image 1122, an ellipse 1124 fitted to the trunk, as will be explained in more detail below, of the person 1104 is shown. The ellipse 1124 may be referred to as a body trunk ellipse. For illustration purposes, a line 1126 indicating the longitudinal axis of the person 1104 and a circle 1128 indicating the head of the person 1104 are shown. In image 1106, a foreground image 1108 of the person 1104 and lines 1110 and 1112 indicating the posture of the person are shown. According to various embodiments, the first line 1110 and the second line 1112 may be lines of the person's skeleton. The first line 1110 may be the central line for the arms of the skeleton. The second line 1112 may be the central line for the main body of the skeleton.
[00192] According to various embodiments, a foreground image may be a region of the (overall) image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the (overall) image including the person to the area of the (overall) image not including the person.
[00193] FIG. 12 shows a block diagram 1200 of a condition detection system in accordance with an embodiment. For example, the detection system may be a fall detection and alert system for the elderly and patients. A data acquisition circuit 1210 may receive data from a first sensor 1202, from a first camera 1204, from a second sensor 1206, and from a second camera 1208. According to various embodiments, the first sensor and the second sensor may be sensors different from a camera. The data acquisition circuit 1210 may acquire signals from the first sensor 1202 and the second sensor 1206 and image sequences from the first camera 1204 and the second camera 1208, and may provide information to a feature extraction circuit 1212. The feature extraction circuit 1212 may extract features from signals and images and may provide information to a fall detection circuit 1214. The fall detection circuit 1214 may identify micro-actions and measurements related to a fall and then may use the existences of micro-actions and the calculated measurements to infer a fall occurrence, and may alert devices 1220, for example devices held by caregivers, as indicated by arrow 1218. The alert may be communicated by wire and/or by wireless communication 1216.
[00194] FIG. 13 shows a block diagram of a condition detection system 1300 in accordance with an embodiment. Various parts of the condition detection system 1300 may be similar to the condition detection system 1200 as described with reference to FIG. 12; the same reference signs may be used and duplicate description may be omitted.
[00195] For example, the block diagram may be a block diagram of a fall detection and alert system for the elderly and patients for the scenario that a person falls from or around a bed at night. The system 1300 may include both prevention and detection functions for falls. Pressure sensors, for example a pressure sensor 1306, on the bed may identify person actions related to the bed, such as entering into and leaving from the bed (which may be performed in a person-in-bed-detection circuit 1304), and may provide data for evaluating sleeping quality. An abnormal turnover of the sleeper may trigger an alert to a nurse. When the pressure sensors detect a sleeper leaving the bed, the lamps may be automatically turned on (as indicated by block 1302) to reduce the risk of a fall because of fumbling for a switch and bad lighting conditions; a vision-based fall detection function may also be automatically turned on to detect a fall.
[00196] FIG. 14 shows a block diagram 1400 illustrating a method for creating a lookup table in accordance with an embodiment. For example, the method may be a procedure for creating a 2D-3D lookup table using a camera calibration approach, which may be used to get (partially correct) quasi-3D (shortened as Q-3D) positions of the head from the corresponding 2D image locations. According to various embodiments, “partially correct” positions may be understood as positions that are correct under the assumption that the person is standing; according to various embodiments, this assumption may not hold for all the frames, and the position may not be absolutely correct but may be a good approximation of the correct position. According to various embodiments, a 2D-3D lookup table may be used. With the 2D-3D lookup table, the Q-3D positions of the head may be acquired from the 2D head locations in images. In this way of acquiring the Q-3D position, unreliable yet time-consuming 3D head tracking may be avoided.
[00197] According to various embodiments, in 1402, marks (or markers) may be placed in the scene, and 3D positions of the marks may be obtained. In 1404, a video may be recorded by the fixed camera and 2D locations of the marks may be obtained. In 1406, correspondences may be prepared, and camera calibration may be performed. In 1408, a 2D-3D lookup table may be computed.
[00198] According to various embodiments, a camera may be set up and markers may be prepared, then markers may be put in the scene, the coordinates of the markers may be measured in the scene, and a video may be recorded; then the image coordinates of the markers and the corresponding pairs of markers may be obtained, and then camera calibration may be performed and the 2D-3D lookup table may be produced.
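The description does not fix a particular calibration algorithm; for illustration, a standard direct linear transform (DLT) is one way to estimate the 3x4 projection matrix P from the marker correspondences (at least six non-coplanar points, matching the marker count mentioned below). A hedged sketch; names are illustrative.

    import numpy as np

    def estimate_projection_matrix(points_3d, points_2d):
        """points_3d: iterable of (X, Y, Z); points_2d: iterable of (u, v)."""
        A = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            w = [X, Y, Z, 1.0]
            # two linear constraints per correspondence from m ~ P w
            A.append([0.0] * 4 + [-c for c in w] + [v * c for c in w])
            A.append(w + [0.0] * 4 + [-u * c for c in w])
        _, _, Vt = np.linalg.svd(np.asarray(A))
        P = Vt[-1].reshape(3, 4)     # right null-space vector, up to scale
        return P / np.linalg.norm(P)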
[00199] FIG. 15 shows an illustration 1500 of a method of acquiring corresponding pairs for camera calibration in accordance with an embodiment. According to various embodiments, a fixed marker 1502 on top of a stand and a plumb 1504 fixed on the top of the marker may be provided. According to various embodiments, corresponding pairs of a point in the real world and its projected point in image space may be acquired for camera calibration. According to various embodiments, a ball in red color and a plumb may be used. The ball and plumb may form a vertical line segment in the scene and in the image. Thus, the ball and the plumb may have the same (X, Y) in the scene if the XY-plane is set on the horizontal floor. According to various embodiments, the (X, Y) of the plumb may be measured. According to various embodiments, enough non-coplanar markers in the scene may be acquired by putting this tool at several places in the scene. According to various embodiments, more than six markers in the scene may be desired.
[00200] FIG. 16 shows a flowchart 1600 of a condition detection method in accordance with an embodiment, for example a flowchart of the video-based fall detection method for a single fixed camera 1602. In 1604, a video may be input. In 1606, to acquire the body shape, a background model may be maintained and background subtraction may be used to obtain the foreground. Thus, according to various embodiments, the body shape may be identified and an ellipse may be found that tightly encloses the body shape. According to various embodiments, next the head location may be searched in an area computed according to the ellipse, and then the body ellipse and the 2D head location may be used to detect a fall event in 1612.
[00201] According to various embodiments, in 1608, video feature extraction may be performed. In 1610, fall assessment may be performed.
[00202] FIG. 17 shows a flow diagram 1700 illustrating a body shape feature extraction method in accordance with an embodiment. In 1704, a video may be input from a camera 1702. In 1706, background maintenance may be performed. In 1708, foreground segmentation may be performed. In 1710, body detection and tracking may be performed. In 1712, body skeleton extraction may be performed. In 1714, body trunk extraction may be performed. In 1716, a body feature and body ellipse may be provided. In 1718, a head feature and head ellipse may be provided.
[00203] According to various embodiments, the flow diagram 1700 may be a flowchart of body shape feature extraction based on body trunk fitting, for example body trunk ellipse fitting. The input may be an image sequence. According to various embodiments, background maintenance and foreground segmentation may be performed and then the skeleton of the body shape may be found. Based on the body shape and skeleton, a distance transform may be performed to discard pixels on the arms. According to various embodiments, the remaining part may be called the body trunk. According to various embodiments, an ellipse may be used to fit the body trunk. According to various embodiments, the ellipse may be cut into two parts (for example two halves) to find the head top and foot bottom, as will be explained in more detail below. According to various embodiments, the body trunk ellipse, head top, and foot bottom may form the features of the body shape, and may be the output of this method.
[00204] FIG. 18 shows a flowchart 1800 of a condition detection method in accordance with an embodiment. In 1802, a body feature and body ellipse may be provided. In 1804, the body image-height may be looked up. In 1806, a head feature and head ellipse may be provided. In 1808, a 3D head position may be looked up. In 1810, micro-actions and measurements may be detected. In 1812, fall event assessment may be provided. In 1814, a fall event may be detected.
[00205] According to various embodiments, the flowchart 1800 may be a flowchart of the video-based fall assessment method based on the body ellipse and the head 2D location for a single fixed camera with the aid of a 2D-3D lookup table. According to various embodiments, for the current frame, from the 2D head location in the image, the Q-3D locations of the head may be acquired through looking up the 2D-3D table under the assumptions that the person is standing and lying. According to various embodiments, two Q-3D positions of a head may be acquired. According to various embodiments, one Q-3D position may be under the assumption that the person is standing and the other position may be under the assumption that the person is lying. According to various embodiments, after the body ellipse is known, the ellipse center may be used to look up the person height in the image, if the center of the person is projected at the center of the ellipse. According to various embodiments, then in the current time window a set of micro-actions or statuses of the person such as standing, lying, crouching, lying-down, standing-up, and walking may be detected. According to various embodiments, a set of measures of the fall speed, the distance from the lying head to the standing foot, the number of standing frames, the number of lying frames, the number of frames of reducing head height, and the reduced height of the head in 3D may be calculated. According to various embodiments, the fall event may be inferred through the presence of micro-actions and measurements.
[00206] FIG. 19 shows an illustration 1900 of a use case in accordance with an embodiment. In a first image 1902, a person 1904 having fallen is shown as indicated by arrow 1906. In a second image 1908 and a third image 1910, an enlarged view is shown.
[00207] FIG. 20 shows an illustration 2000 of a use case in accordance with an embodiment. In the illustration 2000, a schematic view of a fall at three different instances of time is shown. The posture 2002 of the person and the position 2004 of the head of the person at a first instance of time are shown. The posture 2006 of the person and the position 2008 of the head of the person at a second instance of time are shown. The posture 2010 of the person and the position 2012 of the head of the person at a third instance of time are shown.
[00208] FIG. 21 shows an illustration 2100 of obtaining body trunk, head top, and foot bottom points as results of SKEF and TopE in accordance with an embodiment.
[00209] According to various embodiments, skeleton-based ellipse fitting (SKEF) may be provided. As the silhouette of the arms may cause misjudging of a fall, as shown in the image 2102, where the region 2104 corresponding to or showing the person is shown, a way according to various embodiments to refine the global detection to get the body trunk may not include the arm areas. According to various embodiments, in order to obtain the body trunk, a distance transform may be applied to the foreground contours and then only pixels with a long distance may be kept. According to various embodiments, the remaining area may be assumed to be the body trunk. According to various embodiments, to estimate the ellipse covering the body trunk, the remaining area and the foreground contour may be considered together to obtain the fitting ellipse of the body trunk.
[00210] According to various embodiments, skeleton-based ellipse fitting (SKEF) may be performed in the following steps:
[00211] Input: the foreground of each image;
[00212] Output: the ellipse of the body trunk;
[00213] Step 1: Estimate an ellipse covering at least a part of the foreground image (F); ellipse parameters X1, Y1, L1, H1, and θ1 may be estimated;
[00214] If the current frame is the first frame, then prevX = X1, prevY = Y1, prevL = L1, prevH = H1, θ2 = θ1;
[00215] Step 2: Calculate the distance image, D = DistanceTransform(F), and threshold it (for example by 0.4 × the maximum distance) to obtain the long-distance area M.
[00216] Estimate the main skeleton, K = M & F.
[00217] Fit K by an ellipse; X2, Y2, L2, H2, and θ2 may be the parameters of the ellipse.
[00218] Step 3: If |θ2 − θ1| < 30°, then
[00219] draw the ellipse image (E) with (X2, Y2, L2, H2, θ2).
[00220] Else, draw the ellipse image (E) with (prevX, prevY, prevL, prevH, θ2), i.e. with the parameters from the previous frame and the newly estimated θ2.
[00221] Step 4: Estimate the final area, Final = E & F.
[00222] Estimate an ellipse covering Final; then fX, fY, fL, fH, and fθ may be estimated.
[00223] Save the ellipse parameters to prevX, prevY, prevL, prevH, and prevθ and output these values.
[00224] According to various embodiments, SKEF may be performed according to the following table:
1: Estimate an ellipse covering the foreground image (F); X1, Y1, L1, H1, and θ1 are estimated.
2: if 1st frame then
3: Save the estimated ellipse parameters to prevX, prevY, prevL, prevH, prevθ and go to the next frame.
4: else
5: Calculate the distance image, D = DistanceTransform(F).
6: Threshold the distance image (D) by 0.4 × maximum distance to obtain the long-distance area (M), aka the skeleton.
7: Select the skeleton (M) located in the foreground area, K = M & F.
8: Estimate an ellipse covering K; X2, Y2, L2, H2, and θ2 are estimated.
9: if |θ2 − θ1| < 30° then
10: Draw the ellipse image (E) with (X2, Y2, L2, H1, θ1), using the centre point, orientation, and the width of the main area to select useful pixels in the foreground.
11: else
12: Draw the ellipse image (E) with (prevX, prevY, 1.1 × prevL, 1.1 × prevH, θ2), i.e. enlarge the ellipse from the previous frame with the new angle θ2.
13: end if
14: Estimate the final area, Final = E & F, where the foreground area is constrained.
15: Estimate an ellipse covering Final; Xf, Yf, Lf, Hf, and θf are estimated.
16: Save the ellipse parameters to prevX, prevY, prevL, prevH, and prevθ and go to the next frame.
17: end if
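For illustration, the listing above may be sketched in Python with OpenCV as follows. This is a rough sketch: ellipse parameter conventions follow cv2.fitEllipse ((cx, cy), (w, h), angle), the first-frame handling is simplified, and the parameter mixing of line 10 is simplified to using the skeleton ellipse directly.

    import cv2
    import numpy as np

    def skef(foreground, prev_ellipse=None, angle_tol=30.0):
        """foreground: uint8 mask F. Returns the fitted body-trunk ellipse."""
        pts = np.argwhere(foreground > 0)[:, ::-1].astype(np.float32)
        e1 = cv2.fitEllipse(pts)                      # line 1: ellipse over F
        if prev_ellipse is None:                      # lines 2-3: first frame
            return e1

        dist = cv2.distanceTransform(foreground, cv2.DIST_L2, 5)      # line 5
        M = np.where(dist > 0.4 * dist.max(), 255, 0).astype(np.uint8)  # line 6
        K = M & foreground                            # line 7: K = M & F
        k_pts = np.argwhere(K > 0)[:, ::-1].astype(np.float32)
        e2 = cv2.fitEllipse(k_pts)                    # line 8

        if abs(e2[2] - e1[2]) < angle_tol:            # line 9
            mask_ellipse = e2                         # line 10 (simplified)
        else:                                         # lines 11-12: enlarge the
            pc, (pl, ph), _ = prev_ellipse            # previous ellipse, new angle
            mask_ellipse = (pc, (1.1 * pl, 1.1 * ph), e2[2])

        E = np.zeros_like(foreground)
        cv2.ellipse(E, mask_ellipse, 255, -1)
        final = E & foreground                        # line 14: Final = E & F
        f_pts = np.argwhere(final > 0)[:, ::-1].astype(np.float32)
        return cv2.fitEllipse(f_pts)                  # lines 15-16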
[00225] According to various embodiments, a method that may be referred to as top point from ellipse (TopE) may be provided.
[00226] According to various embodiments, given an ellipse fitted to the foreground object, it may be a challenging task to indicate the positions of parts of the body. According to various embodiments, key positions for fall detection may be the positions of the head and feet. According to various embodiments, the estimation of these positions may indicate the height of the head and the body height, and the change of these values can be used in detecting a fall. According to various embodiments, techniques for estimating the head position and feet position may be provided.
[00227] According to various embodiments, the processes of head direction estimation and of top point fitting may be provided. According to various embodiments, given a fitted ellipse, the head direction estimation may be aimed at indicating which half of the given ellipse is covering the upper part of the body. According to various embodiments, after the upper part of the body is pointed out, the end point of the ellipse on that side may be used as the start point for the head top search. In image 2106, the lower part 2108 of the body is shown, and in image 2110, the upper part 2112 of the body is shown.
[00228] According to various embodiments, a head direction estimation may be provided. According to various embodiments, given a fitted ellipse, devices and methods to point out the half side of the ellipse covering the upper part of the body may be provided. According to various embodiments, given a fitted ellipse and a foreground image, the foreground object may be divided into two parts by the minor axis of the ellipse. According to various embodiments, the area of each part may then be measured by counting the number of pixels. According to various embodiments, the part giving the higher count may be considered the lower part. According to various embodiments, assumptions may be applied for robustness. According to various embodiments, an assumption may be that in early frames the body is upright, and then the upper body may be the upper part of the ellipse. According to various embodiments, another assumption may be that the difference of the center positions of each part between two frames may not be more than half of the minor axis, which may be, for example, approximately half of the body width.
[00229] According to various embodiments, top point fitting may be provided, as will be explained below. According to various embodiments, given the knowledge of the upper side and the lower side of the body, it may roughly be known which end of the major axis of the ellipse is close to the head and which end is close to the feet. According to various embodiments, on the side covering the legs and feet, the end of the ellipse may not be very far from the correct position of the feet, but on the side covering the upper body, the end of the ellipse may not fit the head because the arm areas may affect the estimation. According to various embodiments, the end of the major axis on the upper side of the body may be used as a start point. According to various embodiments, a search may be performed along the major axis from this starting point until it finds a foreground pixel. According to various embodiments, the result may be supposed to be the head position. According to various embodiments, to take care of the robustness of the technique, assumptions similar to those of the head direction estimation may be made. According to various embodiments, an assumption may be that in early frames the body may be upright, and then the upper body may be the upper part of the ellipse. According to various embodiments, another assumption may be that the difference of the head position between two frames may not be more than a quarter of the minor axis, which may be, for example, approximately half of the head width.
[00230] In image 2114, the fitted ellipse 2118, the upper part 2116 of the body, the starting point 2120 for TopE, and the determined head point 2122 are shown.
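For illustration, the top point search of TopE may be sketched as follows; the head direction estimation is assumed to have been performed already, so the upper end of the major axis is passed in directly, and the function and parameter names are illustrative.

    import numpy as np

    def top_point_from_ellipse(foreground, ellipse, upper_end):
        """foreground: uint8 mask; ellipse: ((cx, cy), axes, angle);
        upper_end: (x, y) end point of the major axis on the upper side."""
        (cx, cy), _, _ = ellipse
        sx, sy = upper_end
        dx, dy = cx - sx, cy - sy                  # direction towards the centre
        steps = int(max(abs(dx), abs(dy))) or 1
        for i in range(steps + 1):                 # march along the major axis
            x = int(round(sx + dx * i / steps))
            y = int(round(sy + dy * i / steps))
            inside = 0 <= y < foreground.shape[0] and 0 <= x < foreground.shape[1]
            if inside and foreground[y, x] > 0:
                return (x, y)                      # first body pixel = head top
        return None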
[00231] According to various embodiments, micro-actions may be detected, as will be explained below. Let F1, F2, F3, ..., FW−1, FW be the W frames in a time window. Let H1, H2, H3, ..., HW−1, HW be the head locations in the corresponding frames, i.e. the 2D image head positions. According to various embodiments, according to the lookup table as explained above, HS1, HS2, HS3, ..., HSW−1, HSW and HL1, HL2, HL3, ..., HLW−1, HLW, being the head 3D positions in the real world when the person stands still or lies on the floor respectively, may be obtained.
[00232] According to various embodiments, the head motion in a fall may include the three micro-actions of standing (the head may be at the status of a standing person), decreasing (the head may decrease its height), and inactivity (the head may lie still on the floor). According to various embodiments, various micro-actions, for example walking, rising, and crouching, may refute a fall. According to various embodiments, to judge whether a (time) window contains a fall, for example one or more of the following micro-actions may be detected: decreasing, inactivity, walking, rising, and crouching. According to various embodiments, besides micro-action detection, a set of measures may be calculated to help judge whether a time window includes a fall.
[00233] According to various embodiments, standing detection may be performed. Let F be a frame within the concerned time window. According to various embodiments, assuming that a body detection procedure provides the center of the body as (XBF, YBF), the body image height if the person stands, denoted by HSBF(XBF, YBF), may be calculated. Let θF be the angle between the major axis of the body ellipse and the horizontal axis, and let LBF be the length of the major axis of the ellipse. Then, according to various embodiments, it may be concluded that the person is standing in this frame if the following conditions hold:

|θF − 90°| < βST1 and |LBF − HSBF(XBF, YBF)| < βST2.    (5)

[00234] According to various embodiments, β (for example with various indices) may denote the various kinds of thresholds.
[00235] According to various embodiments, walking detection may be performed. Let Fk−1 and Fk be two consecutive frames within the concerned time window. Let DS(k) and DL(k) denote the distances from HS(k−1) to HS(k) and from HL(k−1) to HL(k), respectively. According to various embodiments, when a person is walking, the two distances may have a similar change. According to various embodiments, when a person falls down, the two distances may have a relatively large difference. According to various embodiments, it may be determined that the concerned (time) window is a window in which walking is shown if it meets the following condition:

βW1 < |DS(k) − DL(k)| < βW2 for 0 ≤ k ≤ W    (6)

and most frames in this window are standing (in other words: are frames in which the person is determined to be standing).
[00236] According to various embodiments, head inactivity detection may be performed. According to various embodiments, in a fall, the head may lie on the floor for a while motionlessly. According to various embodiments, a fall window may include a head inactivity of a pre-determined time, for example of at least 3 to 5 frames if the frame rate is 25 frames per second. According to various embodiments, it may be determined that a window possesses an inactivity if it meets the following condition:

DL(k) < βIN for 3 consecutive frames.    (7)
[00237] According to various embodiments, sometimes head inactivity may be caused by still-standing. According to various embodiments, to further remove some such cases, the distance from the motionless head to the foot in a standing frame, which is one of the first frames of the concerned window, may be calculated. Let (XF, YF, 0) be the foot place in the standing frame and (XHLk, YHLk, 0) be the head place in frame k. Let DHF(k) denote the distance from (XHLk, YHLk, 0) to (XF, YF, 0). According to various embodiments, it may be determined that a lying head is present if DHF(k) meets the following condition:

βHF1 < DHF(k) < βHF2.    (8)
[00238] According to various embodiments, a head decreasing detection may be performed. According to various embodiments, once the head-lying-on-floor frames have been found, the head position decreasing segment of the time window may be found. According to various embodiments, since the possible standing head and lying head positions are known, it may be calculated how many pixels the head should decrease in image space; this value may be denoted by HSD. According to various embodiments, a set of consecutive head decreasing frames may be found. Let HRD be the total reduced height. According to various embodiments, a segment may be determined to be a head decreasing segment if

βHD × HSD < HRD.    (10)

[00239] According to various embodiments, a head rising detection may be performed. According to various embodiments, a set of consecutive head rising frames may be found and HRS may be defined as the total risen height. According to various embodiments, a segment may be determined to be a head rising segment if

HRS > βRS × HSD.    (11)
[00240] According to various embodiments, a crouching detection may be performed. According to various embodiments, a crouching may be determined by calculating the horizontal motion of the head location in image space. According to various embodiments, DHM(k) may denote the horizontal motion of the head from the first frame in the image space. According to various embodiments, a time window may be determined to be a window containing a crouching if

DHM(k) < βCR1 for all k, and |DHM(k) − DHM(k−1)| < βCR2 for all k.    (12)
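For illustration, the window-level tests of equations (6) to (8) may be sketched as follows. This is a hedged sketch: the additional requirement of equation (6) that most frames are standing is omitted, and all input and threshold names are illustrative stand-ins (the text gives no concrete threshold values).

    import numpy as np

    def detect_micro_actions(ds, dl, dhf, beta):
        """ds[k], dl[k]: per-frame head displacement under the standing and
        lying assumptions; dhf[k]: distance from the (lying) head to the
        standing-frame foot position; beta: dict of thresholds."""
        ds, dl, dhf = np.asarray(ds), np.asarray(dl), np.asarray(dhf)
        actions = {}
        diff = np.abs(ds - dl)                                  # equation (6)
        actions["walking"] = bool(np.all((beta["W1"] < diff) & (diff < beta["W2"])))
        still = dl < beta["IN"]                                 # equation (7)
        actions["inactivity"] = any(still[k:k + 3].all()
                                    for k in range(len(still) - 2))
        actions["lying_head"] = bool(np.any(                    # equation (8)
            (beta["HF1"] < dhf) & (dhf < beta["HF2"])))
        return actions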
[00241] According to various embodiments, fall related measures may be calculated. According to various embodiments, measures may be used to complement the micro-actions. According to various embodiments, these measures may include or may be the ratio of the reduced height in the considered time window to the height of the person, the distance from the inactive (lying) head to the foot in the standing frame before falling, the average falling speed during head decreasing, the number of frames in which the head rises during the decreasing segment, i.e. how many frames violate the consistent head decreasing, and/or the number of frames in which the head has a distance to the foot in the first standing frame larger than the height of the person.
[00242] According to various embodiments, after the existences of micro-actions have been detected and the measures have been calculated, these values may form a vector. According to various embodiments, the elements for the micro-actions may be 0 (not existing) or 1 (existing) and the elements for the measures may be the calculated values. According to various embodiments, with these vectors of time windows, fall detection may become a two-class classification problem. According to various embodiments, various methods may be used to determine whether a fall occurs. For example, a rule-based method may be used, or a classifier such as an SVM (Support Vector Machine) may be used, or any other suitable method may be used.
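For illustration, the two-class formulation may be sketched as follows: binary micro-action flags and real-valued measures form the window vector, and an SVM (one of the options named above) is trained on labelled windows. All names, and the kernel choice, are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    def window_vector(actions, measures):
        # micro-action existences as 0/1 flags, followed by calculated measures
        flags = [1.0 if actions[name] else 0.0 for name in sorted(actions)]
        return np.array(flags + list(measures))

    # usage sketch, assuming labelled training windows are available:
    # X = np.stack([window_vector(a, m) for a, m in training_windows])
    # clf = SVC(kernel="rbf").fit(X, labels)   # labels: 1 = fall, 0 = no fall
    # is_fall = clf.predict([window_vector(actions, measures)])[0]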
[00243] According to various embodiments, devices and methods for fall detection may be provided. According to various embodiments, devices and methods of fall detection including multiple techniques may be provided. According to various embodiments, the camera calibration and 2D-3D lookup table creation may be the preparation (or the basis) of the fall detection devices and methods. According to various embodiments, the devices and methods themselves may include techniques such as head detection and tracking, micro-action detection, and fall determination. An overview of the fall detection is provided in the following table:
Load the 2D-3D lookup table;
Obtain the 2D head and body shape by 2D head and body detection and tracking;
Judge whether the person has moved out of the coverage of the system;
Use the lookup table to get the Q-3D position of the head;
Update the data of the time window;
Detect micro-actions in the time window;
Calculate the measures of the time window;
Feed the features to a fall determination procedure.
An alert may be reported if a fall is detected in the current time window and another fall is detected in a previous time window that is close to the current time window.
An alarm may also be triggered if the system does not detect the exit of the person out of the coverage of the system and the time for which the system cannot identify the person is longer than a fixed threshold.
[00244] According to various embodiments, semi-automatic lookup table creation may be provided. According to various embodiments, a 2D-3D lookup table may be created using camera calibration as illustrated above. According to various embodiments, this approach may require camera calibration, which may be done only by professionals. According to various embodiments, devices and methods may be provided to semi-automatically create the lookup table as explained above. According to various embodiments, a tripod and a plumb with a ball added at the point may be provided, where the ball may be desired to be in a color significantly differing from the floor color. According to various embodiments, the ball and the plumb may be fixed on the tripod. According to various embodiments, when the tripod is fixed in a position, the length between the ball and the point of the plumb may be fixed if the point of the plumb is just off the floor. According to various embodiments, a video may be recorded in which the tool may be moved around. According to various embodiments, the balls and plumbs forming enough 2D-3D pairs may be automatically identified. According to various embodiments, camera calibration and creation of the 2D-3D lookup table may be performed with the devices and methods as explained above.
[00245] According to various embodiments, devices and methods for fall detection for the embodiments of falling from and around bed at night as explained above may be provided. According to various embodiments, fall from bed may be one of frequently- occurring events. For example, the difficulty of this embodiment may lie in that there ) may be not enough lighting at night for camera to take clear images without lamp.
According to various embodiments, to overcome this difficulty, sensors other than cameras may be provided in bed and around bed. According to various embodiments, the } lamps may be turned on and a vision-based detection may be triggered only when sensors other than cameras detect some suspicious actions of person that may desire further investigation. According to various embodiments, a system may include both prevention and detection functions for fall. According to various embodiments, pressure sensors on the bed ‘may identify a person entering and leaving a bed, and may furthermore provide data for evaluating sleeping quality. According to various embodiments, too much of turnover of a sleeper may trigger an alert to nurse. According to various embodiments, the prompt intervention by a nurse will prevent fall and may lead to a proper treatment.
According to various embodiments, when the pressure sensors detect a leaving of the bed, the lamps may be automatically turned on to reduce the risk of a fall because of fumbling for the switch and bad lighting conditions; according to various embodiments, a camera may also be automatically turned on to detect a fall.
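The trigger logic of this paragraph may be sketched as follows. The event interface, lamps and camera objects are hypothetical illustrations, not part of the described system; the sketch only shows the reaction to a bed-exit event.

    def on_pressure_event(event, lamps, camera):
        # hypothetical handler for events from the bed pressure sensors
        if event == "leave_bed":
            lamps.turn_on()                # avoid fumbling for the switch in the dark
            camera.start_fall_detection()  # vision-based detection only when needed
        elif event == "enter_bed":
            camera.stop_fall_detection()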
[00246] The following table shows test results, wherein "Seq" may be the number of the test sequence of images, "#F" may be the number of frames in the respective sequence, "#T" may be the number of falls in the sequence, "#D" may be the number of detected falls in the sequence, "#A" may be the number of false alarms in the sequence, and "#M" may be the number of missed falls in the sequence.

Seq | #F    | State Detection (Detected/Total)   | Fall Detection
    |       | Standing  | Bending   | Lying      | #T | #D | #A | #M
1   | 502   | 153/291   | 280/81    | 69/84      | 6  | 4  | 0  | 2
2   | 1400  | 116/449   | 1059/678  | 225/221    | -  | -  | -  | -
3   | 414   | 38/136    | 187/86    | 144/192    | 2  | 1  | 0  | 1
4   | 8707  | 2597/6064 | 5195/2137 | 381/422    | 11 | 9  | 0  | 2
5   | 1196  | 255/483   | 597/381   | 343/331    | 17 | 16 | 0  | 1
6   | 1599  | 259/707   | 968/550   | 293/266    | -  | -  | -  | -
7   | 1490  | 179/611   | 890/510   | 421/369    | 9  | 9  | 0  | 0
Sum | 15308 | 3597/8741 | 9176/4423 | 1876/1885  | 63 | 56 | 2  | 9

[00247] The following tables provide experimental results of various embodiments. According to various embodiments, point fitting and skeleton techniques may be provided. The tables may show an evaluation of combinations of using those techniques by comparing estimated results and human-annotated data. The figures in the tables are the differences in pixels between estimated values and annotated values. TopX and
TopY may represent the position of the top of the head on X and Y axes while BotX and
BotY may be for the lower part of body. AngleDifference may be the estimated angle of body compared to annotated data.
[Tables: evaluation of combinations of the skeleton shape and points fitting techniques, with columns "Type: shape | Points fitting" and the Mean, Median and 90%ile differences for Top (TopX, TopY), Bot (BotX, BotY), Height and AngleDifference.]
[00248] FIG. 22 shows examples of results 2200 of position detection methods in accordance with an embodiment. In image 2202, an ellipse 2206 obtained by commonly used ellipse fitting of a person 2204 is shown. In image 2226, an ellipse 2228 obtained by ellipse fitting of the person 2204 according to various embodiments is shown. In image 2208, an ellipse 2212 (corresponding to ellipse 2228 in image 2226) is shown, and a line 2214 that divides the ellipse into an upper part and a lower part. The trunk 2210 may be used to define which is the upper part and which is the lower part, as has been explained above. In image 2216, a part 2220 of the ellipse (corresponding to ellipse 2228 in image 2226) is shown, and the region 2218 including or showing the person is shown. According to various embodiments, the major axis 2222 of the ellipse 2220 may be used to define the point of the head 2224, for example by using TopE, as has been explained above.
[00249] For example, as has been explained above, the movements of the arms may change the ellipse parameters, especially the main orientation, because the ellipse may cover not only the main area of the body but also the arms, as shown in image 2202.
According to various embodiments, a way to improve the fitting by emphasizing the main parts of the body while neglecting the arms may be provided. According to various embodiments, to estimate the main part of the body, it may be assumed that the main part of the human body is the area furthest away from the contour. Thus, according to various embodiments, the Distance Transform may be applied to the foreground contours, and then only pixels with a long distance may be selected by thresholding to represent the area of the main part. According to various embodiments, to estimate the ellipse covering the main body, the area of the main part may be used to constrain the ellipse fitting. According to various embodiments, the details of the process may be as explained above, and the ellipse fitting technique may be one of the commonly used ellipse fitting techniques. According to various embodiments, the ellipse parameters may include the centre point (X, Y), the minor axis (L), the major axis (A) and the orientation of the major axis (θ).
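A minimal sketch of this skeleton-constrained ellipse fitting with OpenCV follows; the threshold ratio of the maximum distance is an assumed parameter, not taken from the text.

    import cv2
    import numpy as np

    def skeleton_based_ellipse_fit(foreground, ratio=0.5):
        # foreground: binary uint8 mask, nonzero where the person is shown
        dist = cv2.distanceTransform(foreground, cv2.DIST_L2, 5)
        # keep only pixels far from the contour: the main part of the body
        skeleton = (dist >= ratio * dist.max()).astype(np.uint8)
        ys, xs = np.nonzero(skeleton)
        pts = np.column_stack((xs, ys)).astype(np.int32)
        # fitEllipse needs at least five points
        (cx, cy), axes, angle = cv2.fitEllipse(pts)
        return (cx, cy), axes, angle, skeleton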
[00250] According to various embodiments, devices and methods for upper body indication may be provided. According to various embodiments, given an ellipse covering the main body as has been explained above, the ellipse may be divided into two halves by the minor axis. According to various embodiments, it may be assumed that the torso area may include most of the high values of the output image of the Distance Transform. According to various embodiments, therefore, the half that includes most of the pixels from the skeleton image (M) mentioned above may be considered to be the upper body. An example of a comparison according to various embodiments is shown in image 2208. According to various embodiments, skin-color pixels may be used to improve the performance, but they may behave as noise in frames where the face may not be seen.
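The half selection may be sketched as follows, assuming the skeleton mask and the fitted ellipse from the previous sketch; the sign convention along the major axis is an illustrative choice.

    import numpy as np

    def upper_body_sign(skeleton, centre, angle_deg):
        # split by the minor axis: project skeleton pixels onto the major axis
        ys, xs = np.nonzero(skeleton)
        d = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
        proj = (xs - centre[0]) * d[0] + (ys - centre[1]) * d[1]
        # the half with more skeleton pixels is taken as the upper body
        return 1 if np.sum(proj > 0) >= np.sum(proj < 0) else -1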
[00251] According to various embodiments, devices and methods for extraction of head position and body orientation and estimation of feet positions may be provided. According to various embodiments, the body orientation may be estimated by skeleton-based ellipse fitting as has been explained above. According to various embodiments, the intersection point of the major axis of the ellipse with the contour of the foreground on the upper body side may then be considered as the head location, while the intersection point of the major axis of the ellipse with the ellipse contour on the lower body side may be assumed to be the tips of the two feet. According to various embodiments, the centre point of the body may then be assumed to be the middle point between the head and feet positions. According to various embodiments, the ellipse may then be linearly scaled to fit the tips of the head and feet.
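One possible sketch of locating the head and feet tips: walk from the centre along the major axis in both directions until the foreground is left. The unit-step scheme and the angle convention are assumptions made for illustration.

    import numpy as np

    def locate_head_and_feet(foreground, centre, angle_deg, upper):
        # upper: +1 or -1, the major-axis direction of the upper body
        h, w = foreground.shape
        d = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])

        def last_foreground(sign):
            p = np.array(centre, dtype=float)
            last = p.copy()
            while True:
                p = p + sign * d
                x, y = int(round(p[0])), int(round(p[1]))
                if not (0 <= x < w and 0 <= y < h) or foreground[y, x] == 0:
                    return last
                last = p.copy()

        head = last_foreground(upper)    # intersection with the contour, upper side
        feet = last_foreground(-upper)   # tips of the feet on the lower side
        centre_of_body = (head + feet) / 2.0
        return head, feet, centre_of_body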
[00252] An example of locating the head is shown in image 2216, and a result of the method according to various embodiments is illustrated in image 2226.
[00253] According to various embodiments, devices and methods providing an N- directional distribution histogram for fall detection may be provided. According to various embodiments, devices and methods providing an N-directional distribution histogram for posture analysis may be provided.
[00254] According to various embodiments, devices and methods for improving the estimation of the positions of the head and feet tips and of the orientation of the main human body may be provided. According to various embodiments, with the better alignment provided by the improved position estimation technique, a feature which may be referred to as N-directional distribution histogram (N-DDH) may be provided to help in fall analysis. According to various embodiments, results may show improvements on measurements of key values and an important connection between the NDDH feature and falls. According to various embodiments, the estimated head position, centre of body and body orientation obtained from ellipse fitting may be used to align all shapes before extracting the N-directional distribution histogram (NDDH), as will be explained in more detail below, to use in fall analysis.

[00255] According to various embodiments, devices and methods for extraction of head position and body orientation and estimation of feet position may be provided. According to various embodiments, devices and methods may be provided for extracting the head position and body orientation and estimating the feet position and centre of body for use in shape alignment for extraction of the N-Directional Distribution Histogram feature, as will be explained in more detail below. According to various embodiments, first, the object of interest may be located by extracting the foreground with any commonly used background subtraction method. According to various embodiments, the positions of head and feet and the orientation of the body may then be estimated from the shape of the foreground by one or more (or all) of the following steps: skeleton-based ellipse fitting (SKEF), indicating the side of the ellipse containing the upper body, extraction of head position and body orientation, and estimating feet positions, as has been explained above and will be explained below.
[00256] According to various embodiments, given an estimation of the body orientation, all foreground shapes may be aligned by rotating until the body orientation is perpendicular to the horizontal line. According to various embodiments, then the N-
Directional Distribution Histogram (N-DDH) technique may be applied. According to various embodiments, the N-Directional Distribution Histogram (N-DDH) may be a technique to learn the pattern of distribution of the foreground image in each direction.
According to various embodiments, the number of foreground pixels in the sampling area of each direction may be counted and the number may be put into a bin of that direction.
According to various embodiments, a pre-determined number of directions, for example eight directions, may be spread equally over 360 degrees. For example, for eight directions, the directions may be 45 degrees apart.

[00257] FIGS. 23A and 23B show examples of results of position detection methods in accordance with an embodiment.
[00258] FIG. 23A shows an image 2300 illustrating the sampling areas of NDDH sampling in eight directions. There may be eight bins in total. The sampling areas of all eight directions (for example a first sampling area 2302, a second sampling area 2304, a third sampling area 2306, a fourth sampling area 2308, a fifth sampling area 2310, a sixth sampling area 2312, a seventh sampling area 2314 and an eighth sampling area 2316) may be shown in FIG. 23A.
[00259] FIG. 23B shows an image 2318 illustrating the sampling areas applied on the foreground, for example on a foreground object, for example the region 2320 corresponding to the person.
[00260] According to various embodiments, the sampling area in each direction may be defined as follows:

[00261] 1. The shape (for example the sampling area template) may be a rectangle with the height (R) equal to the distance from the centre point to the head point (which may be estimated as has been explained above) and the width equal to 2 R cos(67.5°), or about 0.765 R.
[00262] 2. The centre point of the body may be on the base side of the rectangle and may divide that side equally.
[00263] 3. The base may be the width of the rectangle.
[00264] 4. The direction of the sampling area of the first bin may start from the centre point of the body toward the head position.
[00265] According to various embodiments, the sampling area may be chosen in any other way; for example, the height of the sampling area may be chosen to be equal to the distance from the feet to the head, or instead of the centre of the person, any other portion (for example the feet or the head) may be put at the origin, and the size of the boxes to be rotated may be chosen accordingly for the computation of the histogram.
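The template construction of paragraphs [00260] to [00265] may be sketched as follows (note that 2 cos(67.5°) ≈ 0.765, which matches the stated width); the corner ordering is an illustrative choice.

    import numpy as np

    def sampling_area_corners(centre, head, n_directions=8):
        centre = np.asarray(centre, dtype=float)
        head = np.asarray(head, dtype=float)
        r = np.linalg.norm(head - centre)           # height R: centre-to-head distance
        width = 2.0 * r * np.cos(np.radians(67.5))  # about 0.765 R
        base_angle = np.arctan2(head[1] - centre[1], head[0] - centre[0])
        areas = []
        for i in range(n_directions):               # equally spread over 360 degrees
            a = base_angle + 2.0 * np.pi * i / n_directions
            u = np.array([np.cos(a), np.sin(a)])    # direction of the i-th bin
            v = np.array([-u[1], u[0]])             # along the base side
            # the centre point bisects the base; the rectangle extends towards u
            areas.append(np.array([centre - 0.5 * width * v,
                                   centre + 0.5 * width * v,
                                   centre + 0.5 * width * v + r * u,
                                   centre - 0.5 * width * v + r * u]))
        return areas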
[00266] According to various embodiments, when M denotes the region of the acquired image showing the person (for example M(x,y) equals 1 if the pixel at position
(x,y) shows the person, and M(x,y) equals 0 if the pixel at position (x,y) does not show the person), and A_i denotes the i-th sampling area, then the number f_i of pixels in the i-th bin may be computed as follows:

$$f_i = \sum_{(x,y) \in A_i} M(x,y), \quad \text{resp.} \quad f_i = \sum_{x,y} M(x,y)\,A_i(x,y)$$
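A sketch of the bin count, rasterizing each sampling rectangle from the previous sketch onto the mask M; the use of fillConvexPoly is one convenient way to obtain the indicator of A_i, not the only one.

    import cv2
    import numpy as np

    def nddh_bins(person_mask, sampling_areas):
        # person_mask: M, with 1 where the pixel shows the person, else 0
        bins = []
        for corners in sampling_areas:
            indicator = np.zeros_like(person_mask)
            cv2.fillConvexPoly(indicator, corners.astype(np.int32), 1)
            # f_i = sum over (x, y) in A_i of M(x, y)
            bins.append(int(np.sum(person_mask * indicator)))
        return np.array(bins, dtype=float)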
[00267] According to various embodiments, after obtaining the number of pixels contained in each bin, normalization may be performed by dividing the value in each bin by the maximum value. For example, when denoting the number of pixels in the i-th bin with f_i and the total number of bins with N, the normalized value for the i-th bin may be obtained as follows:

$$\hat{f}_i = \frac{f_i}{\max_{j=1,\ldots,N} f_j}$$

[00268] According to various embodiments, normalization may be performed to normalize the sum of all values to one. For example, when denoting the number of pixels in the i-th bin with f_i and the total number of bins with N, the normalized value for the i-th bin may be obtained as follows:

$$\hat{f}_i = \frac{f_i}{\sum_{j=1}^{N} f_j}$$
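Both normalizations may be sketched in a few lines; the guard against an all-zero histogram is an added safety not mentioned in the text.

    import numpy as np

    def normalize_by_max(bins):
        m = bins.max()
        return bins / m if m > 0 else bins   # each value ends up in [0, 1]

    def normalize_by_sum(bins):
        s = bins.sum()
        return bins / s if s > 0 else bins   # values sum to one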
[00269] FIGS. 24A and 24B show an example of a normalized directional distribution histogram in accordance with an embodiment. In the example of NDDH feature extraction, FIG. 24A shows the image 2318 of FIG. 23B showing a sampling area in eight directions on a standing posture, and FIG. 24B shows a normalized direction distribution histogram 2400, for example the distribution in each direction of standing posture, in accordance with an embodiment. In an area 2402, angular sectors are shown.
For example, a first arrow 2404 may represent the normalized number of pixels in the sampling area corresponding to the direction of the first arrow 2404, for example in the first sampling area 2322. For example, a second arrow 2406 may represent the normalized number of pixels in the sampling area corresponding to the direction of the second arrow 2406, for example in the second sampling area 2324. For example, a third arrow 2408 may represent the normalized number of pixels in the sampling area corresponding to the direction of the third arrow 2408, for example in the third sampling area 2326. For example, a fourth arrow 2410 may represent the normalized number of pixels in the sampling area corresponding to the direction of the fourth arrow 2410, for example in the fourth sampling area 2328. For example, a fifth arrow 2412 may represent the normalized number of pixels in the sampling area corresponding to the direction of the fifth arrow 2412, for example in the fifth sampling area 2330. For example, a sixth arrow 2414 may represent the normalized number of pixels in the sampling area corresponding to the direction of the sixth arrow 2414, for example in the sixth sampling area 2332. For example, a seventh arrow 2416 may represent the normalized number of pixels in the sampling area corresponding to the direction of the seventh arrow 2416, for example in the seventh sampling area 2334. For example, an eighth arrow 2418 may represent the normalized number of pixels in the sampling area corresponding to the direction of the eighth arrow 2418, for example in the eighth sampling area 2336.
[00270] FIGS. 25A and 25B show an example of a normalized directional distribution histogram in accordance with an embodiment. In the example of NDDH feature extraction, FIG. 25A shows an image 2500 showing a sampling area in eight directions on a kneeling posture, including a first sampling area 2502, a second sampling area 2504, a third sampling area 2506, a fourth sampling area 2508, a fifth sampling area 2510, a sixth sampling area 2512, a seventh sampling area 2514, an eighth sampling area 2516, and a foreground image 2518, for example a region corresponding to a kneeling person.
[00271] FIG. 25B shows a normalized direction distribution histogram 2520, for example the distribution in each direction of the kneeling posture, in accordance with an embodiment. In an area 2522, angular sectors are shown. For example, a first arrow 2524 may represent the normalized number of pixels in the sampling area corresponding to the direction of the first arrow 2524, for example in the first sampling area 2502. For example, a second arrow 2526 may represent the normalized number of pixels in the sampling area corresponding to the direction of the second arrow 2526, for example in the second sampling area 2504. For example, a third arrow 2528 may represent the normalized number of pixels in the sampling area corresponding to the direction of the third arrow 2528, for example in the third sampling area 2506. For example, a fourth arrow 2530 may represent the normalized number of pixels in the sampling area corresponding to the direction of the fourth arrow 2530, for example in the fourth sampling area 2508. For example, a fifth arrow 2532 may represent the normalized number of pixels in the sampling area corresponding to the direction of the fifth arrow 2532, for example in the fifth sampling area 2510. For example, a sixth arrow 2534 may represent the normalized number of pixels in the sampling area corresponding to the direction of the sixth arrow 2534, for example in the sixth sampling area 2512. For example, a seventh arrow 2536 may represent the normalized number of pixels in the sampling area corresponding to the direction of the seventh arrow 2536, for example in the seventh sampling area 2514. For example, an eighth arrow 2538 may represent the normalized number of pixels in the sampling area corresponding to the direction of the eighth arrow 2538, for example in the eighth sampling area 2516.
[00272] In FIG. 24B and FIG. 25B, it may be noticed that the features from different postures may be significantly different. According to various embodiments, the majority of pixels in standing postures may be distributed in the direction of the body orientation. According to various embodiments, unlike for the standing posture, the distribution of pixels for the kneeling posture may be more or less equal in each direction.
[00273] According to various embodiments, devices and methods may be provided for detecting falls using an NDDH feature.
[00274] FIGS. 26A and 26B show sampling areas in accordance with an embodiment.
[00275] FIG. 26A shows an image 2600 showing a region 2602 corresponding to the person 1104 shown on image 1102 of FIG. 11, and a first sampling area 2604, a second . sampling area 2606, a third sampling area 2608, a fourth sampling area 2610, a fifth
sampling area 2612, a sixth sampling area 2614, a seventh sampling area 2616, an eighth sampling area 2618, a ninth sampling area 2620, a tenth sampling area 2622, an eleventh sampling area 2624, a twelfth sampling area 2626, a thirteenth sampling area 2628, a fourteenth sampling area 2630, a fifteenth sampling area 2632, and a sixteenth sampling area 2634.
[00276] FIG. 26B shows an image 2636 showing the region 2602 and only eight sampling areas, for example only every second sampling area of the sampling areas described with reference to FIG. 26A. The same reference signs may be used and duplicate description may be omitted.

[00277] FIG. 27 shows a framework 2700 of a condition detection device in accordance with an embodiment. For example, a general framework of fall detection and alert may be provided. A feature extraction circuit 2702, which for example may be provided instead of or in addition to the feature extraction circuit 1212 as described with reference to FIG. 12 above, may include a body detection circuit 2704 for robust body detection, as has been explained above, and an NDDH classification circuit 2706 for classifying possible postures of a person on an image. The body detection circuit 2704 may provide data 2712 to the NDDH classification circuit 2706 and may provide geometrical measurements 2710 to a fall detection circuit 2708, which may be provided instead of or in addition to the fall detection circuit 1214 as described with reference to FIG. 12 above. The NDDH classification circuit 2706 may provide information 2714 about possible postures to the fall detection circuit 2708.

[00278] According to various embodiments, devices and methods may be provided for detecting falls which start from an upright position using an NDDH feature as has been described above. According to various embodiments, it may be assumed that either the body bending posture or the kneeling down posture, for example as shown in FIG. 25A, may be the posture occurring between the upright posture and the fall posture. Thus, according to various embodiments, a sudden change from the upright posture to a bending or kneeling posture may be set to activate the alarm.

[00279] According to various embodiments, and as illustrated in FIG. 24B and
FIG. 25B, the summation of all bins may be selected to be monitored for differentiating falls from normal postures.
[00280] FIG. 28 shows a diagram 2800 in accordance with an embodiment. On a horizontal axis 2802 of the diagram 2800, which shows the change of the sum of all bins of the NDDH feature, resp. the summation of all bins of the NDDH feature, over a test sequence, the frame number of the corresponding frame in a sequence of images is shown. On a vertical axis 2804, the percentage of the sum of all bins with respect to the maximum possible sum is shown. For example, if eight sampling areas (and eight bins) are present, then the maximum possible sum, when normalizing with respect to the maximum value as explained above, would be 8 (in each bin, the value would be less than or equal to 1).
[00281] For example, a sequence recording a man performing various actions including walking, squatting, stretching arms and falling, as explained above with reference to FIG. 22, may be used as the test sequence. The sequence may be 800 frames long with a frame rate of 15 fps. According to various embodiments, the resolution of each frame may be 320x240 pixels. According to various embodiments, an ellipse fitting technique according to various embodiments may be applied to extract the correct body orientation, head position and the distance from the centre to the head position (width and height of a sampling area of the NDDH). According to various embodiments, NDDH features may then be extracted in each frame before the summation of all bins may be calculated. The summation of all bins of the NDDH feature over time may be illustrated like in FIG. 28. According to various embodiments, a threshold 2806, for example at 75%, may be set, and changes higher than the threshold (for example higher than 75%) may be considered to be a fall or fall-like.
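The decision rule of this paragraph may be sketched as follows, assuming max-normalized bins so that the maximum possible sum equals the number of bins; the function name and array-based interface are illustrative choices.

    import numpy as np

    def fall_like_frames(bin_sums, n_bins=8, threshold=0.75):
        # bin_sums: per-frame sum of the max-normalized NDDH bins
        ratio = np.asarray(bin_sums, dtype=float) / n_bins
        return ratio > threshold   # True for fall or fall-like frames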
[00282] For example, in FIG. 28, there may be six periods higher than 75%. These may be found to be the following: In a first period 2808, the subject may be standing on the right-hand side of the frame and falling toward the left-hand side of the frame. In a second period 2810, the subject may be trying to stand up after falling. In a third period 2812, the subject may be facing the camera and then falling backward away from the camera. In a fourth period 2814, the subject may be trying to stand up after falling. In a fifth period 2816, the subject may turn his back against the camera, may then fall toward the camera, and may stand up. In a sixth period 2818, the subject may squat down.

[00283] According to various embodiments, the results may show that the summation of all bins of the NDDH feature may differentiate between normal postures and falls. Though the sixth period 2818 may be a false alarm, the technique according to various embodiments may detect most of the falls, including the falls that happened in the third period 2812 and in the fifth period 2816, which may be difficult cases. According to various embodiments, the falls in the third period 2812 and in the fifth period 2816 may be the cases of falls that most monocular-vision-based fall detectors may always miss.
[00284] FIGS. 29A and 29B show examples of results of position detection methods in accordance with an embodiment.

[00285] In FIG. 29A, an image 2900 including a region 2902 showing a person, a first sampling area 2904, a second sampling area 2906, a third sampling area 2908, a fourth sampling area 2910, a fifth sampling area 2912, a sixth sampling area 2914, a seventh sampling area 2916, an eighth sampling area 2918, a ninth sampling area 2920, a tenth sampling area 2922, an eleventh sampling area 2924, a twelfth sampling area 2926, a thirteenth sampling area 2928, a fourteenth sampling area 2930, a fifteenth sampling area 2932 and a sixteenth sampling area 2934 are shown.
[00286] According to various embodiments, normalization by summation may be provided, as has been explained above (in other words: the number of pixels of the region showing the person in each of the sampling areas may be divided by the total number of pixels of the region showing the person in all of the sampling areas), and k-means may be applied to obtain key postures. According to various embodiments, a k-means technique may be used to blindly cluster types of postures.
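A sketch of the blind clustering with scikit-learn; the number of clusters and the random seed are assumed parameters, not taken from the text.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_postures(nddh_features, n_types=12, seed=0):
        # nddh_features: one sum-normalized NDDH vector per frame
        km = KMeans(n_clusters=n_types, n_init=10, random_state=seed)
        labels = km.fit_predict(np.asarray(nddh_features))
        return labels, km.cluster_centers_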
[00287] FIG. 29B shows a diagram 2936, where the frame number is shown over a horizontal axis 2938 and the number of a respective posture, as will be explained below, is shown over a vertical axis 2940. For example, during a first time interval, as indicated by a first area 2942, during a second time interval, as indicated by a second area 2944, during a third time interval, as indicated by a third area 2946, during a fourth time interval, as indicated by a fourth area 2948, and during a fifth time interval, as indicated by a fifth area 2950, a fall on the axis square to the camera plane may be present and may be grouped into type 11, as will be explained in more detail below. Furthermore, during a sixth time interval, as indicated by a sixth area 2958, a fall on the sideway may be present and may be grouped into type 11, as will be explained in more detail below. Furthermore, during a seventh time interval, as indicated by a seventh area 2960, a fall on the sideway may be present and may be grouped into type 7, as will be explained in more detail below. Furthermore, during an eighth time interval, as indicated by an eighth area 2962, a fall on the sideway may be present and may be grouped into type 1, as will be explained in more detail below. Furthermore, during a ninth time interval, as indicated by a ninth area 2952, during a tenth time interval, as indicated by a tenth area 2954, and during an eleventh time interval, as indicated by an eleventh area 2956, a fall on the axis square to the camera plane may be present and may be grouped into type 1, as will be explained in more detail below.
[00288] FIG. 30 shows various postures 3000 in accordance with an embodiment. For example, a first posture 3002 of type 1 and a second posture 3004 of type 1, a posture 3006 of type 3, a posture 3008 of type 6, a posture 3010 of type 7, a posture 3012 of type 8, a first posture 3014 of type 11, and a second posture 3016 of type 11 may be shown. The postures of type 1, type 3, type 6, type 7, type 8 and type 11 may be samples of images from the groups which k-means may have grouped blindly.
[00289] According to various embodiments, a skeleton area obtained from Distance
Transform may be very helpful in constraining ellipse fitting, as distracting arms may be neglected, and may be useful in comparing areas to indicate the upper and lower body.
According to various embodiments, good estimation of body orientation and head position may provide a simple and effective image alignment for the further process of feature extraction. According to various embodiments, the N-Directional Distribution
Histogram (NDDH) may be a transform extracting the characteristic of the foreground area in terms of the distribution in each direction away from the centre. According to various embodiments, the summation of the histogram may be used to distinguish fall postures from other normal postures. According to various embodiments, an NDDH feature may be used as a simple and effective feature in posture estimation or in other applications.
[00290] According to various embodiments, devices and methods may be provided that may cope well when a fall takes place in the direction perpendicular to the camera plane.

[00291] According to various embodiments, NDDH may be a simple and fast feature and may be effective. According to various embodiments, NDDH may be useful in the separation between straight and bending. According to various embodiments, NDDH may be used to initialize further methods such as body alignment. According to various embodiments, good classification between straight and bending body postures may be provided.
According to various embodiments, a good estimation of posture may be provided.
According to various embodiments, simple and fast techniques may be provided.
[00292] While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning a and range of equivalency of the claims are therefore intended to be embraced.

Claims (23)

Claims What is claimed is:
1. A condition detection method comprising: acquiring an image comprising a person; detecting a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; determining a first geometrical shape that fits the first region according to a pre- determined first matching criterion; determining a second geometrical shape that fits the second region according to a pre- determined second matching criterion; and determining a condition of the person on the image based on the first geometrical shape and based on the second geometrical shape.
2. The condition detection method of claim 1, wherein removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold comprises: performing a distance transform of the first region; and removing from the first region the sub-region of the first region with a value of the distance transformed image below a pre-determined removal threshold.
3. The condition detection method of claim 2, wherein removing from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold further comprises: determining a maximum value in the distance transformed first region; and wherein the pre-determined removal threshold is based on the maximum value in the distance transformed first region.
4. The condition detection method of any one of claims 1 to 3, wherein the pre-determined first matching criterion comprises a criterion of correlating the first geometrical shape and the first region.
5. The condition detection method of any one of claims 1 to 4, wherein the pre-determined first matching criterion comprises a criterion of minimizing the area of the difference between the interior of the first geometrical shape and the first region.
6. The condition detection method of any one of claims 1 to 5, wherein determining a condition of the person comprises: determining a third geometrical shape based on the first geometrical shape and based on the second geometrical shape.
7. The condition detection method of claim 6, wherein determining the third geometrical shape comprises: determining at least one geometrical parameter of the first geometrical shape; determining at least one geometrical parameter of the second geometrical shape; determining the third geometrical shape based on the at least one geometrical parameter of the first geometrical shape and on the at least one geometrical parameter of the second geometrical shape.
8. A condition detection method comprising: acquiring an image comprising a person; detecting a region of the image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; providing a sampling area template; providing a plurality of sampling areas of the image, wherein each sampling area corresponds to the sampling area template, and wherein each sampling area corresponds to an orientation of the sampling area template; determining, for each of the sampling areas, the area of the region in the sampling area; and determining a condition of the person on the image based on the determined area.
9. The condition detection method of claim 8, further comprising:
determining a geometrical shape that fits the region according to a pre-determined matching criterion; and wherein providing the sampling template comprises providing the sampling template based on the determined geometrical shape.
10. The condition detection method of claim 8 or 9, wherein each of the sampling areas of the plurality of sampling areas is congruent to the sampling area template.
11. The condition detection method of any one of claims 8 to 10, wherein each of the sampling areas of the plurality of sampling areas is rotated by a pre- determined angle with respect to the sampling area template.
12. A computer program configured to, when run on a computer, execute the method of any one of claims 1 to 11.
13. A condition detection device comprising: an image acquirer configured to acquire an image comprising a person; a detector configured to detect a first region of the image, so that the ratio of the area of the first region including the person to the area of the first region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person;
a remover configured to remove from the first region a sub-region of the first region with a distance to the border of the first region below a pre-determined threshold, to obtain a second region; a first geometrical shape determiner configured to determine a first geometrical shape that fits the first region according to a pre-determined first matching criterion; a second geometrical shape determiner configured to determine a second geometrical shape that fits the second region according to a pre-determined second matching criterion; and a condition determiner configured to determine a condition of the person on the image based on the first geometrical shape and based on the second geometrical shape.
14. The condition detection device of claim 13, wherein the remover is further configured to: perform a distance transform of the first region; and remove from the first region the sub-region of the first region with a value of the distance transformed image below a pre-determined removal threshold.
15. The condition detection device of claim 14, wherein the remover is further configured to: determine a maximum value in the distance transformed first region; and wherein the pre-determined removal threshold is based on the maximum value in the distance transformed first region.
16. The condition detection device of any one of claims 13 to 15, wherein the pre-determined first matching criterion comprises a criterion of correlating the first geometrical shape and the first region.
17. The condition detection device of any one of claims 13 to 16, wherein the pre-determined first matching criterion comprises a criterion of minimizing the area of the difference between the interior of the first geometrical shape and the first region.
18. The condition detection device of any one of claims 13 to 17, wherein the condition determiner is further configured to: determine a third geometrical shape based on the first geometrical shape and based on the second geometrical shape.
19. The condition detection device of claim 18, wherein the condition determiner is further configured to: determine at least one geometrical parameter of the first geometrical shape; determine at least one geometrical parameter of the second geometrical shape; determine the third geometrical shape based on the at least one geometrical parameter of the first geometrical shape and on the at least one geometrical parameter of the second geometrical shape.
20. A condition detection device comprising:
an image acquirer configured to acquire an image comprising a person; a region detector configured to detect a region of the image, so that the ratio of the area of the region including the person to the area of the region not including the person is higher than the ratio of the area of the image including the person to the area of the image not including the person; a sampling area template provider configured to provide a sampling area template; a sampling areas provider configured to provide a plurality of sampling areas of the image, wherein each sampling area corresponds to the sampling area template, and wherein each sampling area corresponds to an orientation of the sampling area template; an area determiner configured to determine, for each of the sampling areas, the area of the region in the sampling area; and a condition determiner configured to determine a condition of the person on the image based on the determined area.
21. The condition detection device of claim 20, further comprising: a geometrical shape determiner configured to determine a geometrical shape that fits the region according to a pre-determined matching criterion; and wherein the sampling template provider is further configured to provide the sampling template based on the determined geometrical shape.
22. The condition detection device of claim 20 or 21, wherein each of the sampling areas of the plurality of sampling areas is congruent to the sampling area template.
23. The condition detection device of any one of claims 20 to 22, wherein each of the sampling areas of the plurality of sampling areas is rotated by a pre- determined angle with respect to the sampling area template.
SG2013008602A 2009-08-05 2010-08-05 Condition detection methods and condition detection devices SG188111A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG2013008602A SG188111A1 (en) 2009-08-05 2010-08-05 Condition detection methods and condition detection devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200905224 2009-08-05
SG2013008602A SG188111A1 (en) 2009-08-05 2010-08-05 Condition detection methods and condition detection devices

Publications (1)

Publication Number Publication Date
SG188111A1 true SG188111A1 (en) 2013-03-28

Family

ID=43544542

Family Applications (2)

Application Number Title Priority Date Filing Date
SG2012008041A SG178270A1 (en) 2009-08-05 2010-08-05 Condition detection methods and condition detection devices
SG2013008602A SG188111A1 (en) 2009-08-05 2010-08-05 Condition detection methods and condition detection devices

Family Applications Before (1)

Application Number Title Priority Date Filing Date
SG2012008041A SG178270A1 (en) 2009-08-05 2010-08-05 Condition detection methods and condition detection devices

Country Status (2)

Country Link
SG (2) SG178270A1 (en)
WO (1) WO2011016782A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9524424B2 (en) 2011-09-01 2016-12-20 Care Innovations, Llc Calculation of minimum ground clearance using body worn sensors
US10258257B2 (en) 2012-07-20 2019-04-16 Kinesis Health Technologies Limited Quantitative falls risk assessment through inertial sensors and pressure sensitive platform
US9877667B2 (en) 2012-09-12 2018-01-30 Care Innovations, Llc Method for quantifying the risk of falling of an elderly adult using an instrumented version of the FTSS test
WO2015174228A1 (en) 2014-05-13 2015-11-19 オムロン株式会社 Attitude estimation device, attitude estimation system, attitude estimation method, attitude estimation program, and computer-readable recording medium whereupon attitude estimation program is recorded
CN104574441B (en) * 2014-12-31 2017-07-28 浙江工业大学 A kind of tumble real-time detection method based on GMM and temporal model
US11000078B2 (en) * 2015-12-28 2021-05-11 Xin Jin Personal airbag device for preventing bodily injury
US11116424B2 (en) 2016-08-08 2021-09-14 Koninklijke Philips N.V. Device, system and method for fall detection
US11638538B2 (en) 2020-03-02 2023-05-02 Charter Communications Operating, Llc Methods and apparatus for fall prevention
CN115909503B (en) * 2022-12-23 2023-09-29 珠海数字动力科技股份有限公司 Fall detection method and system based on key points of human body
CN116935495B (en) * 2023-09-18 2024-01-05 深圳中宝新材科技有限公司 Intelligent key alloy wire cutting process user gesture detection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110569B2 (en) * 2001-09-27 2006-09-19 Koninklijke Philips Electronics N.V. Video based detection of fall-down and other events
SE0203483D0 (en) * 2002-11-21 2002-11-21 Wespot Ab Method and device for fall detection

Also Published As

Publication number Publication date
SG178270A1 (en) 2012-03-29
WO2011016782A1 (en) 2011-02-10

Similar Documents

Publication Publication Date Title
SG188111A1 (en) Condition detection methods and condition detection devices
US20180047175A1 (en) Method for implementing human skeleton tracking system based on depth data
JP4860749B2 (en) Apparatus, system, and method for determining compatibility with positioning instruction in person in image
JP4198951B2 (en) Group attribute estimation method and group attribute estimation apparatus
Nghiem et al. Head detection using kinect camera and its application to fall detection
Liao et al. Slip and fall event detection using Bayesian Belief Network
Zhang et al. Evaluating depth-based computer vision methods for fall detection under occlusions
Shoaib et al. View-invariant fall detection for elderly in real home environment
KR20160012758A (en) Apparatus and Method for aiding image diagnosis
Bosch-Jorge et al. Fall detection based on the gravity vector using a wide-angle camera
Albawendi et al. Video based fall detection using features of motion, shape and histogram
Planinc et al. Computer vision for active and assisted living
Hung et al. Fall detection with two cameras based on occupied area
CN104331705B (en) Automatic detection method for gait cycle through fusion of spatiotemporal information
CN113384267A (en) Fall real-time detection method, system, terminal equipment and storage medium
Hung et al. The estimation of heights and occupied areas of humans from two orthogonal views for fall detection
Thuc et al. An effective video-based model for fall monitoring of the elderly
Biswas et al. A literature review of current vision based fall detection methods
Lewandowski et al. I see you lying on the ground—Can I help you? Fast fallen person detection in 3D with a mobile robot
Gilroy et al. An objective method for pedestrian occlusion level classification
Wong et al. Enhanced classification of abnormal gait using BSN and depth
Dorgham et al. Improved elderly fall detection by surveillance video using real-time human motion analysis
Khan et al. An automatic vision-based monitoring system for accurate Vojta-therapy
Lee et al. Automated abnormal behavior detection for ubiquitous healthcare application in daytime and nighttime
Walczak et al. Locating occupants in preschool classrooms using a multiple RGB-D sensor system