
CN116363753A - Tumble detection method and device based on motion history image and electronic equipment

Info

Publication number
CN116363753A
Authority
CN
China
Prior art keywords
motion
time
pixel
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310263725.9A
Other languages
Chinese (zh)
Inventor
李煜
张家旺
李文成
卢隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202310263725.9A
Publication of CN116363753A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The application provides a fall detection method based on a motion history image. The motion of a target is represented as image brightness by accumulating the pixel changes at each position over a period of time: the gray value of each pixel encodes how recently motion occurred at that position, so the more recent the motion, the brighter the pixel. The method acquires the motion of the target by computing inter-frame differences and the like, so extraction is fast, and it uses the temporal information to distinguish whether the target is falling or lying down; it is therefore simple, fast, and accurate.

Description

Tumble detection method and device based on motion history image and electronic equipment
Technical Field
The application relates to the technical field of behavior recognition, in particular to a tumble detection method and device based on a motion history image and electronic equipment.
Background
With the continuous improvement of medical technology, population aging has become the norm. Because of balance problems and various chronic diseases, the elderly are prone to falls in household environments and public areas, and if a fall is not discovered and treated in time the consequences can be irreparable. Monitoring the behavior of the elderly in public places and home environments with installed cameras is therefore a very important application area.
Conventional fall detection algorithms fall mainly into two categories. The first is based on object detection: using single-frame image data, it judges whether a fall has occurred from the person's posture in the picture and its position relative to the surroundings. Such algorithms are generally fast but cannot distinguish whether the person is falling or lying down, so they often produce false alarms. The second category judges from a sequence of multiple frames, computing with 3D convolution, optical flow, or similar means; the computational cost is high, and more data are needed to train a sufficiently accurate model.
Disclosure of Invention
To solve the above technical problems, a fall behavior recognition algorithm based on the motion history image is provided. The motion history image (MHI) is a vision-based template method that represents the motion of a target as image brightness by accumulating the pixel changes at the same position over a period of time. The gray value of each pixel encodes how recently motion occurred at that position; the more recent the motion, the brighter the pixel. The technical scheme adopted by the application is as follows:
a fall detection method based on a motion history image, the method comprising the steps of:
step 1, obtaining video images frame by frame from a fixed camera to form an image sequence;
step 2, calculating an update function image for two adjacent frames of images;
step 3, calculating a motion history image by accumulating updated function images in the last period of time;
step 4, binarizing the motion history image to obtain a communication area;
step 5, for each communication area, acquiring pixel values of a motion history image on the communication area so as to determine a motion trend direction of a target in the communication area;
and 6, judging the process of the person falling from standing to falling according to the angle change of the movement trend direction of the target and the size of the communication area, and realizing falling detection.
Further, in step 1, the camera is fixed; a moving object produces differences between two adjacent frames of images, while a stationary object produces no pixel value change.
Further, in step 2, the difference between two adjacent frames of images is calculated pixel by pixel using image differencing or optical flow, and whether motion occurred is determined from the calculation result: a pixel takes the value 1 if motion occurred and 0 otherwise.
Further, the difference between two adjacent frames of images is calculated by the following formulas:

D(x,y,t) = |I(x,y,t) - I(x,y,t-Δt)|

ψ(x,y,t) = 1 if D(x,y,t) ≥ ζ, and ψ(x,y,t) = 0 otherwise

wherein I(x,y,t) is the brightness of the image I at pixel (x,y) at time t, D(x,y,t) is the brightness difference at pixel (x,y) between time t and time t-Δt, and ψ(x,y,t) is the brightness difference binarized using the threshold ζ.
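By way of illustration only, this binarized frame difference can be sketched in Python with OpenCV as follows; the function name and the threshold value are our own assumptions, not part of the claimed method.

```python
import cv2
import numpy as np

def binarized_frame_diff(prev_gray: np.ndarray, curr_gray: np.ndarray, zeta: int = 30) -> np.ndarray:
    """D(x,y,t) = |I(x,y,t) - I(x,y,t-dt)|, binarized with threshold zeta."""
    d = cv2.absdiff(curr_gray, prev_gray)   # per-pixel brightness difference D
    return (d >= zeta).astype(np.uint8)     # psi: 1 where motion occurred, else 0
```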
Further, in step 3, the motion history image is calculated from the update function over successive image frame pairs:

H(x,y,t) = τ if ψ(x,y,t) = 1, and H(x,y,t) = max(0, H(x,y,t-Δt) - δ) otherwise

wherein H(x,y,t) is the motion history value at pixel (x,y): if the inter-frame brightness difference at time t exceeds the threshold ζ, H(x,y,t) is set to a large value τ; otherwise the motion history decays with time, decreasing by δ at each frame difference until it reaches 0.
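A minimal NumPy sketch of this update rule, with illustrative values for τ and δ (opencv-contrib also ships a comparable cv2.motempl.updateMotionHistory):

```python
import numpy as np

def update_mhi(mhi: np.ndarray, psi: np.ndarray, tau: float = 255.0, delta: float = 15.0) -> np.ndarray:
    """One step of H(x,y,t): reset to tau where psi == 1, else decay by delta toward 0."""
    decayed = np.maximum(mhi - delta, 0.0)   # linear decay, floored at 0
    return np.where(psi == 1, tau, decayed)  # recent motion appears brightest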
Further, in step 5, the motion direction of the target is calculated from the brightness variation of the pixels in the connected region. Since pixel brightness is linearly related to how long ago motion occurred at that pixel, the direction of maximum brightness change is obtained by computing the gradient of the pixel values over the connected region, and the average gradient direction of the whole region then gives the motion direction of the target.
Further, calculating the motion direction of the target includes: computing the first-order gradient of the pixel brightness variation within the connected region; the average direction of the first-order gradient is the motion direction of the target.
Further, in step 5, once the motion direction of the target is obtained, an angle threshold is set; if the motion direction is inclined downward and its angle with the horizontal exceeds the angle threshold, the target can be judged to be in the process of falling, whereby the fall of the target is detected.
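The gradient computation and angle test might be sketched as follows; the Sobel kernel size, the 45-degree default, and the convention that image y grows downward are our assumptions.

```python
import cv2
import numpy as np

def region_motion_direction(mhi: np.ndarray, region_mask: np.ndarray) -> tuple:
    """Average first-order gradient of the MHI over one connected region.
    The gradient points toward brighter (more recent) motion, so its mean
    approximates the motion trend direction of the target in that region."""
    f = mhi.astype(np.float32)
    gx = cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3)
    m = region_mask.astype(bool)
    return float(gx[m].mean()), float(gy[m].mean())

def is_falling(dx: float, dy: float, angle_threshold_deg: float = 45.0) -> bool:
    """Fall test: direction inclined downward (dy > 0 in image coordinates)
    and its angle with the horizontal exceeds the threshold."""
    tilt = np.degrees(np.arctan2(abs(dy), abs(dx) + 1e-9))
    return dy > 0 and tilt > angle_threshold_deg
```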
A fall detection device based on a motion history image, the device comprising a processor and a memory storing instructions executable by the processor, the processor performing the above method steps when the instructions are executed.
An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the computer program, when executed by the processor, implements the above fall detection method based on a motion history image.
Through the embodiments of the present application, the following technical effects can be obtained:
(1) The motion of the target is obtained by computing inter-frame differences and the like, so extraction is fast, and the temporal information is used to distinguish whether the target is falling or lying down, making the method simple, fast, and accurate;
(2) The motion history image extraction based on the frame difference method is simple and computationally light, so the motion history image can be computed in real time; the fall detection method based on the motion history image can compute the motion trend and direction of the target, distinguishing the falling process from the lying state and reducing the false recognition rate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are some embodiments of the present application; other drawings can be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a flow chart of the fall detection method based on a motion history image;
Fig. 2 is an example motion history image.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Fig. 1 is a flow chart of the fall detection method based on a motion history image, the method comprising the following steps:
step 1, obtaining video images frame by frame from a fixed camera to form an image sequence;
step 2, calculating an update function image for each pair of adjacent frames;
step 3, calculating a motion history image by accumulating the update function images over the most recent period of time;
step 4, binarizing the motion history image to obtain connected regions;
step 5, for each connected region, acquiring the pixel values of the motion history image over that region to determine the motion trend direction of the target in the region;
step 6, judging the process of a person going from standing to fallen according to the angle change of the target's motion trend direction and the size of the connected region, thereby realizing fall detection.
In step 1, the camera is fixed; a moving object produces differences between two adjacent frames of images, while a stationary object produces no pixel value change.
In step 2, the difference between two adjacent frames of images is calculated pixel by pixel using image differencing or optical flow, and whether motion occurred is determined from the calculation result: a pixel takes the value 1 if motion occurred and 0 otherwise.
the difference between two adjacent frames of images is calculated by the following formula:
D(x,y,t)=|I(x,y,t)-I(x,y,t-Δt)|
Figure BDA0004133847080000061
wherein I (x, y, t) is the brightness of the image I at the time t, (x, y) pixel point coordinates, and D (x, y, t) is the brightness difference between the time t and the time t-dt at the time (x, y) pixel point; ψ (x, y, t) is the luminance difference binarized using a threshold value ζ.
In step 3, the motion history image is calculated from the update function over successive image frame pairs:

H(x,y,t) = τ if ψ(x,y,t) = 1, and H(x,y,t) = max(0, H(x,y,t-Δt) - δ) otherwise

wherein H(x,y,t) is the motion history value at pixel (x,y): if the inter-frame brightness difference at time t exceeds the threshold ζ, H(x,y,t) is set to a large value τ; otherwise the motion history decays with time, decreasing by δ at each frame difference until it reaches 0.
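Step 4 of Fig. 1 (binarizing the motion history image and extracting connected regions) is not elaborated above; a plausible sketch using OpenCV's connected-component analysis, with an illustrative minimum-area filter, is:

```python
import cv2
import numpy as np

def extract_motion_regions(mhi: np.ndarray, min_area: int = 500) -> list:
    """Binarize the MHI (any nonzero history counts as motion) and return a
    boolean mask per sufficiently large connected region; label 0 is background."""
    binary = (mhi > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    return [labels == i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
```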
In step 5, the motion direction of the target is calculated from the brightness variation of the pixels in the connected region. Since pixel brightness is linearly related to how long ago motion occurred at that pixel, the direction of maximum brightness change is obtained by computing the gradient of the pixel values over the connected region, and the average gradient direction of the whole region then gives the motion direction of the target.
Calculating the motion direction of the target includes: computing the first-order gradient of the pixel brightness variation within the connected region; the average direction of the first-order gradient is the motion direction of the target.
The linear relation follows from the update formula for H(x,y,t), whose value in the "otherwise" branch decreases step by step with time.
In step 5, once the motion direction of the target is obtained, an angle threshold is set; if the motion direction is inclined downward and its angle with the horizontal exceeds the angle threshold, the target can be judged to be in the process of falling, whereby the fall of the target is detected.
In step 5, for a simple scene the number of moving objects is small and the camera does not move, so the inter-frame variation of the whole scene is small; the average gradient direction of the target region is computed, and whether the motion direction is inclined downward is judged from its orientation relative to the x axis. Complex scenes may include camera rotation and translation as well as large object movements in the scene, such as indoor scenes with many people moving densely. In such scenes the target has no single determined motion direction, and a falling/normal classification model can be trained with a deep learning classifier: after a target region is extracted, the image of the target region is cropped and a CNN-based image classification model is trained. The image information includes the original RGB image and the motion history image (a grayscale image). Specifically, for each connected region, the pixels of the corresponding region of the motion history image are taken as input 1, and the corresponding region of the historical RGB images is taken as input 2; several image frames can be averaged to obtain one RGB image. Input 1 and input 2 are spliced to obtain the input image, and the deep learning classifier is trained with a large number of image/label pairs; such classifiers are standard in the industry and are not described further here. Fig. 2 is an example motion history image.
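A sketch of assembling the two inputs for such a classifier; the crop policy (bounding rectangle of the region) and the channel layout are our assumptions, not specified by the application.

```python
import cv2
import numpy as np

def build_classifier_input(mhi: np.ndarray, rgb_frames: list, region_mask: np.ndarray) -> np.ndarray:
    """Input 1: MHI pixels over the region; input 2: mean of recent RGB frames
    over the same region; spliced along the channel axis into a 4-channel image."""
    x, y, w, h = cv2.boundingRect(region_mask.astype(np.uint8))
    mhi_crop = mhi[y:y + h, x:x + w].astype(np.float32)[..., None]         # H x W x 1
    rgb_mean = np.mean([f[y:y + h, x:x + w] for f in rgb_frames], axis=0)  # H x W x 3
    return np.concatenate([rgb_mean.astype(np.float32), mhi_crop], axis=-1)
```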
In summary, video image frames are read in real time from the camera, and historical frame data are stored in a memory buffer. The inter-frame differences of consecutive frames are calculated from the stored history, the motion history image is calculated from those differences, the motion history image is binarized and several connected regions are extracted, the average motion direction of each connected region is calculated, and whether the target has fallen is judged from the motion direction and the size of the connected region. This scheme judges whether the target falls using the motion history image, which is fast to compute; because the motion information is computed from multiple frames, the method is robust and not prone to false detection.
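An end-to-end sketch wiring together the helper functions from the earlier snippets; the video source and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def run_fall_detection(source=0, zeta=30, tau=255.0, delta=15.0):
    """Read frames, maintain the MHI, and flag falls per connected region."""
    cap = cv2.VideoCapture(source)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("cannot read from video source")
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mhi = np.zeros(prev.shape, np.float32)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        psi = binarized_frame_diff(prev, gray, zeta)     # step 2: update function image
        mhi = update_mhi(mhi, psi, tau, delta)           # step 3: motion history image
        for mask in extract_motion_regions(mhi):         # step 4: connected regions
            dx, dy = region_motion_direction(mhi, mask)  # step 5: motion trend direction
            if is_falling(dx, dy):                       # step 6: fall judgment
                print("fall detected")
        prev = gray
    cap.release()
```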
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein. The detailed description is not intended to limit the scope of the disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A fall detection method based on a motion history image, characterized by comprising the following steps:
step 1, obtaining video images frame by frame from a fixed camera to form an image sequence;
step 2, calculating an update function image for each pair of adjacent frames;
step 3, calculating a motion history image by accumulating the update function images over the most recent period of time;
step 4, binarizing the motion history image to obtain connected regions;
step 5, for each connected region, acquiring the pixel values of the motion history image over that region to determine the motion trend direction of the target in the region;
step 6, judging the process of a person going from standing to fallen according to the angle change of the target's motion trend direction and the size of the connected region, thereby realizing fall detection.
2. The method according to claim 1, wherein in step 1 the camera is fixed, a moving object produces differences between two adjacent frames of images, and a stationary object produces no pixel value change.
3. The method according to claim 1, wherein in step 2 the difference between two adjacent frames of images is calculated pixel by pixel using image differencing or optical flow, and whether motion occurred is determined from the calculation result, a pixel taking the value 1 if motion occurred and 0 otherwise.
4. The method according to claim 3, wherein the difference between two adjacent frames of images is calculated using the formulas:

D(x,y,t) = |I(x,y,t) - I(x,y,t-Δt)|

ψ(x,y,t) = 1 if D(x,y,t) ≥ ζ, and ψ(x,y,t) = 0 otherwise

wherein I(x,y,t) is the brightness of the image I at pixel (x,y) at time t, D(x,y,t) is the brightness difference at pixel (x,y) between time t and time t-Δt, and ψ(x,y,t) is the brightness difference binarized using the threshold ζ.
5. The method of claim 4, wherein in step 3 the motion history image is calculated from the update function over successive image frame pairs:

H(x,y,t) = τ if ψ(x,y,t) = 1, and H(x,y,t) = max(0, H(x,y,t-Δt) - δ) otherwise

wherein H(x,y,t) is the motion history value at pixel (x,y): if the inter-frame brightness difference at time t exceeds the threshold ζ, H(x,y,t) is set to a large value τ; otherwise the motion history decays with time, decreasing by δ at each frame difference until it reaches 0.
6. The method according to claim 1, wherein in step 5 the motion direction of the target is calculated from the brightness variation of the pixels in the connected region; since pixel brightness is linearly related to how long ago motion occurred at that pixel, the direction of maximum brightness change is obtained by computing the gradient of the pixel values over the connected region, and the average gradient direction of the whole region then gives the motion direction of the target.
7. The method of claim 1, wherein calculating the motion direction of the target comprises: computing the first-order gradient of the pixel brightness variation within the connected region, the average direction of the first-order gradient being the motion direction of the target.
8. The method according to claim 6, wherein in step 5, once the motion direction of the target is obtained, an angle threshold is set; if the motion direction is inclined downward and its angle with the horizontal exceeds the angle threshold, the target is judged to be in the process of falling, whereby the fall of the target is detected.
9. A fall detection device based on a motion history image, characterized in that it comprises a processor and a memory storing instructions executable by the processor, the processor performing the method steps according to any one of claims 1 to 8 when the instructions are executed by the processor.
10. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the computer program, when executed by the processor, implements the fall detection method based on a motion history image according to any one of claims 1 to 8.
CN202310263725.9A 2023-03-12 2023-03-12 Tumble detection method and device based on motion history image and electronic equipment Pending CN116363753A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310263725.9A CN116363753A (en) 2023-03-12 2023-03-12 Tumble detection method and device based on motion history image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310263725.9A CN116363753A (en) 2023-03-12 2023-03-12 Tumble detection method and device based on motion history image and electronic equipment

Publications (1)

Publication Number Publication Date
CN116363753A 2023-06-30

Family

ID=86913282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310263725.9A Pending CN116363753A (en) 2023-03-12 2023-03-12 Tumble detection method and device based on motion history image and electronic equipment

Country Status (1)

Country Link
CN (1) CN116363753A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117037272A (en) * 2023-08-08 2023-11-10 深圳市震有智联科技有限公司 Method and system for monitoring fall of old people
CN117037272B (en) * 2023-08-08 2024-03-19 深圳市震有智联科技有限公司 Method and system for monitoring fall of old people

Similar Documents

Publication Publication Date Title
CN109035304B (en) Target tracking method, medium, computing device and apparatus
KR101747216B1 (en) Apparatus and method for extracting target, and the recording media storing the program for performing the said method
US8718324B2 (en) Method, apparatus and computer program product for providing object tracking using template switching and feature adaptation
JP6482195B2 (en) Image recognition apparatus, image recognition method, and program
JP7272024B2 (en) Object tracking device, monitoring system and object tracking method
US9536321B2 (en) Apparatus and method for foreground object segmentation
JP6024658B2 (en) Object detection apparatus, object detection method, and program
WO2014136623A1 (en) Method for detecting and tracking objects in sequence of images of scene acquired by stationary camera
JP7093427B2 (en) Object tracking methods and equipment, electronic equipment and storage media
CN109727275B (en) Object detection method, device, system and computer readable storage medium
JP2008192131A (en) System and method for performing feature level segmentation
JP2008527525A (en) Method and electronic device for detecting graphical objects
US20190206065A1 (en) Method, system, and computer-readable recording medium for image object tracking
JP2016085487A (en) Information processing device, information processing method and computer program
JP2020149641A (en) Object tracking device and object tracking method
CN116363753A (en) Tumble detection method and device based on motion history image and electronic equipment
CN109543487B (en) Automatic induction triggering method and system based on bar code edge segmentation
CN114613006A (en) Remote gesture recognition method and device
CN114842213A (en) Obstacle contour detection method and device, terminal equipment and storage medium
CN107452019B (en) Target detection method, device and system based on model switching and storage medium
US11507768B2 (en) Information processing apparatus, information processing method, and storage medium
CN110580706A (en) Method and device for extracting video background model
JP2008269181A (en) Object detector
CN109389089B (en) Artificial intelligence algorithm-based multi-person behavior identification method and device
CN108711164B (en) Motion detection method based on L BP and Color characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination