CN113837055A - Fall detection method and device, electronic equipment and storage medium - Google Patents

Fall detection method and device, electronic equipment and storage medium

Info

Publication number
CN113837055A
Authority
CN
China
Prior art keywords
distance
fall
network model
diagram
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111101066.6A
Other languages
Chinese (zh)
Inventor
方震
姚奕成
赵荣建
何光强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Runnan Medical Electronic Research Institute Co ltd
Original Assignee
Nanjing Runnan Medical Electronic Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Runnan Medical Electronic Research Institute Co ltd filed Critical Nanjing Runnan Medical Electronic Research Institute Co ltd
Priority to CN202111101066.6A
Publication of CN113837055A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 - Pattern recognition
                    • G06F 18/20 - Analysing
                        • G06F 18/24 - Classification techniques
            • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 20/00 - Machine learning
                • G06N 3/00 - Computing arrangements based on biological models
                    • G06N 3/02 - Neural networks
                        • G06N 3/04 - Architecture, e.g. interconnection topology
                            • G06N 3/045 - Combinations of networks
                        • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a fall detection method and device, an electronic device and a storage medium. The method comprises the following steps: generating a distance-velocity diagram of the human body according to radar reflection signals; generating a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body according to the radar reflection signals; extracting a first feature from the distance-velocity diagram, a second feature from the distance-vertical angle diagram, and a third feature from the distance-horizontal angle diagram; and inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result. Because the distance-velocity diagram, the distance-vertical angle diagram and the distance-horizontal angle diagram of the human body are generated from the radar reflection signals and analyzed to obtain the detection result, signals related to the human body are used, which avoids interference from environmental factors, makes more comprehensive use of the information contained in the radar reflection signals, and improves the accuracy of fall detection.

Description

Fall detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of motion recognition technologies, and in particular, to a fall detection method and apparatus, an electronic device, and a storage medium.
Background
Falls are common among the elderly: the probability of falling exceeds 20% for people over 60 years of age and 33% for people over 80. Falls are one of the leading causes of disability, long-term pain and death in the elderly, and the fifth leading cause of death in this group. A fall can seriously affect an elderly person's quality of life and may even lead to loss of the ability to live independently, or to death. Elderly people living alone may lie on the ground for several hours before anyone notices, missing the opportunity for timely treatment. To date, many fall detection studies have been based on RGB cameras or wearable devices. However, these methods are inconvenient or unacceptable for the elderly, who may forget to wear a wearable device or may refuse to have a camera installed at home because of privacy concerns. Radar-based approaches address the limitations of wearable devices and cameras well: they can monitor the user unobtrusively, without the user having to be aware of the monitoring and without any risk of privacy disclosure. Although significant progress has been made in radar-based fall detection, past research still has significant limitations. Existing radar-based fall detection research uses continuous-wave radar or ultra-wideband radar and infers whether a person has fallen from the person's velocity or distance information. Fall detection methods provided by the related art therefore typically suffer from one or more of the following disadvantages:
(1) Because the information is not rich enough, it is difficult to distinguish actions such as sitting down quickly or sitting down on the ground from falls, since their changes in velocity or distance are similar to those of a fall.
(2) The system runs continuously at all times instead of judging the action type only after human motion has been detected, which wastes the device's energy and computing resources.
(3) The signals contain environmental information, so the classification process is susceptible to environmental changes.
Disclosure of Invention
The embodiment of the disclosure provides a fall detection method, a fall detection device, an electronic device and a storage medium, which can improve the accuracy of fall detection.
Therefore, the embodiment of the disclosure provides the following technical scheme:
in a first aspect, an embodiment of the present disclosure provides a fall detection method, including:
generating a distance-velocity diagram of the human body according to the radar reflection signals;
generating a distance-vertical angle diagram of the human body and a distance-horizontal angle diagram of the human body according to the radar reflection signals;
inputting the distance-velocity diagram into a first network model to obtain a first feature;
inputting the distance-vertical angle diagram into a second network model to obtain a second feature;
inputting the distance-horizontal angle diagram into a third network model to obtain a third feature;
and inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
Optionally, after generating the distance-velocity map of the human body according to the radar reflection signal, the method further includes:
judging whether a motion event occurs according to the distance-velocity diagram;
if not, the subsequent steps are abandoned.
Optionally, the determining whether a motion event occurs according to the distance-velocity map includes:
acquiring the distance-velocity maps of a current frame and of the N frames nearest to the current frame as reference distance-velocity maps;
counting, for any two adjacent frames of the reference distance-velocity maps, the pixel points whose pixel difference value is greater than a set threshold, to obtain N-1 count values;
calculating the information entropy of the count values;
and judging whether the information entropy is larger than a set value or not, and if so, determining that a motion event occurs.
Optionally, the training method of the first network model includes the following steps:
obtaining sample signals, wherein the types of the sample signals comprise walking, sitting down, standing up, micro-motion, macro-motion and falling down;
generating a sample distance-velocity map, a sample distance-vertical angle map and a sample distance-horizontal angle map from the sample signal;
establishing a first original network model;
and taking the sample distance-velocity diagram, the sample distance-vertical angle diagram and the sample distance-horizontal angle diagram as input, taking walking, sitting down, standing up, micro-motion, macro-motion and falling down as labels, and training the first original network model to obtain a first network model.
Optionally, training the first original network model comprises:
establishing a loss function, wherein the loss function comprises a cross-entropy loss term, a fall precision loss term and a fall recall loss term;
substituting an output value of the first original network model into the loss function to obtain a loss value;
and adjusting the first original network model according to the loss value.
Optionally, the loss function is as follows:
L_recall = 1 - sum(P_fall-fall) / num(fall)

L_precision = 1 - sum(P_fall-fall) / sum(P_all-fall)

L = L_cross-entropy + C1 * L_recall + C2 * L_precision

wherein sum(P_fall-fall) represents the number of samples that are labeled as falls and classified as falls, num(fall) represents the number of samples labeled as falls, sum(P_all-fall) represents the number of samples classified as falls, L_recall is the fall recall loss term, L_precision is the fall precision loss term, L is the loss value, L_cross-entropy is the cross-entropy loss term, C1 is the weight of the fall recall term, and C2 is the weight of the fall precision term.
Optionally, C1 is 0.3 and C2 is 0.7.
Optionally, generating a sample distance-velocity map, a sample distance-vertical angle map, and a sample distance-horizontal angle map from the sample signal comprises:
generating an initial sample distance-velocity map, an initial sample distance-vertical angle map and an initial sample distance-horizontal angle map from the sample signal;
determining the coordinates of the human body according to the distance-horizontal angle diagram;
cropping the initial sample distance-velocity map and retaining an area within +/-20 points of the human body's distance coordinate, to obtain the sample distance-velocity map;
cropping the initial sample distance-vertical angle map and retaining an area within +/-20 points of the human body's distance coordinate, to obtain the sample distance-vertical angle map;
and cropping the initial sample distance-horizontal angle map and retaining an area within +/-20 points of the human body's distance coordinate and within +/-25 points of the horizontal-angle coordinate, to obtain the sample distance-horizontal angle map.
Optionally, the first network model comprises a first feature extraction module and a first activation function layer;
the first feature extraction module is used for extracting the first feature, and comprises a first convolution block, a second convolution block, a feature flattening layer, a self-attention layer, a first fully connected layer and a second fully connected layer which are sequentially connected;
the first activation function layer is used for generating a first preliminary detection result according to the first feature.
In a second aspect, embodiments of the present disclosure provide a fall detection apparatus, including:
the first data processing module is used for generating a distance-velocity diagram of the human body according to the radar reflection signals;
the second data processing module is used for generating a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body according to the radar reflection signals;
the first feature extraction module is used for inputting the distance-speed diagram into a first network model to obtain a first feature;
the second feature extraction module is used for inputting the distance-vertical angle diagram into a second network model to obtain a second feature;
the third feature extraction module is used for inputting the distance-horizontal angle diagram into a third network model to obtain a third feature;
and the classification module is used for inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a processor; and
a memory for storing a program,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method of any of the embodiments described above.
In a fourth aspect, the embodiments of the present disclosure provide a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the embodiments described above.
One or more technical solutions provided in the embodiments of the present disclosure have the following advantages:
According to the fall detection method provided by the embodiments of the present disclosure, the distance-velocity diagram of the human body, the distance-vertical angle diagram of the human body and the distance-horizontal angle diagram of the human body are generated according to the radar reflection signals and analyzed to obtain the detection result; adopting signals related to the human body avoids interference from environmental factors, makes more comprehensive use of the information contained in the radar reflection signals, and improves the accuracy of fall detection.
Drawings
Fig. 1 is a flow chart of a fall detection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a first network model according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a fall detection apparatus according to an embodiment of the present disclosure.
FIG. 4 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be described in further detail below with reference to the accompanying drawings in conjunction with the detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The described embodiments are only some, but not all embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In the description of the present disclosure, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, technical features involved in different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flow chart of a fall detection method according to an embodiment of the present disclosure. As shown in fig. 1, the present disclosure provides a fall detection method comprising the steps of:
s101: and generating a distance-velocity diagram of the human body according to the radar reflection signals. The radar can be frequency-modulated continuous wave radar with frequency of 10-20 hz.
S102: generating a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body according to the radar reflection signals. The distance-velocity diagram, the distance-vertical angle diagram and the distance-horizontal angle diagram of the human body may be generated from 2.5 s to 5 s of continuous radar reflection signals; for example, every 3 s of radar reflection signals may be selected to generate a distance-velocity diagram, a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body (an illustrative processing sketch is given after the method overview below).
S103: the distance-velocity map is input into a first network model to obtain a first feature.
S104: and inputting the distance-vertical angle diagram into a second network model to obtain a second characteristic.
S105: and inputting the distance-horizontal angle diagram into a third network model to obtain a third characteristic.
S106: and inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
According to the fall detection method provided by the embodiments of the present disclosure, signal maps are generated from the radar reflection signals, including a distance-velocity diagram of the human body, a distance-vertical angle diagram of the human body and a distance-horizontal angle diagram of the human body, and the signal maps are analyzed to obtain the detection result. Compared with fall analysis methods in the related art that use the complete radar reflection signal, using signals related to the human body avoids interference from environmental factors; compared with fall detection methods in the related art that use only a human-body velocity signal derived from the radar reflection signal, the information contained in the radar reflection signal is used more comprehensively. The accuracy of fall detection is therefore improved.
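The disclosure does not restrict how the three maps in steps S101 and S102 are computed from the raw radar data. For reference only, the following is a minimal sketch of a conventional FMCW processing chain, assuming the radar delivers, per frame, a raw data cube of shape (chirps, samples, vertical receive channels, horizontal receive channels); the antenna layout, the FFT sizes and the function name radar_maps are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def radar_maps(cube):
    """One frame of FMCW processing: a range FFT followed by Doppler and
    angle FFTs. `cube` is assumed to have shape
    (n_chirps, n_samples, n_rx_vertical, n_rx_horizontal)."""
    rng = np.fft.fft(cube, axis=1)                        # range FFT (fast time)

    # Doppler FFT over the chirps -> distance-velocity diagram.
    rv = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)
    distance_velocity = np.abs(rv).sum(axis=(2, 3)).T     # (range, velocity)

    # Angle FFTs over the receive channels -> distance-angle diagrams.
    rva = np.fft.fftshift(np.fft.fft(rng, n=64, axis=2), axes=2)
    distance_vertical = np.abs(rva).sum(axis=(0, 3))      # (range, vertical angle)

    rha = np.fft.fftshift(np.fft.fft(rng, n=64, axis=3), axes=3)
    distance_horizontal = np.abs(rha).sum(axis=(0, 2))    # (range, horizontal angle)

    return distance_velocity, distance_vertical, distance_horizontal
```

Maps from successive frames (for example, 3 s of frames at a 20 Hz frame rate) can then be stacked or averaged before being fed to the network models.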
After the distance-velocity diagram of the human body is generated according to the radar reflection signals, the method further comprises the following steps: judging whether a motion event occurs according to the distance-velocity diagram; if so, the subsequent steps are executed; otherwise, this step is repeated until a motion event occurs. Judging whether a motion event occurs provides a preliminary screen for falls: stationary states and weak motion are excluded, which avoids the waste of device energy and computing resources caused by running the whole system continuously.
Judging whether a motion event occurs according to the distance-velocity map comprises the following steps:
and acquiring the distance-velocity map of the current frame and the N frames nearest to the current frame as a reference distance-velocity map. N may be selected to be 10. Counting pixel points of which the pixel difference value is greater than a set threshold value in a reference distance-speed graph of any two adjacent frames to obtain N-1 count values; calculating the information entropy of the counting value; and judging whether the information entropy is larger than a set value or not, and if so, generating a motion event. . Subtracting the reference distance-velocity maps of two adjacent frames can reduce the interference of noise and extract the change of distance and velocity. If the object is moving, there will be some relatively large pixel values at the corresponding distance and velocity coordinate points on the distance-velocity map. If there is no object or the object is stationary, the pixel values of the distance-velocity map are small. In the process of human body movement, the movement speeds and amplitudes of different parts of the human body are different, and the reflected signals of all parts of the human body received by the radar often change along with the time, so that the distance-speed graph obtained by subtracting two adjacent frames continuously changes along with the movement of the human body. The accuracy of judging whether the motion event occurs is improved by counting the change condition of the distance-speed graph of two adjacent frames.
In some embodiments, the training method of the first network model comprises the following steps:
Sample signals are obtained. Fall detection is essentially a binary classification problem, but because the number of fall samples is much smaller than the number of non-fall samples, training the first original network model on such an unbalanced sample set would degrade the performance of the network; the first original network model is therefore designed for a six-class problem, which alleviates the sample imbalance. The categories of the sample signals include walking, sitting down, standing up, micro-motion, macro-motion and falling. A sample distance-velocity map is generated from the sample signals; a first original network model is established; and the first original network model is trained, with the sample distance-velocity map as input and with walking, sitting down, standing up, micro-motion, macro-motion and falling as labels, to obtain the first network model.
Training the first original network model comprises:
A loss function is established, which comprises a cross-entropy loss term, a fall precision loss term and a fall recall loss term; the output values of the first original network model are substituted into the loss function to obtain a loss value; and the first original network model is adjusted according to the loss value. A six-class network optimizes not only the classification accuracy of the fall category during training but also that of the other categories, which reduces the network's attention to the fall category. To address this and improve the performance of the fall detection network, two loss-function constraints on the fall category are designed: a fall recall loss term and a fall precision loss term.
The loss function is as follows:
L_recall = 1 - sum(P_fall-fall) / num(fall)

L_precision = 1 - sum(P_fall-fall) / sum(P_all-fall)

L = L_cross-entropy + C1 * L_recall + C2 * L_precision

wherein sum(P_fall-fall) represents the number of samples that are labeled as falls and classified as falls, num(fall) represents the number of samples labeled as falls, sum(P_all-fall) represents the number of samples classified as falls, L_recall is the fall recall loss term, L_precision is the fall precision loss term, L is the loss value, L_cross-entropy is the cross-entropy loss term, C1 is the weight of the fall recall term, and C2 is the weight of the fall precision term.
C1 and C2 control how strongly the recall and precision of the fall category are optimized during training; by setting different weight coefficients, the first original network model can be made to pay more attention to improving either the recall or the precision of the fall category. In practice, the number of non-fall actions is much greater than the number of fall actions. If the probability of a non-fall action being misclassified as a fall is too high, the model becomes unreliable and causes unnecessary disturbance to the user. It is therefore important to reduce the number of non-fall actions wrongly classified as falls while ensuring that fall actions are correctly recognized. In this model, the weight coefficients C1 and C2 can be set to 0.3 and 0.7 respectively, so that the model pays more attention to fall precision during training and too many non-fall samples are prevented from being mistaken for falls.
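The loss described above can be sketched in PyTorch as follows. The disclosure does not give a differentiable form for the recall and precision terms, so this sketch uses the per-sample fall probabilities as soft counts to keep both terms differentiable; the class index FALL and the function name fall_loss are assumptions.

```python
import torch
import torch.nn.functional as F

FALL = 5                 # assumed index of the "fall" class among the six labels
C1, C2 = 0.3, 0.7        # weights suggested in the description

def fall_loss(logits, labels):
    ce = F.cross_entropy(logits, labels)            # cross-entropy loss term

    probs = torch.softmax(logits, dim=1)
    p_fall = probs[:, FALL]                         # predicted probability of "fall"
    is_fall = (labels == FALL).float()

    tp = (p_fall * is_fall).sum()                   # soft count of correctly found falls
    recall = tp / is_fall.sum().clamp(min=1.0)      # falls found / falls labeled
    precision = tp / p_fall.sum().clamp(min=1e-6)   # falls found / falls predicted

    return ce + C1 * (1.0 - recall) + C2 * (1.0 - precision)
```

With C1 = 0.3 and C2 = 0.7 the precision term dominates, matching the stated goal of keeping non-fall actions from being misclassified as falls.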
In some embodiments, the training method of the second network model includes the following steps:
Sample signals are obtained. Fall detection is essentially a binary classification problem, but because the number of fall samples is much smaller than the number of non-fall samples, training the second original network model on such an unbalanced sample set would degrade the performance of the network; the second original network model is therefore designed for a six-class problem, which alleviates the sample imbalance. The categories of the sample signals include walking, sitting down, standing up, micro-motion, macro-motion and falling. A sample distance-vertical angle map is generated from the sample signals; a second original network model is established; and the second original network model is trained, with the sample distance-vertical angle map as input and with walking, sitting down, standing up, micro-motion, macro-motion and falling as labels, to obtain the second network model.
Training the second original network model comprises:
A loss function is established, which comprises a cross-entropy loss term, a fall precision loss term and a fall recall loss term; the output values of the second original network model are substituted into the loss function to obtain a loss value; and the second original network model is adjusted according to the loss value. A six-class network optimizes not only the classification accuracy of the fall category during training but also that of the other categories, which reduces the network's attention to the fall category. To address this and improve the performance of the fall detection network, two loss-function constraints on the fall category are designed: a fall recall loss term and a fall precision loss term.
The loss function is as follows:
L_recall = 1 - sum(P_fall-fall) / num(fall)

L_precision = 1 - sum(P_fall-fall) / sum(P_all-fall)

L = L_cross-entropy + C1 * L_recall + C2 * L_precision

wherein sum(P_fall-fall) represents the number of samples that are labeled as falls and classified as falls, num(fall) represents the number of samples labeled as falls, sum(P_all-fall) represents the number of samples classified as falls, L_recall is the fall recall loss term, L_precision is the fall precision loss term, L is the loss value, L_cross-entropy is the cross-entropy loss term, C1 is the weight of the fall recall term, and C2 is the weight of the fall precision term.
C1 and C2 control how strongly the recall and precision of the fall category are optimized during training; by setting different weight coefficients, the second original network model can be made to pay more attention to improving either the recall or the precision of the fall category. In practice, the number of non-fall actions is much greater than the number of fall actions. If the probability of a non-fall action being misclassified as a fall is too high, the model becomes unreliable and causes unnecessary disturbance to the user. It is therefore important to reduce the number of non-fall actions wrongly classified as falls while ensuring that fall actions are correctly recognized. In this model, the weight coefficients C1 and C2 can be set to 0.3 and 0.7 respectively, so that the model pays more attention to fall precision during training and too many non-fall samples are prevented from being mistaken for falls.
In some embodiments, the training method of the third network model includes the following steps:
Sample signals are obtained. Fall detection is essentially a binary classification problem, but because the number of fall samples is much smaller than the number of non-fall samples, training the third original network model on such an unbalanced sample set would degrade the performance of the network; the third original network model is therefore designed for a six-class problem, which alleviates the sample imbalance. The categories of the sample signals include walking, sitting down, standing up, micro-motion, macro-motion and falling. A sample distance-horizontal angle map is generated from the sample signals; a third original network model is established; and the third original network model is trained, with the sample distance-horizontal angle map as input and with walking, sitting down, standing up, micro-motion, macro-motion and falling as labels, to obtain the third network model.
Training the third original network model comprises:
A loss function is established, which comprises a cross-entropy loss term, a fall precision loss term and a fall recall loss term; the output values of the third original network model are substituted into the loss function to obtain a loss value; and the third original network model is adjusted according to the loss value. A six-class network optimizes not only the classification accuracy of the fall category during training but also that of the other categories, which reduces the network's attention to the fall category. To address this and improve the performance of the fall detection network, two loss-function constraints on the fall category are designed: a fall recall loss term and a fall precision loss term.
The loss function is as follows:
L_recall = 1 - sum(P_fall-fall) / num(fall)

L_precision = 1 - sum(P_fall-fall) / sum(P_all-fall)

L = L_cross-entropy + C1 * L_recall + C2 * L_precision

wherein sum(P_fall-fall) represents the number of samples that are labeled as falls and classified as falls, num(fall) represents the number of samples labeled as falls, sum(P_all-fall) represents the number of samples classified as falls, L_recall is the fall recall loss term, L_precision is the fall precision loss term, L is the loss value, L_cross-entropy is the cross-entropy loss term, C1 is the weight of the fall recall term, and C2 is the weight of the fall precision term.
C1 and C2 control how strongly the recall and precision of the fall category are optimized during training; by setting different weight coefficients, the third original network model can be made to pay more attention to improving either the recall or the precision of the fall category. In practice, the number of non-fall actions is much greater than the number of fall actions. If the probability of a non-fall action being misclassified as a fall is too high, the model becomes unreliable and causes unnecessary disturbance to the user. It is therefore important to reduce the number of non-fall actions wrongly classified as falls while ensuring that fall actions are correctly recognized. In this model, the weight coefficients C1 and C2 can be set to 0.3 and 0.7 respectively, so that the model pays more attention to fall precision during training and too many non-fall samples are prevented from being mistaken for falls.
Fig. 2 is a schematic diagram of a first network model according to an embodiment of the present disclosure. As shown in fig. 2, the first network model includes a first feature extraction module and a first activation function layer; the first feature extraction module is used for extracting the first feature and comprises a first convolution block, a second convolution block, a feature flattening layer, a self-attention layer, a first fully connected layer and a second fully connected layer which are connected in sequence; the first activation function layer is used for generating a first preliminary detection result according to the first feature. The first preliminary detection result may optionally include six probabilities corresponding to walking, sitting down, standing up, micro-motion, macro-motion and falling.
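A possible realization of the first network model described above is sketched below in PyTorch. The channel counts, kernel sizes, attention width and hidden size are not specified in the disclosure and are chosen only for illustration, as is the class name FirstNetworkModel; the second and third network models can be built analogously.

```python
import torch
import torch.nn as nn

class FirstNetworkModel(nn.Module):
    """Two convolution blocks, a feature flattening layer, a self-attention
    layer, two fully connected layers, and a softmax activation that turns
    the first feature into six class probabilities."""

    def __init__(self, n_classes=6, hidden=128):
        super().__init__()
        self.conv1 = nn.Sequential(                       # first convolution block
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(                       # second convolution block
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(), nn.MaxPool2d(2))
        self.flatten = nn.Flatten(start_dim=2)            # feature flattening layer
        self.attention = nn.MultiheadAttention(embed_dim=32, num_heads=4,
                                               batch_first=True)
        self.fc1 = nn.LazyLinear(hidden)                  # first fully connected layer
        self.fc2 = nn.Linear(hidden, n_classes)           # second fully connected layer

    def forward(self, x):                                 # x: (B, 1, H, W) map
        x = self.conv2(self.conv1(x))                     # (B, 32, H', W')
        tokens = self.flatten(x).transpose(1, 2)          # (B, H'*W', 32)
        attended, _ = self.attention(tokens, tokens, tokens)
        h = torch.relu(self.fc1(attended.flatten(1)))
        feature = self.fc2(h)                             # first feature
        probs = torch.softmax(feature, dim=1)             # first preliminary detection result
        return feature, probs
```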
The second network model comprises a second feature extraction module and a second activation function layer; the second feature extraction module is used for extracting the second feature, and the second activation function layer is used for generating a second preliminary detection result according to the second feature. The second preliminary detection result may optionally include six probabilities corresponding to walking, sitting down, standing up, micro-motion, macro-motion and falling.
The third network model comprises a third feature extraction module and a third activation function layer; the third feature extraction module is used for extracting the third feature, and the third activation function layer is used for generating a third preliminary detection result according to the third feature. The third preliminary detection result may optionally include six probabilities corresponding to walking, sitting down, standing up, micro-motion, macro-motion and falling.
In some embodiments, a method of training a machine learning classifier comprises:
Sample signals are obtained; the categories of the sample signals include walking, sitting down, standing up, micro-motion, macro-motion and falling. A sample distance-velocity map, a sample distance-vertical angle map and a sample distance-horizontal angle map are generated from the sample signals, and an original machine learning classifier is established.
The sample distance-velocity map is input into a first network model, the sample distance-vertical angle map is input into a second network model, and the sample distance-horizontal angle map is input into a third network model.
The first feature extracted by the first feature extraction module, the second feature extracted by the second feature extraction module and the third feature extracted by the third feature extraction module are input into the original machine learning classifier, and the original machine learning classifier is trained with the category of each sample as its label, to obtain the machine learning classifier.
When a person moves, the three signal maps change in different ways, so the corresponding network models observe the same movement differently. Fusing the outputs of the three network models corresponding to the three signal maps therefore improves the accuracy of the overall model.
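As an illustration of this fusion step, the sketch below concatenates the features produced by the three trained network models and feeds them to a support vector machine; the disclosure only requires "a machine learning classifier", so the choice of an SVM and the helper name extract_features are assumptions.

```python
import torch
from sklearn.svm import SVC

def extract_features(model_rv, model_rva, model_rha, rv, rva, rha):
    """Concatenate the first, second and third features for a batch of maps."""
    with torch.no_grad():
        f1, _ = model_rv(rv)      # first feature  (distance-velocity branch)
        f2, _ = model_rva(rva)    # second feature (distance-vertical angle branch)
        f3, _ = model_rha(rha)    # third feature  (distance-horizontal angle branch)
    return torch.cat([f1, f2, f3], dim=1).cpu().numpy()

# Train on the labelled six-category sample set, then classify new maps:
clf = SVC(probability=True)
# clf.fit(extract_features(model_rv, model_rva, model_rha, rv_s, rva_s, rha_s), y_s)
# result = clf.predict(extract_features(model_rv, model_rva, model_rha, rv_x, rva_x, rha_x))
```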
In some embodiments, generating the sample distance-velocity map, the sample distance-vertical angle map, and the sample distance-horizontal angle map from the sample signal comprises:
generating an initial sample distance-velocity map, an initial sample distance-vertical angle map and an initial sample distance-horizontal angle map from the sample signal; determining the coordinates of the human body according to the distance-horizontal angle map; cropping the initial sample distance-velocity map in the distance direction and retaining the area within +/-20 points of the human body's distance coordinate to obtain the sample distance-velocity map; cropping the initial sample distance-vertical angle map in the distance direction and retaining the area within +/-20 points of the human body's distance coordinate to obtain the sample distance-vertical angle map; and cropping the initial sample distance-horizontal angle map in both the distance direction and the horizontal-angle direction, retaining the area within +/-20 points of the human body's distance coordinate and within +/-25 points of its horizontal-angle coordinate, to obtain the sample distance-horizontal angle map.
The complete signal maps record the absolute range, absolute angle and velocity information of the human body's motion. The position of the person can be calculated from the absolute range and absolute angle information, but if these data were fed directly into the original network models for training, the models would tend to learn the spatial position of the human body and neglect learning its motion. In addition, multipath effects sometimes introduce strong interference, and the interfering signal may appear in the signal maps at positions other than the human body's position. Locating the human body first and then cropping the signal maps around it therefore improves the accuracy of the original network models. Determining the coordinates of the human body from the distance-horizontal angle map includes:
the distance-horizontal angle maps of 60 consecutive frames are added in rows and columns, the row and column with the largest value being selected. And storing the row and column coordinates of the maximum value to form two arrays, and taking the value as the human body coordinate.
Alternatively, in a distance-horizontal angle diagram where the signal is 60 frames, the size of each frame is (m, n), where m is the distance and n represents the angle.
Adding each frame of picture in rows and columns to form two arrays with length of miAnd niAnd i denotes the ith frame. Respectively calculate the array miAnd niMaximum value of (m _ max)iAnd n _ maxi. Save the maximum value of 60 frames as an array [ m _ max1,……m_max60And [ n _ max ]1,……n_max60[ MEANS FOR solving PROBLEMS ] is provided. The median m _ mean and n _ mean of the two arrays are calculated. The coordinate of the human body is distance m _ mean and angle n _ mean.
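The coordinate estimation and the cropping described above can be sketched as follows, assuming the maps are NumPy arrays with distance on the first axis; the function names locate_person and crop_maps are illustrative, and the per-frame maxima are represented here by their positions (indices) so that the median directly yields a coordinate.

```python
import numpy as np

def locate_person(rha_frames):
    """Estimate the human body coordinate (distance bin, horizontal-angle bin)
    from the distance-horizontal angle maps of 60 consecutive frames."""
    m_max, n_max = [], []
    for frame in rha_frames:                              # frame shape: (m, n)
        m_max.append(int(np.argmax(frame.sum(axis=1))))   # strongest distance row
        n_max.append(int(np.argmax(frame.sum(axis=0))))   # strongest angle column
    return int(np.median(m_max)), int(np.median(n_max))   # m_mean, n_mean

def crop_maps(rv, rva, rha, r_bin, a_bin):
    """Crop the three maps around the person: +/-20 distance points for all
    three maps, plus +/-25 horizontal-angle points for the third map."""
    r0, r1 = max(r_bin - 20, 0), r_bin + 21
    a0, a1 = max(a_bin - 25, 0), a_bin + 26
    return rv[r0:r1, :], rva[r0:r1, :], rha[r0:r1, a0:a1]
```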
Fig. 3 is a block diagram of a fall detection apparatus according to an embodiment of the present disclosure. As shown in fig. 3, an embodiment of the present disclosure also provides a fall detection apparatus, including:
the first data processing module 31 is used for generating a distance-velocity diagram of the human body according to the radar reflection signals;
the second data processing module 32 is used for generating a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body according to the radar reflection signals;
a first feature extraction module 33, configured to input the distance-velocity map into the first network model to obtain a first feature;
a second feature extraction module 34, configured to input the distance-vertical angle map into a second network model to obtain a second feature;
the third feature extraction module 35 is configured to input the distance-horizontal angle map into a third network model to obtain a third feature;
and the classification module 36 is configured to input the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
An exemplary embodiment of the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, is for causing the electronic device to perform a method according to an embodiment of the disclosure.
As shown in fig. 4, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. Various programs and data required for the operation of the device 800 can also be stored in the RAM 803. The computing unit 801, the ROM 802 and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the electronic device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 808 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth (TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above. For example, in some embodiments, the fall detection method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. In some embodiments, the computing unit 801 may be configured to perform the fall detection method in any other suitable manner (e.g., by means of firmware).
The disclosed exemplary embodiments also provide a non-transitory computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
It is to be understood that the above-described embodiments of the present invention merely illustrate or explain the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention shall be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (10)

1. A fall detection method, comprising:
generating a distance-velocity diagram of the human body according to the radar reflection signals;
generating a distance-vertical angle diagram of the human body and a distance-horizontal angle diagram of the human body according to the radar reflection signals;
inputting the distance-velocity diagram into a first network model to obtain a first feature;
inputting the distance-vertical angle diagram into a second network model to obtain a second feature;
inputting the distance-horizontal angle diagram into a third network model to obtain a third feature;
and inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
2. A fall detection method as claimed in claim 1, wherein, after generating the distance-velocity diagram of the human body according to the radar reflection signals, the method further comprises:
judging whether a motion event occurs according to the distance-velocity diagram;
if not, the subsequent steps are abandoned.
3. A fall detection method as claimed in claim 2, wherein determining from the distance-velocity map whether a motion event has occurred comprises:
acquiring the distance-velocity maps of a current frame and of the N frames nearest to the current frame as reference distance-velocity maps;
counting, for any two adjacent frames of the reference distance-velocity maps, the pixel points whose pixel difference value is greater than a set threshold, to obtain N-1 count values;
calculating the information entropy of the count values;
and judging whether the information entropy is larger than a set value or not, and if so, determining that a motion event occurs.
4. A fall detection method as claimed in claim 1, wherein the training method of the first network model comprises the steps of:
obtaining sample signals, wherein the types of the sample signals comprise walking, sitting down, standing up, micro-motion, macro-motion and falling down;
generating a sample distance-velocity map, a sample distance-vertical angle map and a sample distance-horizontal angle map from the sample signal;
establishing a first original network model;
and taking the sample distance-velocity diagram, the sample distance-vertical angle diagram and the sample distance-horizontal angle diagram as input, taking walking, sitting down, standing up, micro-motion, macro-motion and falling down as labels, and training the first original network model to obtain a first network model.
5. Fall detection method according to claim 4, wherein training the first original network model comprises:
establishing a loss function, wherein the loss function comprises a cross-entropy loss term, a fall precision loss term and a fall recall loss term;
substituting an output value of the first original network model into the loss function to obtain a loss value;
and adjusting the first original network model according to the loss value.
6. Fall detection method according to claim 5, wherein the loss function is as follows:
L_recall = 1 - sum(P_fall-fall) / num(fall)

L_precision = 1 - sum(P_fall-fall) / sum(P_all-fall)

L = L_cross-entropy + C1 * L_recall + C2 * L_precision

wherein sum(P_fall-fall) represents the number of samples that are labeled as falls and classified as falls, num(fall) represents the number of samples labeled as falls, sum(P_all-fall) represents the number of samples classified as falls, L_recall is the fall recall loss term, L_precision is the fall precision loss term, L is the loss value, L_cross-entropy is the cross-entropy loss term, C1 is the weight of the fall recall term, and C2 is the weight of the fall precision term.
7. Fall detection method according to claim 6, wherein C1 is 0.3 and C2 is 0.7.
8. Fall detection method according to claim 4, wherein generating a sample distance-velocity map, a sample distance-vertical angle map and a sample distance-horizontal angle map from the sample signal comprises:
generating an initial sample distance-velocity map, an initial sample distance-vertical angle map and an initial sample distance-horizontal angle map from the sample signal;
determining the coordinates of the human body according to the distance-horizontal angle diagram;
cropping the initial sample distance-velocity map and retaining an area within +/-20 points of the human body's distance coordinate, to obtain the sample distance-velocity map;
cropping the initial sample distance-vertical angle map and retaining an area within +/-20 points of the human body's distance coordinate, to obtain the sample distance-vertical angle map;
and cropping the initial sample distance-horizontal angle map and retaining an area within +/-20 points of the human body's distance coordinate and within +/-25 points of the horizontal-angle coordinate, to obtain the sample distance-horizontal angle map.
9. A fall detection method as claimed in any of claims 1 to 8, wherein the first network model comprises a first feature extraction module and a first activation function layer;
the first feature extraction module is used for extracting the first feature, and comprises a first convolution block, a second convolution block, a feature flattening layer, a self-attention layer, a first fully connected layer and a second fully connected layer which are sequentially connected;
the first activation function layer is used for generating a first preliminary detection result according to the first feature.
10. A fall detection apparatus, comprising:
the first data processing module is used for generating a distance-velocity diagram of the human body according to the radar reflection signals;
the second data processing module is used for generating a distance-vertical angle diagram and a distance-horizontal angle diagram of the human body according to the radar reflection signals;
the first feature extraction module is used for inputting the distance-speed diagram into a first network model to obtain a first feature;
the second feature extraction module is used for inputting the distance-vertical angle diagram into a second network model to obtain a second feature;
the third feature extraction module is used for inputting the distance-horizontal angle diagram into a third network model to obtain a third feature;
and the classification module is used for inputting the first feature, the second feature and the third feature into a machine learning classifier to obtain a detection result.
CN202111101066.6A 2021-09-18 2021-09-18 Fall detection method and device, electronic equipment and storage medium Pending CN113837055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111101066.6A CN113837055A (en) 2021-09-18 2021-09-18 Fall detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111101066.6A CN113837055A (en) 2021-09-18 2021-09-18 Fall detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113837055A true CN113837055A (en) 2021-12-24

Family

ID=78960020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101066.6A Pending CN113837055A (en) 2021-09-18 2021-09-18 Fall detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113837055A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210173045A1 (en) * 2015-07-17 2021-06-10 Yuqian HU Method, apparatus, and system for fall-down detection based on a wireless signal
CN109271838A (en) * 2018-07-19 2019-01-25 重庆邮电大学 A kind of three parameter attributes fusion gesture identification method based on fmcw radar
US20200143656A1 (en) * 2018-11-02 2020-05-07 Fujitsu Limited Fall detection method and apparatus
CN111134685A (en) * 2018-11-02 2020-05-12 富士通株式会社 Fall detection method and device
CN109829509A (en) * 2019-02-26 2019-05-31 重庆邮电大学 Radar gesture identification method based on fused neural network
CN110109153A (en) * 2019-05-10 2019-08-09 北京百度网讯科技有限公司 Navigation processing method, navigation terminal, equipment and storage medium
CN110286368A (en) * 2019-07-10 2019-09-27 北京理工大学 A kind of Falls Among Old People detection method based on ULTRA-WIDEBAND RADAR
CN111429368A (en) * 2020-03-16 2020-07-17 重庆邮电大学 Multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination
CN112198507A (en) * 2020-09-25 2021-01-08 森思泰克河北科技有限公司 Method and device for detecting human body falling features
CN112312087A (en) * 2020-10-22 2021-02-02 中科曙光南京研究院有限公司 Method and system for quickly positioning event occurrence time in long-term monitoring video
CN113311428A (en) * 2021-05-25 2021-08-27 山西大学 Intelligent human body falling monitoring system based on millimeter wave radar and identification method
CN113313040A (en) * 2021-06-04 2021-08-27 福州大学 Human body posture identification method based on FMCW radar signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
田增山; 杨立坤; 付长友; 余箭飞: "Human behavior recognition method based on multi-antenna FMCW radar" (基于多天线FMCW雷达的人体行为识别方法), Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), vol. 32, no. 05, 15 October 2020 (2020-10-15), pages 779-787 *

Similar Documents

Publication Publication Date Title
CN109919251B (en) Image-based target detection method, model training method and device
CN107784282B (en) Object attribute identification method, device and system
CN107702706B (en) Path determining method and device, storage medium and mobile terminal
CN108304758B (en) Face characteristic point tracking method and device
CN111399642B (en) Gesture recognition method and device, mobile terminal and storage medium
EP3651055A1 (en) Gesture recognition method, apparatus, and device
CN110113116B (en) Human behavior identification method based on WIFI channel information
CN111260665A (en) Image segmentation model training method and device
CN110956060A (en) Motion recognition method, driving motion analysis method, device and electronic equipment
CN106874906B (en) Image binarization method and device and terminal
CN103514432A (en) Method, device and computer program product for extracting facial features
CN111178331B (en) Radar image recognition system, method, apparatus, and computer-readable storage medium
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN108198159A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN109346061A (en) Audio-frequency detection, device and storage medium
CN111222493B (en) Video processing method and device
CN111505632A (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
CN112560723B (en) Fall detection method and system based on morphological recognition and speed estimation
CN103105924B (en) Man-machine interaction method and device
CN106503651A (en) A kind of extracting method of images of gestures and system
CN110765924A (en) Living body detection method and device and computer-readable storage medium
CN104915944A (en) Method and device for determining black margin position information of video
CN115422962A (en) Gesture and gesture recognition method and device based on millimeter wave radar and deep learning algorithm
CN108765463A (en) A kind of moving target detecting method calmodulin binding domain CaM extraction and improve textural characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination