CN114663919A - Human body posture detection method and device, electronic equipment and storage medium - Google Patents

Human body posture detection method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN114663919A
Authority
CN
China
Prior art keywords
human body
posture
image
detection method
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210346329.8A
Other languages
Chinese (zh)
Inventor
杨辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guanggan Technology Shenzhen Co ltd
Original Assignee
Guanggan Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guanggan Technology Shenzhen Co ltd filed Critical Guanggan Technology Shenzhen Co ltd
Priority to CN202210346329.8A
Publication of CN114663919A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An embodiment of the present application discloses a human body posture detection method and apparatus, an electronic device, and a storage medium. The scheme can acquire a human body image of a sensing area, input the human body image into a preset network model to output human body feature information, judge from that feature information whether the current posture of the human body is a falling posture, and, if so, send reminding information according to a preset contact manner. By judging in real time whether the human body has fallen from the computed feature information, the scheme improves the accuracy of the analysis result and issues a prompt quickly.

Description

Human body posture detection method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a human body posture detection method, an apparatus, an electronic device, and a storage medium.
Background
As the aging of China's population grows more serious, elderly people, whose physical functions have declined, are increasingly likely to fall through a moment of carelessness in daily life. Falls of the elderly often cause serious consequences such as fractures and cerebral hemorrhage, and discovering and treating a fallen elderly person in time can reduce the harm to a minimum.
Conventional fall detection methods include the following: wearable detection devices, video-surveillance-based fall detection, sound-signal-based detection, and the like. The applicant found in practical use that wearable detection devices are very inconvenient for the elderly, and the weight of such a device may itself increase the risk of falling; methods that analyze posture from camera video or from sound also suffer from insufficiently accurate analysis results.
Disclosure of Invention
Embodiments of the present application provide a human body posture detection method and apparatus, an electronic device, and a storage medium, which judge in real time whether the human body posture is a falling posture by computing human body feature information, thereby improving the accuracy of the analysis result and issuing a prompt quickly.
The embodiment of the application provides a human body posture detection method, which comprises the following steps:
acquiring a human body image of a sensing area;
inputting the human body image into a preset network model to output human body characteristic information;
judging whether the current posture of the human body is a falling posture or not according to the human body characteristic information;
if yes, sending reminding information according to a preset contact mode.
In one embodiment, the step of acquiring the human body image of the sensing region includes:
judging whether an active person exists in the sensing area;
and if so, generating a human body image of the sensing area according to the transmitted first infrared light and the received second infrared light.
In an embodiment, the step of inputting the human body image into a preset network model to output human body feature information includes:
extracting initial image features of the human body image through a ResNet convolutional network;
determining a candidate region in the human body image through a region proposal network (RPN);
mapping the candidate region into the initial image features and pooling it into a region feature map;
and judging the category of the region feature map through the fully connected layer, and generating the human body feature information according to the judgment result.
In an embodiment, the step of judging whether the current posture of the human body is a falling posture according to the human body feature information includes:
locating the joint point positions of the human body according to the human body feature information;
generating a joint heatmap of the human body from the joint point positions;
and matching the joint heatmap against a preset falling-posture template; if the matching succeeds, determining that the current posture of the human body is a falling posture.
In an embodiment, after the matching is successful, the method further comprises:
acquiring current vital sign data of the human body;
and judging whether the vital sign data is higher than a preset value; if so, determining that the current posture of the human body is a falling posture.
In an embodiment, after the matching is successful, the method further comprises:
acquiring the current acceleration data of the human body;
and judging whether the acceleration data is higher than a preset acceleration, if so, determining that the current posture of the human body is a falling posture.
In an embodiment, the step of sending the reminding information according to the preset contact manner includes:
extracting face data in the human body image and carrying out identity recognition;
and generating reminding information according to the identity recognition result, and sending the reminding information according to a preset contact way associated with the identity recognition result.
An embodiment of the present application further provides a human body posture detection device comprising modules configured to perform the steps of the human body posture detection method described above.
The electronic device according to an embodiment of the present application further includes a memory and a processor, where the memory stores a computer program, and the processor executes the steps in the human body posture detection method according to any one of the embodiments of the present application by calling the computer program stored in the memory.
An embodiment of the present application further provides a storage medium storing a computer program, the computer program being adapted to be loaded by a processor to execute the steps of the human body posture detection method provided by the embodiments of the present application.
The human body posture detection method provided by the embodiments of the present application can acquire a human body image of a sensing area, input the human body image into a preset network model to output human body feature information, judge from that feature information whether the current posture of the human body is a falling posture, and, if so, send reminding information according to a preset contact manner. By judging in real time whether the human body has fallen from the computed feature information, the method improves the accuracy of the analysis result and issues a prompt quickly.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of a human body posture detection method provided by an embodiment of the present application;
fig. 2 is a schematic view of a first scenario of a human body posture detection method provided in an embodiment of the present application;
fig. 3 is a schematic view of a second scenario of a human body posture detection method provided in an embodiment of the present application;
fig. 4 is a second flowchart of a human body posture detection method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a human body posture detecting apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a second structure of a human body posture detection apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in these exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, the recitation of an element by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named elements or features appear in different embodiments of the disclosure, they may have the same or different meanings; the particular meaning is determined by its interpretation in, or the context of, the specific embodiment.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in the figures may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different times and in different orders, alternately or in alternation with other steps or with sub-steps or stages of other steps.
It should be noted that, step numbers such as 101, 102, etc. are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and a person skilled in the art may perform 102 and then 101, etc. in the specific implementation, but these should be within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The execution subject of the human body posture detection method may be the human body posture detection device provided in the embodiments of the present application, or a server integrating that device; the device may be implemented in hardware or in software.
As shown in fig. 1, fig. 1 is a first flowchart of a human body posture detection method provided in an embodiment of the present application, and a specific flowchart of the human body posture detection method may be as follows:
101. Acquire a human body image of the sensing area.
In this embodiment of the application, the sensing area may be the area monitored by the human body posture detection device; specifically, at least one camera may be disposed on the device to capture images of the corresponding area in real time. It should be noted that the human body image may comprise a plurality of images; for example, it may be obtained by the same camera shooting continuously several times, or by several cameras shooting simultaneously.
In an embodiment, the human body posture detection device may be integrated in a lamp, for example an indoor lighting lamp, which can be conveniently installed on a ceiling and powered directly from the 220 V mains supply; the power supply circuit converts the mains current into the direct current used inside the product. Further, the device may include an infrared light sensor and a lens: the sensor emits infrared light, which is reflected when it strikes an object such as a person and is received through the lens, thereby producing an infrared human body image. Specifically, the infrared light may have a wavelength of 850 nm to 940 nm and an emission angle of 120 degrees or more. As shown in fig. 2, when a person is present in the sensing area, the infrared light reflected by the person is received through the lens and continuously converted photoelectrically, that is, the optical signal is converted into an electrical signal, forming a 2-megapixel infrared image at a frame rate of up to 30 fps. The sensor may have 2 megapixels, with a high-definition lens of matching 2-megapixel resolution arranged above it; the field angle of the lens may be 120 degrees, and the lens may have a built-in infrared filter that passes only infrared light with a wavelength of 800 nm to 950 nm. At a distance of 2.5 meters, the coverage can exceed 30 square meters.
In an embodiment, when the human body posture detection device is integrated in the lamp, human body detection may first be performed on the sensing area; if a person is detected, the lamp can be turned on and the human body image then acquired. If no person is detected within a preset time after the person leaves the sensing area, the lamp can be turned off and acquisition of human body images stopped. That is, the step of acquiring the human body image of the sensing area may include: judging whether an active person exists in the sensing area, and if so, generating a human body image of the sensing area from the emitted first infrared light and the received second infrared light.
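For illustration only, the "active person" check above can be sketched as simple frame differencing between two consecutive infrared frames. The patent does not specify the detection algorithm, so this sketch, including its thresholds, is an assumption:

```python
import numpy as np

def person_active(prev_frame: np.ndarray, cur_frame: np.ndarray,
                  diff_threshold: int = 25, min_changed_pixels: int = 500) -> bool:
    """Return True when enough pixels changed between two infrared frames.

    A hypothetical frame-differencing check; both thresholds are
    illustrative assumptions, not values from the patent.
    """
    # Work in a signed type so the subtraction cannot wrap around.
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > diff_threshold)
    return changed >= min_changed_pixels
```

In practice such a check would gate the lamp and the image pipeline: only when it returns True would frames be forwarded to the network model.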
When there are a plurality of persons in the sensing area, corresponding images of the persons may be acquired for the plurality of persons, respectively.
102. Input the human body image into a preset network model to output human body feature information.
In an embodiment, the predetermined network model may be a pre-trained Faster R-CNN model, and the human body feature information may be obtained by inputting a human body image into the Faster R-CNN model, where the human body feature information may include joint positions of a human body.
In one embodiment, the training process of the Faster R-CNN model may include:
1. Load the parameters of the pre-trained deep convolutional network VGG-Net into the region proposal network (RPN) of the Faster R-CNN network to initialize the RPN parameters.
2. Input the training-set image data into the RPN and train it to obtain a trained RPN.
3. Load the parameters of the pre-trained VGG-Net into the Fast R-CNN detection head of the Faster R-CNN network to initialize its parameters.
4. Input the training-set image data into the trained RPN to generate rectangular detection boxes, and use these boxes to train the VGG-Net backbone and the Fast R-CNN head, obtaining trained versions of both.
5. Keep the parameters of the trained VGG-Net unchanged and input the training-set image data into it to obtain image features.
6. Keep the parameters of the trained RPN unchanged, input the image features into it to obtain rectangular detection boxes, and train the Fast R-CNN head with these boxes to obtain a trained Fast R-CNN head.
7. Keep the parameters of the trained Fast R-CNN head unchanged, input the image features into it to obtain rectangular detection boxes, and train the RPN with these boxes to obtain a trained RPN.
8. Judge whether the global loss of the Faster R-CNN network is below a threshold; if so, execute step 9, otherwise execute step 6.
9. Finish training, obtaining a trained Faster R-CNN model.
103. Judge, according to the human body feature information, whether the current posture of the human body is a falling posture.
In an embodiment, the current posture of the human body can be determined from the human body feature information and then compared with a falling-posture template in a database to decide whether it is a falling posture. For example, the similarity between the current posture and the falling-posture template may be computed; when the similarity exceeds a preset value, for example 80%, the current posture is determined to be a falling posture.
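The template comparison above can be sketched with a translation-invariant cosine similarity between joint-coordinate arrays. The similarity measure is an illustrative assumption; the patent fixes only the idea of a threshold (80% in the example):

```python
import numpy as np

def pose_similarity(pose, template):
    """Cosine similarity between two joint-coordinate arrays of shape (N, 2).

    Each array is centered first, so translation is removed; uniform
    scale cancels in the cosine. The measure itself is a hypothetical
    choice, not prescribed by the patent.
    """
    a = np.asarray(pose, dtype=float)
    a = (a - a.mean(axis=0)).ravel()
    b = np.asarray(template, dtype=float)
    b = (b - b.mean(axis=0)).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def is_fall_pose(pose, template, threshold=0.8):
    """Declare a fall when similarity to the fall template exceeds the preset threshold."""
    return pose_similarity(pose, template) >= threshold
```

A scaled and shifted copy of the template scores 1.0, while a differently configured pose falls below the threshold.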
In other embodiments, factors such as the gender and/or body type of the person may be taken into account when comparing the current posture with the falling-posture template. For example, the body type and/or gender of the current person may be determined from the human body image, the corresponding database selected accordingly, and the current posture compared with the falling-posture templates of that database. This avoids interference from gender and body-type differences and improves the accuracy of the posture judgment.
In one embodiment, the human body feature information may include joint point positions; specifically, when the current posture of the human body is determined from the feature information, it may be mapped out from the joint point positions. There may be a plurality of joint points; for example, the 17 joint points of the human body may include the nose, the left and right eyes, the left and right ears, the left and right shoulders, the left and right elbows, the left and right wrists, the left and right hips, the left and right knees, and the left and right ankles. The approximate outline of the person in the image can be determined from the positions of these 17 joint points, so that the current posture can be drawn.
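Deriving the approximate outline from the 17 joint points can be sketched as a bounding box over the joint coordinates. The width-versus-height heuristic at the end is a hypothetical fall cue for illustration, not the patent's judgment method:

```python
# The 17 COCO-style body joints listed above.
COCO_JOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def body_bounding_box(joints):
    """Approximate outline (bounding box) of the person.

    `joints` maps joint names to (x, y) image coordinates; missing
    joints may simply be omitted from the dict.
    """
    xs = [x for x, _ in joints.values()]
    ys = [y for _, y in joints.values()]
    return min(xs), min(ys), max(xs), max(ys)

def looks_horizontal(joints):
    """Hypothetical fall cue: the outline is wider than it is tall."""
    x0, y0, x1, y1 = body_bounding_box(joints)
    return (x1 - x0) > (y1 - y0)
```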
104. If so, send reminding information according to a preset contact manner.
In an embodiment, if the current posture of the user in the sensing area is detected to be a falling posture, reminding information can be generated and sent to a preset contact, such as a relative of the user; at the same time a rescue center can be contacted directly to provide rescue.
For example, when it is detected that the user has fallen, the human body posture detection device can be controlled to raise an alarm automatically; at the same time the human body images can be stored, and reminding information generated and sent so that others can learn the specific situation and provide timely rescue. The reminding information may be sent through a wireless network, such as a Wi-Fi network or a 3G/4G network, to a client associated with the preset contact manner; as shown in fig. 3, the client may be a mobile phone, a personal computer, a tablet computer, or the like. The falling part, injury degree, collision strength, and the like of the fallen user can be added to the reminding information so that the contact gains a more complete picture.
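Assembling such a reminding message can be sketched as building a structured payload. The field names and the JSON transport format are illustrative assumptions; the patent only lists the kinds of information (falling part, injury degree, collision strength) the message may carry:

```python
import json
import time

def build_fall_alert(user_name, fall_part, injury_degree, impact_strength):
    """Assemble the reminding information as a JSON payload (hypothetical schema)."""
    return json.dumps({
        "type": "fall_alert",
        "user": user_name,
        "timestamp": int(time.time()),
        "fall_part": fall_part,          # e.g. "hip"
        "injury_degree": injury_degree,  # e.g. "moderate"
        "impact_strength": impact_strength,
    })
```

The resulting string would then be pushed over Wi-Fi or 3G/4G to the client associated with the preset contact.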
From the above, the human body posture detection method provided by the embodiment of the present application can acquire a human body image of the sensing area, input the human body image into a preset network model to output human body feature information, judge from that feature information whether the current posture of the human body is a falling posture, and, if so, send reminding information according to a preset contact manner. Judging in real time whether the body has fallen from the computed feature information improves the accuracy of the analysis result and allows a prompt to be issued quickly.
The method described in the previous examples is further detailed below.
Referring to fig. 4, fig. 4 is a second flowchart illustrating a human body posture detection method according to an embodiment of the present disclosure. The method comprises the following steps:
201. Acquire a human body image of the sensing area.
In this embodiment of the application, the sensing area may be the area monitored by the human body posture detection device; specifically, at least one camera may be disposed on the device to capture images of the corresponding area in real time. It should be noted that the human body image may comprise a plurality of images; for example, it may be obtained by the same camera shooting continuously several times, or by several cameras shooting simultaneously.
202. Input the human body image into a preset network model to output human body feature information.
In an embodiment, the preset network model may be a pre-trained Faster R-CNN model, and the human body feature information may be obtained by inputting a human body image into it. Faster R-CNN integrates feature extraction, proposal generation, bounding-box regression, and classification into one network, greatly improving target detection speed. Specifically, in this embodiment the Faster R-CNN deep neural network may be used for human body detection; the algorithm comprises four modules. First, an 18-layer ResNet convolutional network serves as the backbone to extract image features. A Region Proposal Network (RPN) then generates candidate regions: a set of anchors is obtained from a group of fixed sizes and aspect ratios, a softmax classifier judges whether each anchor belongs to the foreground or the background, and bounding-box regression corrects the anchors to obtain accurate candidate regions. A subsequent RoI pooling layer collects the input feature maps and candidate regions, maps the candidate regions into the feature maps, pools them into region feature maps of uniform size, and feeds these into the fully connected layers to judge the target category. Finally, a detection layer computes the category of each candidate region from its region feature map and applies bounding-box regression once more to obtain the final accurate position of the detection box.
That is, the step of inputting the human body image into the preset network model to output the human body feature information includes: extracting initial image features of the human body image through a ResNet convolutional network; determining candidate regions in the human body image through the RPN; mapping the candidate regions into the initial image features and pooling them into region feature maps; and judging the category of each region feature map through the fully connected layer and generating the human body feature information from the judgment result.
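The RoI pooling step described above can be sketched in isolation: one candidate region of a (C, H, W) feature map is divided into a grid of bins and max-pooled into a fixed-size region feature map. This is a simplified illustration with integer bin edges and a single region; a real Faster R-CNN implementation would use an existing library layer:

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(7, 7)):
    """Max-pool one candidate region into a fixed-size region feature map.

    feature_map: array of shape (C, H, W).
    roi: (x0, y0, x1, y1) in feature-map coordinates.
    out_size: (out_h, out_w) of the pooled region feature map.
    """
    c, _, _ = feature_map.shape
    x0, y0, x1, y1 = roi
    out_h, out_w = out_size
    # Integer bin edges over the region (simplification of real RoI pooling).
    ys = np.linspace(y0, y1, out_h + 1).astype(int)
    xs = np.linspace(x0, x1, out_w + 1).astype(int)
    out = np.zeros((c, out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Guarantee each bin covers at least one cell.
            y_a, y_b = ys[i], max(ys[i + 1], ys[i] + 1)
            x_a, x_b = xs[j], max(xs[j + 1], xs[j] + 1)
            out[:, i, j] = feature_map[:, y_a:y_b, x_a:x_b].max(axis=(1, 2))
    return out
```

Every candidate region, whatever its size, is thereby reduced to the same shape before the fully connected classification layers.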
In an embodiment, the human body image may be preprocessed before being input into the preset network model. The preprocessing may include image cropping, image denoising, contrast enhancement, and the like. Cropping removes useless regions of the captured image, avoiding interference and reducing computation; denoising improves the accuracy of subsequent detection; and contrast enhancement deepens the outline of the human body, facilitating the subsequent extraction of human body feature information.
In this embodiment, contrast can be enhanced by combining the morphological top-hat transform with gray-level stretching. The top-hat transform subtracts the opened image from the original image, extracting the brighter regions (the opening compensates for uneven background brightness); the bottom-hat transform subtracts the original image from the closed image, extracting the darker regions. Adding the top-hat result to the original image and subtracting the bottom-hat result makes bright areas brighter and dark areas darker. In another embodiment, image contrast can be enhanced by a non-uniform stretching algorithm based on the gray histogram: the gray axis of the histogram is first interpolated according to the gray distribution, with more interpolation points where the distribution is dense and fewer where it is sparse, and the interpolated gray axis is then equalized over those points, achieving non-uniform stretching of the histogram.
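The top-hat/bottom-hat enhancement above can be sketched with plain grayscale morphology (result = image + top-hat − bottom-hat). The structuring-element size is an assumption; a production version would use an image library's morphology routines instead of these explicit loops:

```python
import numpy as np

def _grey_filter(img, size, op):
    """Sliding-window grayscale morphology: op is np.min (erosion) or np.max (dilation)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + size, j:j + size])
    return out

def enhance_contrast(img, size=3):
    """result = img + tophat(img) - bottomhat(img), clipped to [0, 255].

    tophat = img - opening(img); bottomhat = closing(img) - img.
    """
    img = img.astype(np.int16)
    opened = _grey_filter(_grey_filter(img, size, np.min), size, np.max)
    closed = _grey_filter(_grey_filter(img, size, np.max), size, np.min)
    tophat = img - opened
    bottomhat = closed - img
    return np.clip(img + tophat - bottomhat, 0, 255).astype(np.uint8)
```

An isolated bright pixel on a dark background becomes brighter, while flat regions are left unchanged, which matches the "bright areas brighter, dark areas darker" behavior described above.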
203. Locate the joint point positions of the human body according to the human body feature information.
204. Generate a joint heatmap of the human body from the joint point positions.
In an embodiment, the human body feature information may include the position information of each part of the body; the joint point positions are determined from this position information, and finally the joint heatmap of the human body is drawn from the joint point positions. Specifically, the joint points may comprise 17 points: the nose, the left and right eyes, the left and right ears, the left and right shoulders, the left and right elbows, the left and right wrists, the left and right hips, the left and right knees, and the left and right ankles.
In an embodiment, joint point detection can be performed on the human body image by a joint point detection model to obtain the joint heatmap corresponding to the image. The joint point detection model may comprise a feature extraction network and a feature fusion network: the feature extraction network extracts multi-scale features from the human body image top-down, the feature fusion network fuses those multi-scale features bottom-up, and the joint heatmap is then generated from the fused features. In one embodiment, the feature extraction network may include dense blocks with Orthogonal Attention Blocks (OABs) between them for extracting multi-scale features; for example, OAB module 1 extracts multi-scale features from a feature map of spatial size l1 × w1, and OAB module 2 from a feature map of spatial size l2 × w2. In addition, the feature fusion network includes second-order fusion modules in multiple stages, each of which performs weighted fusion of the multi-scale features with the aggregated features output by the previous stage; for example, the second-stage module fuses the multi-scale features with the aggregated features output by the first-stage module to produce the aggregated features of the second stage.
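The joint heatmap itself is conventionally encoded as one Gaussian channel per joint, peaked at the joint position. The Gaussian encoding and the sigma value are standard-practice assumptions; the patent does not fix the encoding:

```python
import numpy as np

def joint_heatmap(h, w, cx, cy, sigma=2.0):
    """One heatmap channel: a 2-D Gaussian of width sigma peaked at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def heatmaps_for_pose(h, w, joints, sigma=2.0):
    """Stack one channel per (x, y) joint into a (num_joints, h, w) array."""
    return np.stack([joint_heatmap(h, w, x, y, sigma) for x, y in joints])
```

With 17 joints this yields a (17, h, w) array whose per-channel argmax recovers each joint position.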
205. Match the joint heatmap against a preset falling posture template to judge whether the match is successful, and if so, execute step 206.
In one embodiment, whether the current posture is a falling posture can be determined by comparing the above joint heatmap with the joint heatmaps corresponding to the falling posture templates in a database. For example, the similarity between the current joint heatmap and the joint heatmap corresponding to a falling posture template may be computed, and when the similarity exceeds a preset similarity threshold, the current posture may be determined to be a falling posture.
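The template comparison can be sketched as below (cosine similarity and the 0.85 threshold are illustrative choices; the embodiment only requires some similarity measure and a preset threshold):

```python
import numpy as np

def heatmap_similarity(hm_a, hm_b):
    """Cosine similarity between two flattened joint heatmaps; for the
    non-negative heatmap values assumed here, the result lies in [0, 1]."""
    a, b = hm_a.ravel(), hm_b.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def is_falling_pose(current_hm, template_hm, threshold=0.85):
    """Match succeeds when similarity to the template exceeds the preset."""
    return heatmap_similarity(current_hm, template_hm) >= threshold
```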
In an embodiment, in order to further improve the accuracy of posture detection, vital sign data of the human body, such as heart rate, blood pressure, body temperature, and pulse, may be added for comprehensive judgment. The vital sign data may be obtained from a wearable device worn by the user and associated with the system. When the match with the preset falling posture template is successful and the vital sign data is also higher than its normal value, it may be determined that the current posture of the human body is a falling posture. That is, after the match is successful, the method further includes: acquiring the current vital sign data of the human body, judging whether the vital sign data is higher than a preset value, and if so, determining that the current posture of the human body is a falling posture.
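The combined check can be sketched as below (treating "higher than a normal value" as any monitored sign exceeding its preset threshold is an interpretation on our part; the sign names and values are illustrative):

```python
def confirm_fall(template_matched, vitals, thresholds):
    """Declare a fall only when the pose template matched AND at least
    one monitored vital sign exceeds its preset normal value."""
    if not template_matched:
        return False
    return any(vitals[k] > thresholds[k] for k in thresholds if k in vitals)
```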
In an embodiment, considering that the acceleration of the human body is large at the moment of falling, acceleration data may be added for comprehensive judgment. Since the sensing area can be shot continuously in the embodiment of the present application, when the human body falls, its instantaneous acceleration can be calculated from the distances between the human body and the ground in a plurality of consecutive human body images. When the current posture of the human body is successfully matched with the preset falling posture template and the acceleration data is higher than a preset value, the current posture of the human body can be determined to be a falling posture. That is, after the match is successful, the method further includes: acquiring the current acceleration data of the human body, judging whether the acceleration data is higher than a preset acceleration, and if so, determining that the current posture of the human body is a falling posture.
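The per-frame acceleration estimate can be sketched with a central second difference over the measured body-to-ground distances (the frame interval `dt` and the free-fall example are illustrative assumptions):

```python
def instantaneous_acceleration(heights, dt):
    """Estimate acceleration (m/s^2) from consecutive per-frame
    measurements of the body's height above the ground, using the
    central second difference:
        a[i] ~ (h[i+1] - 2*h[i] + h[i-1]) / dt**2
    """
    return [(heights[i + 1] - 2 * heights[i] + heights[i - 1]) / dt ** 2
            for i in range(1, len(heights) - 1)]

# Free fall from 2 m sampled at 10 frames per second: the estimate
# recovers roughly -9.8 m/s^2 at every interior frame.
dt = 0.1
heights = [2.0 - 0.5 * 9.8 * (k * dt) ** 2 for k in range(5)]
accs = instantaneous_acceleration(heights, dt)
```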
206. Extract the face data from the human body image and perform identity recognition.
207. Generate reminding information according to the identity recognition result, and send the reminding information according to the preset contact mode associated with the identity recognition result.
Considering that different reminding modes may be appropriate for different people, in an embodiment, before sending the reminding information, the method may further include: obtaining the face feature information in the human body image, performing identity recognition according to the face feature information, setting a corresponding reminding mode according to the identity recognition result, and finally issuing the reminder in that mode. For example, different emergency contacts may be set for different people. The above identity recognition may be performed by face recognition, iris recognition, voice recognition, and the like, which is not further limited in the present application.
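Per-person reminding can be sketched as a lookup from the recognized identity to a configured contact (all names and channels below are hypothetical; the embodiment only requires that different contacts can be configured per person):

```python
# Hypothetical contact book; entries are illustrative only.
CONTACTS = {
    "grandpa": {"contact": "son_phone", "channel": "sms"},
    "grandma": {"contact": "daughter_phone", "channel": "app_push"},
}
DEFAULT = {"contact": "community_desk", "channel": "sms"}

def reminder_for(identity):
    """Pick the reminding target for a recognized identity, falling back
    to a default contact when the face was not recognized."""
    return CONTACTS.get(identity, DEFAULT)
```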
In an embodiment, the human body image can be sent to the preset contact when the reminder is issued. For example, a plurality of human body images may be stored and sent to the client of the associated user; further, the fallen body part, the severity of the fall, the impact strength, and the like may be added to the reminding information. For example, when a human body falls, the body part corresponding to the fall region, such as the legs, arms, or back, is determined from the plurality of human body images, and the severity of the fall and the impact strength are then calculated from the acceleration of the human body during the fall.
From the above, the human body posture detection method provided by the embodiment of the present application can acquire a human body image of the sensing area, input the human body image into a preset network model to output human body characteristic information, locate the joint point positions of the human body according to the human body characteristic information, generate the joint heatmap of the human body according to the joint point positions, and match the joint heatmap against a preset falling posture template to judge whether the match is successful. If so, the method extracts the face data from the human body image, performs identity recognition, generates reminding information according to the identity recognition result, and sends the reminding information according to the preset contact mode associated with the identity recognition result. By judging whether the posture of the human body is a fall from human body characteristic information calculated in real time, the present application improves the accuracy of the analysis result and issues reminders quickly.
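The end-to-end flow summarized above can be sketched as a skeleton in which each stage is an injected callable (the stage signatures are assumptions; any models satisfying them would fit):

```python
def detect_and_remind(image, extract_features, locate_joints, draw_heatmap,
                      match_template, identify, send_reminder):
    """Skeleton of the detection flow: feature extraction, joint
    location, heatmap generation, template matching, then
    identification and reminding only when the match succeeds."""
    features = extract_features(image)
    joints = locate_joints(features)
    heatmap = draw_heatmap(joints)
    if not match_template(heatmap):
        return None                      # not a falling posture
    identity = identify(image)
    return send_reminder(identity)
```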
In order to implement the above method, an embodiment of the present application further provides a human body posture detection apparatus, which may be specifically integrated in a terminal device, such as a mobile phone, a tablet computer, and the like.
For example, as shown in fig. 5, it is a first structural schematic diagram of a human body posture detecting device provided in the embodiment of the present application. The human body posture detecting apparatus may include:
an obtaining module 301, configured to obtain a human body image of the sensing area;
an input module 302, configured to input the human body image into a preset network model to output human body characteristic information;
the judging module 303 is configured to judge whether the current posture of the human body is a falling posture according to the human body feature information;
and the reminding module 304 is configured to send reminding information according to a preset contact mode when the judging module judges yes.
In an embodiment, please further refer to fig. 6, wherein the obtaining module 301 may include:
a judging submodule 3011, configured to judge whether an active person exists in the sensing region;
the generating sub-module 3012 is configured to, when the judging sub-module 3011 judges yes, generate the human body image of the sensing area according to the emitted first infrared light and the received second infrared light.
In an embodiment, the input module 302 is specifically configured to extract initial image features of the human body image through a ResNet convolutional network, determine candidate regions in the human body image through an RPN network, map each candidate region into the initial image features and pool it into a region feature map, judge the category of the region feature map through a fully connected layer, and generate the human body characteristic information according to the judgment result.
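The "map the candidate region into the features and pool it into a region feature map" step can be sketched as a bare-bones RoI max-pooling (NumPy, integer box coordinates, and a region at least `out_size` pixels on each side are simplifying assumptions of this sketch):

```python
import numpy as np

def roi_pool(feature_map, roi, out_size=2):
    """Max-pool a candidate region of a (C, H, W) feature map into a
    fixed (C, out_size, out_size) region feature map, as in the RoI
    pooling step of a Faster R-CNN-style detector."""
    c, _, _ = feature_map.shape
    x0, y0, x1, y1 = roi                      # integer box corners
    region = feature_map[:, y0:y1, x0:x1]
    out = np.zeros((c, out_size, out_size))
    h_edges = np.linspace(0, region.shape[1], out_size + 1).astype(int)
    w_edges = np.linspace(0, region.shape[2], out_size + 1).astype(int)
    for i in range(out_size):
        for j in range(out_size):
            out[:, i, j] = region[:, h_edges[i]:h_edges[i + 1],
                                  w_edges[j]:w_edges[j + 1]].max(axis=(1, 2))
    return out
```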
In an embodiment, the determining module 303 may include:
a positioning sub-module 3031, configured to locate the joint points of the human body according to the human body characteristic information, and generate the joint heatmap of the human body according to the joint points;
the matching sub-module 3032 is configured to match the joint heatmap against a preset falling posture template, and if the match is successful, determine that the current posture of the human body is a falling posture.
In an embodiment, the reminding module 304 is specifically configured to extract the face data in the human body image, perform identity recognition, generate reminding information according to an identity recognition result, and send the reminding information according to a preset contact manner associated with the identity recognition result.
Therefore, the human body posture detection device provided by the embodiment of the present application can acquire a human body image of the sensing area, input the human body image into a preset network model to output human body characteristic information, judge whether the current posture of the human body is a falling posture according to the human body characteristic information, and if so, send reminding information according to a preset contact mode. By judging whether the posture of the human body is a fall from human body characteristic information calculated in real time, the present application improves the accuracy of the analysis result and issues reminders quickly.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
Correspondingly, the embodiment of the present application further provides an electronic device, where the electronic device may be a terminal or a server, and the terminal may be a terminal device such as a smart phone, a tablet Computer, a notebook Computer, a touch screen, a game machine, a Personal Computer (PC), a Personal Digital Assistant (PDA), and the like. As shown in fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the electronic device configurations shown in the figures do not constitute limitations of the electronic device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The processor 401 is a control center of the electronic device 400, connects various parts of the whole electronic device 400 by using various interfaces and lines, performs various functions of the electronic device 400 and processes data by running or loading software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device 400.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, so as to implement various functions:
acquiring a human body image of the sensing area;
inputting the human body image into a preset network model to output human body characteristic information;
judging whether the current posture of the human body is a falling posture or not according to the human body characteristic information;
if yes, sending reminding information according to a preset contact mode.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Optionally, as shown in fig. 7, the electronic device 400 further includes: touch-sensitive display screen 403, radio frequency circuit 404, audio circuit 405, input unit 406 and power 407. The processor 401 is electrically connected to the touch display screen 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power source 407. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The touch display screen 403 may be used for displaying a graphical user interface and receiving operation instructions generated by a user acting on the graphical user interface. The touch display screen 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user and the various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The touch panel may be used to collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel using a finger, a stylus, or any other suitable object or accessory) and to generate corresponding operation instructions, which trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 401, and it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it transmits the operation to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel according to the type of the touch event.
In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display screen 403 to realize the input and output functions. However, in some embodiments, the touch panel and the display panel can be implemented as two separate components to perform the input and output functions. That is, the touch display screen 403 may also serve as a part of the input unit 406 to implement the input function.
In the embodiment of the present application, an application program is executed by the processor 401 to generate a graphical user interface on the touch display screen 403. The touch display screen 403 is used for presenting a graphical user interface and receiving an operation instruction generated by a user acting on the graphical user interface.
The radio frequency circuit 404 may be used for transceiving radio frequency signals so as to establish wireless communication with a network device or other electronic devices, and for transceiving signals with the network device or the other electronic devices. In the embodiment of the present application, the electronic device can communicate with the clients of other users through the radio frequency circuit, so that when a person falls, a reminder can be issued in time through the electronic device, the human body images shot by the camera can be quickly checked through the electronic device, and the harm caused by the fall can be avoided or reduced.
The audio circuit 405 may be used to provide an audio interface between the user and the electronic device through a speaker and a microphone. On one hand, the audio circuit 405 may transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 405 and converted into audio data; the audio data is then processed by the processor 401 and sent, for example, via the radio frequency circuit 404 to another electronic device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earbud jack to provide communication between a peripheral headset and the electronic device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the electronic device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so as to implement functions of managing charging, discharging, power consumption management, and the like through the power management system. The power supply 407 may also include one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, or any other component.
Although not shown in fig. 7, the electronic device 400 may further include a camera, a sensor, a wireless fidelity module, a bluetooth module, etc., which are not described in detail herein.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
As can be seen from the above, the electronic device provided in this embodiment acquires a human body image of the sensing area, inputs the human body image into a preset network model to output human body characteristic information, judges whether the current posture of the human body is a falling posture according to the human body characteristic information, and if so, sends reminding information according to a preset contact mode. By judging whether the posture of the human body is a fall from human body characteristic information calculated in real time, the present application improves the accuracy of the analysis result and issues reminders quickly.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the human body posture detection methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
acquiring a human body image of the sensing area;
inputting the human body image into a preset network model to output human body characteristic information;
judging whether the current posture of the human body is a falling posture or not according to the human body characteristic information;
if yes, sending reminding information according to a preset contact mode.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any human body posture detection method provided in the embodiments of the present application, it can achieve the beneficial effects achievable by any such method; for details, refer to the foregoing embodiments, which are not repeated here.
The human body posture detection method, the human body posture detection device, the electronic device and the storage medium provided by the embodiments of the present application are introduced in detail, and a specific example is applied to illustrate the principle and the implementation of the present application, and the description of the embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A human body posture detection method is characterized by comprising the following steps:
acquiring a human body image of the sensing area;
inputting the human body image into a preset network model to output human body characteristic information;
judging whether the current posture of the human body is a falling posture or not according to the human body characteristic information;
if yes, sending reminding information according to a preset contact mode.
2. The human body posture detection method as claimed in claim 1, wherein the step of acquiring the human body image of the sensing area comprises:
judging whether an active human body exists in the sensing area;
and if so, generating a human body image of the sensing area according to the emitted first infrared light and the received second infrared light.
3. The human body posture detection method as claimed in claim 1, wherein the step of inputting the human body image into a preset network model to output human body characteristic information comprises:
extracting initial image features of the human body image through a ResNet convolution network;
determining a candidate region in the human body image through an RPN network;
mapping the candidate region into the initial image feature and pooling into a region feature map;
and judging the category of the region feature map through a fully connected layer, and generating the human body characteristic information according to the judgment result.
4. The human body posture detection method according to claim 1, wherein the step of judging whether the current posture of the human body is a falling posture according to the human body characteristic information comprises:
positioning the joint point positions of the human body according to the human body characteristic information;
generating a joint heatmap of the human body according to the joint point positions;
and matching the joint heatmap against a preset falling posture template, and if the matching is successful, determining that the current posture of the human body is a falling posture.
5. The human body pose detection method of claim 4, wherein after a successful match, the method further comprises:
acquiring the current vital sign data of the human body;
and judging whether the vital sign data is higher than a preset value, and if so, determining that the current posture of the human body is a falling posture.
6. The human body pose detection method of claim 4, wherein after a successful match, the method further comprises:
acquiring current acceleration data of the human body;
and judging whether the acceleration data is higher than a preset acceleration or not, and if so, determining that the current posture of the human body is a falling posture.
7. The human body posture detection method as claimed in claim 1, wherein the step of sending the reminding information according to the preset contact mode comprises:
extracting the face data from the human body image and performing identity recognition;
and generating reminding information according to the identity recognition result, and sending the reminding information according to the preset contact mode associated with the identity recognition result.
8. A human body posture detecting apparatus, characterized by comprising:
the acquisition module is used for acquiring a human body image of the induction area;
the input module is used for inputting the human body image into a preset network model so as to output human body characteristic information;
the judging module is used for judging whether the current posture of the human body is a falling posture or not according to the human body characteristic information;
and the reminding module is used for sending reminding information according to a preset contact mode when the judging module judges yes.
9. An electronic device, characterized in that the electronic device comprises a memory in which a computer program is stored and a processor, the processor performing the steps in the human body posture detection method according to any one of claims 1-7 by calling the computer program stored in the memory.
10. A storage medium, characterized in that the storage medium stores a computer program adapted to be loaded by a processor for performing the steps in the human body posture detection method as claimed in any one of claims 1-7.
CN202210346329.8A 2022-03-31 2022-03-31 Human body posture detection method and device, electronic equipment and storage medium Pending CN114663919A (en)


Publications (1)

Publication Number Publication Date
CN114663919A true CN114663919A (en) 2022-06-24

Family

ID=82033816



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination