CN116631045A - Human body liveness detection method, system and device based on action recognition and medium - Google Patents

Human body liveness detection method, system and device based on action recognition and medium Download PDF

Info

Publication number
CN116631045A
Authority
CN
China
Prior art keywords
user
average speed
seconds
activity
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210125184.9A
Other languages
Chinese (zh)
Inventor
曾晓嘉
刘易
薛立君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Fit Future Technology Co Ltd
Original Assignee
Chengdu Fit Future Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Fit Future Technology Co Ltd filed Critical Chengdu Fit Future Technology Co Ltd
Priority to CN202210125184.9A priority Critical patent/CN116631045A/en
Priority to PCT/CN2022/094627 priority patent/WO2023151200A1/en
Publication of CN116631045A publication Critical patent/CN116631045A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a human body liveness detection method, system, device and medium based on action recognition, and relates to the field of fitness. The method comprises: collecting user images in the area within a preset time of T seconds to obtain continuous frame action images of the user; acquiring the average speed of the user in T seconds according to the continuous frame action images; and acquiring the liveness of the user within T seconds according to the average speed of the user, and displaying the liveness. According to the method and device, the average speed of the user is obtained from the continuous frame action images of the user, and the liveness of the user during movement is obtained from that average speed, so that the obtained liveness is more accurate; in particular, for small-amplitude movements the average speed of the user can be obtained more accurately from the continuous frame action images, reducing errors.

Description

Human body liveness detection method, system and device based on action recognition and medium
Technical Field
The application relates to the field of fitness, and in particular to a human body liveness detection method, system, device and medium based on action recognition.
Background
With the improvement of living standards, people pay more and more attention to the pursuit of a healthy life. With the development of science and technology, fitness equipment is continuously developing towards home use and intelligence, and against this background the smart mirror fitness device has gradually begun to appear in people's lives. A smart fitness mirror has various devices built into its body and performs display and mirror-image display through a front mirror surface. Prior to this application, a great deal of research and many patent applications have addressed related schemes for fitness mirrors, including effective recognition of the user's actions, feedback on the user's actions, and the like. As this research has become more refined, the inventors found that the user's enthusiasm for exercise is very important in the process of bringing fitness equipment into the home: when home fitness equipment is used, the user's enthusiasm for working out needs to be mobilised so that the user can reach the exercise target. On this basis, the inventors found that feeding the amount of exercise, i.e. the user's exercise liveness, back to the user better achieves the effect of motivating the user to reach the exercise target. However, most existing liveness detection methods detect movement through wearable equipment; such detection only roughly estimates the user's liveness and is not suitable for real-time detection during a workout, and therefore cannot achieve the effect of motivating the user.
Disclosure of Invention
The application provides a human body liveness detection method, a human body liveness detection system, a human body liveness detection device and a human body liveness detection medium based on motion recognition.
In order to achieve the above object, the present application provides a human body liveness detection method based on motion recognition, including:
collecting user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
acquiring the average speed of a user in T seconds according to continuous frame motion images;
and acquiring the activity of the user within T seconds according to the average speed of the user, and displaying the activity.
In an existing liveness acquisition method, acceleration data of the human body are collected through a terminal, the action type is identified based on the collected acceleration data, and a human body liveness index is calculated based on the identified action type and the metabolic equivalent corresponding to that action type obtained by table look-up; such a method cannot accurately reflect the user's liveness for small-amplitude actions. The present application, by contrast, obtains the average speed of the user from each frame of action image and obtains the liveness according to the average speed of the user, so that the user's liveness can be reflected accurately even for small-amplitude actions.
There are various methods for obtaining the average speed of the user: the change in the coordinates of one or more skeleton points in the action images may be used, or a position that best represents the amplitude of the user's action may be determined according to the user's action. That position may be a single point in the action image, a line segment or an enclosed figure, as long as it accurately represents the user's motion state and the average speed of the user can ultimately be obtained.
On this basis, the inventors found that determining the user's skeleton points in the action images allows the average speed of the user to better reflect changes in the user's motion, especially for local motions such as hand or leg motions, for which a speed reflecting the user's motion can be obtained directly. The method specifically comprises the following steps:
acquiring a plurality of skeleton points of a user in each action image and information corresponding to each skeleton point;
acquiring the average speed of each bone point in T seconds according to the information corresponding to each bone point;
the average speed of the user is obtained from the average speed of each bone point.
Wherein the average speed of the jth bone point in T seconds is:
v_j = (1/T) · Σ_{i=2}^{n} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
where n is the number of action image frames included in T seconds, (x_i, y_i) are the coordinates of the skeleton point in the ith action image, and j and i are positive integers. In the application a plurality of skeleton points are used; the number of skeleton points may be 16, 12 or 14, determined according to the actual use conditions. After the average speed of each bone point in T seconds is determined according to the above formula, and since for a given action each bone point contributes a different proportion to the user's overall motion, the application obtains the final average speed of the user according to the preset weight of each bone point. Specifically, each bone point is preset with a weight w_j, j being a positive integer, and the average speed of the user is:
V = (1/m) · Σ_{j=1}^{m} w_j · v_j
wherein v_j is the average speed of the jth bone point in T seconds, w_j is the preset weight of the jth bone point, and m is the number of bone points of the user in the action image. The weight of each skeleton point can be determined according to the type of each action and can be obtained empirically.
In the present application, 16 skeletal points are preferably counted in the motion image of the user, including a head, neck, throat, crotch, left hand, right hand, left elbow, right elbow, left foot, right foot, left shoulder, right shoulder, left knee, right knee, left hip and right hip, wherein the weight of the head, neck, throat and crotch is 1, the weight of the left hand, right hand, left elbow and right elbow is 1.5, the weight of the left foot, right foot, left shoulder and right shoulder is 2.5, the weight of the left knee and right knee is 4, and the weight of the left hip and right hip is 4.5.
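The per-bone-point speed and the weighted user speed described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function names are invented for this sketch, the per-point speed is taken as the pixel path length between consecutive frames divided by T, and dividing the weighted sum by m follows the variable definitions given above.

```python
import numpy as np

# Illustrative weights for the 16 skeleton points listed above.
BONE_WEIGHTS = {
    "head": 1.0, "neck": 1.0, "throat": 1.0, "crotch": 1.0,
    "left_hand": 1.5, "right_hand": 1.5, "left_elbow": 1.5, "right_elbow": 1.5,
    "left_foot": 2.5, "right_foot": 2.5, "left_shoulder": 2.5, "right_shoulder": 2.5,
    "left_knee": 4.0, "right_knee": 4.0, "left_hip": 4.5, "right_hip": 4.5,
}

def bone_point_speed(coords, t_seconds):
    """Average speed (pixels/second) of one bone point over T seconds.

    coords: sequence of (x, y) pixel coordinates of the point in the n
    consecutive action images collected during T seconds.
    """
    coords = np.asarray(coords, dtype=float)
    # Pixel path length accumulated between consecutive frames, divided by T.
    step_lengths = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    return step_lengths.sum() / t_seconds

def user_average_speed(coords_by_bone, t_seconds, weights=BONE_WEIGHTS):
    """Weighted average speed of the user over T seconds (V in the text)."""
    m = len(coords_by_bone)  # number of bone points in the action image
    weighted_sum = sum(weights[name] * bone_point_speed(pts, t_seconds)
                       for name, pts in coords_by_bone.items())
    return weighted_sum / m
```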
After the average speed of the user is obtained from the average speed of each bone point by the above formula, the inventors further found that, in the collected user images, the same action yields a larger calculated average speed V when the user stands close to the camera than when the user stands far away, because, owing to perspective imaging, the pixel distance travelled by a bone point is larger when the user is nearer. Therefore, in order to keep the calculated speed unchanged regardless of the distance, the inventors obtain the relative average speed V′ of the user from the average speed of the user and a preset human body reference distance, and obtain the user's liveness within T seconds according to the relative average speed V′. Specifically, the preset human body reference distance is a human body reference distance S obtained from two bone points of the upper body; the two bone points may be chosen as, for example, the head bone point and the pelvic bone point, or the throat bone point and the crotch bone point. Because the distance between the two reference points is stable, the relative average speed V′ of the user can be obtained accurately from this proportion whether the user stands far away or close by. Preferably, in the application the throat bone point and the crotch bone point are selected as the two reference points, the human body reference distance S is obtained by calculating the Euclidean distance between the throat bone point and the crotch bone point, and the relative average speed V′ of the user is then obtained as the ratio of the average speed V of the user to the human body reference distance S.
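A small sketch of the distance normalisation just described, reusing `user_average_speed` from the previous sketch. The helper names are illustrative, and which frame's throat and crotch coordinates provide the reference distance (here: a single frame supplied by the caller) is an assumption, since the text does not fix it.

```python
import numpy as np

def human_reference_distance(throat_xy, crotch_xy):
    """Human body reference distance S: Euclidean pixel distance between
    the throat bone point and the crotch bone point."""
    diff = np.asarray(throat_xy, dtype=float) - np.asarray(crotch_xy, dtype=float)
    return float(np.linalg.norm(diff))

def relative_average_speed(user_speed, throat_xy, crotch_xy):
    """Relative average speed V' = V / S, largely independent of how far
    the user stands from the camera."""
    return user_speed / human_reference_distance(throat_xy, crotch_xy)
```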
After the relative average speed V′ of the user is obtained from the several skeleton points, V′ needs to be converted into more intuitive data, so that the user can see his or her current liveness at a glance during the movement. A score from 0 to 100 represents the user's liveness intuitively, so in the application the relative average speed V′ of the user is converted into a score of 0 to 100 to indicate the current liveness of the user. The method specifically comprises the following steps:
presetting a mapping function of the average speed and the liveness score of a user;
and obtaining the liveness score according to the obtained average speed of the user and the mapping function.
In the application, the mapping function may be a linear function or a nonlinear function. When the mapping function is preset, the liveness score is 0 when the relative average speed V′ of the user is 0, and the liveness score is 100 when the relative average speed V′ of the user is greater than or equal to the preset maximum relative average speed.
In the application, three mapping functions are preset, namely a first nonlinear function, a second nonlinear function and a linear function. The mapping function is selected according to the actual situation: if the score is expected to increase uniformly and linearly with the movement of the user's bone points, the linear mapping is selected; if the score is expected to increase rapidly as soon as the user starts moving, the first nonlinear function is selected; and if a high score is meant to be difficult to reach, the second nonlinear function is selected.
Wherein the first nonlinear function is:
the second nonlinear function is:
and the linear function is:
where Vmax is a preset relative average speed maximum.
Corresponding to the method in the application, the application also provides a human body liveness detection system based on action recognition, which comprises:
the acquisition module is used for acquiring the user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
the first calculation module is used for acquiring the average speed of the user in T seconds according to the continuous frame motion images;
the second calculation module is used for acquiring the activity of the user in T seconds according to the average speed of the user;
and the display module is used for displaying the activity of the user.
The system further comprises a skeleton point module, which is used for acquiring a plurality of skeleton points of the user in each action image and the coordinates corresponding to each skeleton point.
Corresponding to the method in the application, the application also provides an electronic device which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the human body activity detection method based on action recognition when executing the computer program.
Corresponding to the method in the application, the application also provides a storage medium, and the computer readable storage medium stores a computer program which realizes the steps of the human body activity detection method based on the action recognition when being executed by a processor.
The one or more technical schemes provided by the application have at least the following technical effects or advantages:
according to the method and the device, the average speed of the user is obtained through the continuous frame motion images of the user, the activity of the user when the motion occurs is obtained according to the average speed, the obtained activity is more accurate, and for small-amplitude motion, the average speed of the user can be obtained more accurately through the continuous frame motion images, and errors are reduced.
Meanwhile, the method converts the liveness into a score for display and shows the user the liveness of the current movement while the user is exercising, which better stimulates the user to exercise and raises enthusiasm.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings:
FIG. 1 is a flow chart of a human liveness detection method based on action recognition;
fig. 2 is a schematic diagram of the composition of a human body activity detection system based on motion recognition.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. In addition, the embodiments of the present application and the features in the embodiments may be combined with each other without collision.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; however, the present application may be implemented in other ways than those described herein, and the scope of the application is therefore not limited to the specific embodiments disclosed below.
It will be appreciated by those skilled in the art that in the present disclosure, the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," etc. refer to an orientation or positional relationship based on that shown in the drawings, which is merely for convenience of description and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore the above terms should not be construed as limiting the present application.
It will be understood that the terms "a" and "an" should be interpreted as referring to "at least one" or "one or more," i.e., in one embodiment, the number of elements may be one, while in another embodiment, the number of elements may be plural, and the term "a" should not be interpreted as limiting the number.
Example 1
Referring to fig. 1, fig. 1 is a flow chart of a human body liveness detection method based on action recognition, and the application provides a human body liveness detection method based on action recognition, which comprises the following steps:
collecting user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
acquiring the average speed of a user in T seconds according to continuous frame motion images;
and acquiring the activity of the user within T seconds according to the average speed of the user, and displaying the activity.
In this embodiment, there are various methods for obtaining the average speed of the user in T seconds from the continuous frame action images: one key point, several key points, or lines between key points may be determined in the action images according to the type of the user's action, a coordinate system may be established in the action images, and the average speed of the user in T seconds may then be obtained from the change in position of the key points across the continuous frame action images within T seconds.
In this embodiment, a correspondence between the average speed of the user and the liveness is preset; after the average speed of the user is obtained, the liveness can be obtained directly from the preset correspondence and displayed, so as to further motivate the user.
Example two
On the basis of embodiment 1, the method for obtaining the average speed of the user by selecting a plurality of bone points as key points specifically includes:
collecting user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
acquiring a plurality of skeleton points of a user in each action image and information corresponding to each skeleton point;
acquiring the average speed of each bone point in T seconds according to the information corresponding to each bone point;
The average speed of the jth bone point in T seconds is:
v_j = (1/T) · Σ_{i=2}^{n} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
where n is the number of action image frames included in T seconds, (x_i, y_i) are the coordinates of the skeleton point in the ith action image, and j and i are positive integers.
Obtaining the average speed of the user according to the average speed of each bone point;
each bone point is preset with a weight w_j, j being a positive integer, and the average speed of the user is:
V = (1/m) · Σ_{j=1}^{m} w_j · v_j
wherein v_j is the average speed of the jth bone point in T seconds, w_j is the preset weight of the jth bone point, and m is the number of bone points of the user in the action image.
And acquiring the activity of the user within T seconds according to the average speed of the user, and displaying the activity.
In this embodiment, the number of bone points may be 12, 14 or 16; the number and positions of the bone points are not limited in this embodiment, as long as the movements of the hands, legs and head can each be represented.
The human body liveness detection method based on action recognition in the application is described below with reference to specific examples:
step 1, collecting user images in an area within a preset time of 1 second to obtain continuous 25 frames of action images of a user;
step 2, acquiring a plurality of skeleton points of a user in each action image and information corresponding to each skeleton point;
and 2.1, setting up a rectangular coordinate system with the lower left corner of the action image as an origin, the horizontal direction as an x-axis and the vertical direction as a y-axis, and the unit length as 1 pixel, and positioning coordinates of 16 skeleton points and 16 skeleton points of a user in the action image by AI, wherein the coordinates comprise a head, a neck, a throat, a crotch, a left hand, a right hand, a left elbow, a right elbow, a left foot, a right foot, a left shoulder, a right shoulder, a left knee, a right knee, a left hip and a right hip.
Step 3, obtaining the average speed of each bone point within 1 second according to the information corresponding to each bone point;
The average speed of the jth bone point in 1 second is:
v_j = (1/T) · Σ_{i=2}^{n} √((x_i − x_{i−1})² + (y_i − y_{i−1})²), with T = 1 second,
where n is the number of action image frames included in 1 second (n is 25 in this embodiment), (x_i, y_i) are the coordinates of the skeleton point in the ith action image, and j and i are positive integers. The head, neck, throat, crotch, left hand, right hand, left elbow, right elbow, left foot, right foot, left shoulder, right shoulder, left knee, right knee, left hip and right hip are numbered in sequence, i.e. the head is numbered 1 and the right hip is numbered 16, so that v_1 is the average speed of the head skeleton point and v_16 is the average speed of the right hip skeleton point. The average speed v_j of each of the 16 bone points over the 25 frames of action images in 1 second is calculated by the above formula.
Step 4, obtaining the average speed of the user according to the average speed of each bone point;
each bone point is preset with a weight w j J is a positive integer, the average speed of the user:
wherein the average speed, w, of the jth bone point in 1 second j Preset weight for jth bone point, m isThe number of skeletal points of the user in the motion image, m in this embodiment, is 16.
In this embodiment, the weight of the head, neck, throat, crotch is 1, the weight of the left hand, right hand, left elbow, right elbow is 1.5, the weight of the left foot, right foot, left shoulder, right shoulder is 2.5, the weight of the left knee, right knee is 4, and the weight of the left hip and right hip is 4.5.
Step 5, obtain the relative average speed V′ of the user according to the average speed V of the user and the preset human body reference distance s.
In this embodiment, the human body reference distance s is determined by the throat bone point and the crotch bone point. With the throat bone point coordinates (x1, y1) and the crotch bone point coordinates (x2, y2), the reference pixel distance s is the ordinary two-point Euclidean distance:
s = sqrt((x1 − x2)² + (y1 − y2)²)
The relative average speed of the user is then V′ = V / s.
Step 6, obtain the liveness score according to the relative average speed V′ of the user;
step 6.1, presetting a mapping function of the average speed and the liveness score of the user;
the first nonlinear function is:
the second nonlinear function is:
the first nonlinear function is:wherein Vmax is a preset relative average velocity maximum;
Step 6.2, substitute the relative average speed V′ of the user as x into the mapping function preset in step 6.1; when the relative average speed V′ of the user is greater than or equal to the preset maximum relative average speed, the liveness score is 100.
In this embodiment, the first nonlinear function is selected as the mapping function to obtain the liveness, and after the liveness is obtained, the liveness score is displayed.
In this embodiment, the device is a smart fitness mirror. Various devices are built into the body of the smart fitness mirror, and a mirror surface for display and mirror-image display is provided on the front; while exercising, the user can view the liveness on the mirror surface of the smart fitness mirror.
In this embodiment, the liveness is displayed continuously on the mirror surface of the smart fitness mirror while the user exercises. In this example, T = 1 second and 25 action images are included per second. Assuming the consecutive frame numbers over a certain period are 1, 2, 3, 4, 5, 6, ..., 100 (about 4 seconds in total), the average speed of each bone point is calculated over frames 1-25, frames 2-26, frames 3-27, and so on. That is, at each moment the average speed of each bone point over the preceding 25 frames can be calculated, so the user's liveness can be displayed at every moment, achieving a better motivating effect.
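The per-moment computation over the preceding 25 frames can be sketched as a sliding window, reusing the helper functions from the sketches above. `extract_skeleton` stands in for the AI pose-estimation step and is hypothetical; the 25 fps / 1-second window values follow this embodiment.

```python
from collections import deque

WINDOW = 25   # frames per window (25 action images per second in this embodiment)
T = 1.0       # window length in seconds

def liveness_stream(frames, extract_skeleton, v_max):
    """Yield a liveness score at each moment once 25 frames are available.

    extract_skeleton(frame) is assumed to return a dict mapping each of the
    16 bone-point names to its (x, y) pixel coordinates in that frame.
    """
    window = deque(maxlen=WINDOW)
    for frame in frames:
        window.append(extract_skeleton(frame))
        if len(window) < WINDOW:
            continue  # not yet a full 1-second window (e.g. frames 1-24)
        # Regroup the coordinates per bone point over the last 25 frames.
        coords_by_bone = {name: [pose[name] for pose in window]
                          for name in window[0]}
        v = user_average_speed(coords_by_bone, T)
        throat, crotch = window[-1]["throat"], window[-1]["crotch"]
        v_rel = relative_average_speed(v, throat, crotch)
        yield liveness_score(v_rel, v_max, mode="nonlinear_fast")
```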
Example III
Referring to fig. 2, fig. 2 is a schematic diagram of the composition of a human body liveness detection system based on action recognition. A third embodiment of the present application provides a human body liveness detection system based on action recognition; on the basis of embodiment 1 or 2, the system comprises:
the acquisition module is used for acquiring the user images in the area within a preset time T seconds to obtain continuous frame action images of the user; the acquisition module comprises a camera, and in the embodiment, the user image is acquired through the camera;
and the skeleton point module is used for acquiring a plurality of skeleton points of the user in each action image and coordinates corresponding to each skeleton point.
The first calculation module is used for acquiring the average speed of the user in T seconds according to the continuous frame motion images;
the second calculation module is used for acquiring the activity of the user in T seconds according to the average speed of the user;
and the display module is used for displaying the activity of the user.
The first calculation module is used for obtaining the average speed of the user within T seconds according to a plurality of skeleton points in the continuous frame action images and coordinates corresponding to each skeleton point.
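How the modules listed above could be wired together is sketched below; the class, method and attribute names are invented for illustration and reuse the helper functions from the earlier sketches, so this is an assumed composition rather than the patented implementation.

```python
class LivenessDetectionSystem:
    """Illustrative composition of the acquisition, skeleton point,
    calculation and display modules described above."""

    def __init__(self, camera, pose_estimator, display, v_max, t_seconds=1.0):
        self.camera = camera                  # acquisition module (contains the camera)
        self.pose_estimator = pose_estimator  # skeleton point module
        self.display = display                # display module
        self.v_max = v_max
        self.t_seconds = t_seconds

    def detect_once(self):
        # Acquisition module: consecutive action images over T seconds.
        frames = self.camera.capture(self.t_seconds)
        # Skeleton point module: bone points and coordinates per image.
        poses = [self.pose_estimator(frame) for frame in frames]
        coords_by_bone = {name: [p[name] for p in poses] for name in poses[0]}
        # First calculation module: average speed of the user.
        v = user_average_speed(coords_by_bone, self.t_seconds)
        v_rel = relative_average_speed(v, poses[-1]["throat"], poses[-1]["crotch"])
        # Second calculation module: liveness from the average speed.
        score = liveness_score(v_rel, self.v_max)
        # Display module: show the liveness to the user.
        self.display.show(score)
        return score
```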
Example IV
The fourth embodiment of the application provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the human body activity detection method based on action recognition when executing the computer program.
The processor may be a central processing unit, or may be other general purpose processors, digital signal processors, application specific integrated circuits, off-the-shelf programmable gate arrays or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be used to store the computer program and/or module, and the processor implements the various functions of the human body liveness detection device based on action recognition by running or executing the computer program and/or module stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required for at least one function (such as a sound playing function or an image playing function), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
Example five
A fifth embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program when executed by a processor implements the steps of the human body activity detection method based on motion recognition.
The computer storage media of embodiments of the application may take the form of any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The human body liveness detection method based on action recognition is characterized by comprising the following steps of:
collecting user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
acquiring the average speed of a user in T seconds according to continuous frame motion images;
and acquiring the activity of the user within T seconds according to the average speed of the user, and displaying the activity.
2. The human body liveness detection method based on motion recognition according to claim 1, wherein the step of obtaining the average speed of the user within T seconds according to the continuous frame motion image comprises the following steps:
acquiring a plurality of skeleton points of a user in each action image and information corresponding to each skeleton point;
acquiring the average speed of each bone point in T seconds according to the information corresponding to each bone point;
the average speed of the user is obtained from the average speed of each bone point.
3. The human body liveness detection method based on action recognition according to claim 2, wherein the average speed of the jth bone point in T seconds is:
v_j = (1/T) · Σ_{i=2}^{n} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
where n is the number of action image frames included in T seconds, (x_i, y_i) are the coordinates of the skeleton point in the ith action image, and j and i are positive integers.
4. The human body liveness detection method based on action recognition according to claim 2 or 3, wherein each bone point is preset with a weight w_j, j being a positive integer, and the average speed of the user is:
V = (1/m) · Σ_{j=1}^{m} w_j · v_j
wherein v_j is the average speed of the jth bone point in T seconds, w_j is the preset weight of the jth bone point, and m is the number of bone points of the user in the action image.
5. The human body liveness detection method based on action recognition according to claim 4, wherein the relative average speed of the user is obtained according to the average speed of the user and a preset human body reference distance, and the liveness of the user within T seconds is obtained according to the relative average speed of the user.
6. The human body liveness detection method based on action recognition according to claim 1, wherein obtaining the liveness of the user within T seconds according to the average speed of the user specifically comprises the following steps:
presetting a mapping function of the average speed and the liveness score of a user;
and obtaining the liveness score according to the obtained average speed of the user and the mapping function.
7. A human body liveness detection system based on action recognition, characterized by comprising:
the acquisition module is used for acquiring the user images in the area within a preset time T seconds to obtain continuous frame action images of the user;
the first calculation module is used for acquiring the average speed of the user in T seconds according to the continuous frame motion images;
the second calculation module is used for acquiring the activity of the user in T seconds according to the average speed of the user;
and the display module is used for displaying the activity of the user.
8. The motion recognition based human liveness detection system of claim 7 further comprising:
and the skeleton point module is used for acquiring a plurality of skeleton points of the user in each action image and coordinates corresponding to each skeleton point.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the human activity detection method based on motion recognition as claimed in any one of claims 1-6.
10. A storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the human activity detection method based on motion recognition according to any one of claims 1-6.
CN202210125184.9A 2022-02-10 2022-02-10 Human body liveness detection method, system and device based on action recognition and medium Pending CN116631045A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210125184.9A CN116631045A (en) 2022-02-10 2022-02-10 Human body liveness detection method, system and device based on action recognition and medium
PCT/CN2022/094627 WO2023151200A1 (en) 2022-02-10 2022-05-24 Action-recognition-based human body activity level measurement method, system and apparatus, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210125184.9A CN116631045A (en) 2022-02-10 2022-02-10 Human body liveness detection method, system and device based on action recognition and medium

Publications (1)

Publication Number Publication Date
CN116631045A true CN116631045A (en) 2023-08-22

Family

ID=87563517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210125184.9A Pending CN116631045A (en) 2022-02-10 2022-02-10 Human body liveness detection method, system and device based on action recognition and medium

Country Status (2)

Country Link
CN (1) CN116631045A (en)
WO (1) WO2023151200A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035557B (en) * 2014-05-22 2017-04-19 华南理工大学 Kinect action identification method based on joint activeness
CN110338804A (en) * 2019-07-02 2019-10-18 中山大学 Human body liveness appraisal procedure based on action recognition
CN112169296B (en) * 2019-07-05 2021-10-22 荣耀终端有限公司 Motion data monitoring method and device
JP7420246B2 (en) * 2020-05-27 2024-01-23 日本電気株式会社 Video processing device, video processing method, and program

Also Published As

Publication number Publication date
WO2023151200A1 (en) 2023-08-17


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination