CN112102360B - Action type identification method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN112102360B
Authority
CN
China
Prior art keywords
gesture
human body
thermal image
stage
thermal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010824543.0A
Other languages
Chinese (zh)
Other versions
CN112102360A (en)
Inventor
叶景泰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010824543.0A priority Critical patent/CN112102360B/en
Publication of CN112102360A publication Critical patent/CN112102360A/en
Application granted granted Critical
Publication of CN112102360B publication Critical patent/CN112102360B/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/187 - Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an action type identification method and apparatus, an electronic device, and a medium. The method comprises the following steps: acquiring a human body thermal image that includes at least the body of a user, determining the human body region in the thermal image, and extracting feature information of the human body region from it; determining, from the feature information of the human body regions of two adjacent frames of thermal images, the user's posture change between those two frames, and generating time-axis data comprising multiple frames of human body thermal images divided into different posture stages according to the posture changes; obtaining, from the time-axis data, the thermal images included in each posture stage and deriving a representative posture thermal image for each stage; and obtaining a first representative posture thermal image corresponding to a first posture stage and a second representative posture thermal image corresponding to the following second posture stage, and determining from the two representative images whether the user performed a target action between the first posture stage and the second posture stage.

Description

Action type identification method and device, electronic equipment and medium
Technical Field
The present application relates to the field of thermal imaging technologies, and in particular, to a method and apparatus for identifying a motion type, an electronic device, and a medium.
Background
Good sleep quality is very important and affects a person's mental state, health, and other factors, so people hope to improve sleep quality by combining it with smart-home technology to achieve intelligent sleep.
Monitoring a sleeper's quilt-kicking behavior is of great significance. On the one hand, it indirectly reflects thermal comfort, such as the quilt being too hot or too cold. On the other hand, children in particular may kick off the quilt while sleeping but are unable to cover themselves again, making them prone to catching cold. There is therefore a need for real-time monitoring and identification of quilt-kicking behavior.
The monitoring products currently on the market, such as wearable sensors and quilt-mounted limit switches, are detrimental to sleeping comfort.
Disclosure of Invention
The application provides an action type identification method, an action type identification device, electronic equipment and a medium.
In a first aspect, an action type identification method is provided, comprising:
acquiring a human body thermal image, wherein the thermal image includes at least the body of a user;
determining a human body region in the thermal image, and extracting feature information of the human body region from the thermal image;
determining the user's posture change between two adjacent frames of thermal images according to the feature information of the human body regions of those two frames;
generating time-axis data according to the user's posture changes, wherein the time-axis data comprises multiple frames of human body thermal images divided into different posture stages according to the posture changes;
obtaining, from the time-axis data, the thermal images included in each posture stage, and deriving from them a representative posture thermal image for each stage, wherein one representative posture thermal image represents the user's posture during one posture stage;
and obtaining a first representative posture thermal image corresponding to a first posture stage and a second representative posture thermal image corresponding to a second posture stage in the time-axis data, and determining from the two representative images whether the user performed a target action between the first posture stage and the second posture stage, wherein the second posture stage is the stage following the first.
In an optional embodiment, determining the user's posture change between the two adjacent frames of thermal images according to the feature information of their human body regions comprises:
obtaining the temperature difference of pixels at the same position between the two adjacent frames;
counting the pixels whose temperature difference is not smaller than a temperature difference threshold;
and if the number of such pixels is greater than a count threshold, determining that a posture change occurred between the two adjacent frames.
In an optional implementation, the two adjacent frames comprise a history frame and a reference frame, the history frame being the frame preceding the reference frame;
generating the time-axis data according to the user's posture changes comprises:
when a posture change occurs between the history frame and the reference frame, adding the reference frame to a queue; while the queue is not full, taking the reference frame as the new history frame and the frame following it as the new reference frame, and repeating the step of determining the user's posture change between the two adjacent frames;
and when the queue is full, adding the queue to the time-axis data as a new posture stage.
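The queue-based staging above can be sketched in Python as a minimal illustration. The function name, the queue capacity, and the `changed` predicate are assumptions introduced for the example, not taken from the patent:

```python
def segment_stages(frames, changed, queue_size=8):
    """Group thermal frames into posture stages on a timeline.

    frames:     ordered thermal frames (each e.g. a 2-D temperature grid)
    changed:    predicate(history, reference) -> True when the user's
                posture changed between the two adjacent frames
    queue_size: illustrative capacity at which a queue becomes a stage
    """
    if not frames:
        return []
    stages, queue = [], []
    history = frames[0]
    for reference in frames[1:]:
        # Add the reference frame to the queue when a posture change
        # occurred between the history frame and the reference frame.
        if changed(history, reference):
            queue.append(reference)
        # When the queue is full, it becomes one posture stage
        # of the time-axis data.
        if len(queue) == queue_size:
            stages.append(queue)
            queue = []
        # The reference frame becomes the history frame; the next
        # frame becomes the reference frame on the next iteration.
        history = reference
    return stages
```

Any posture-change test (such as the pixel-difference count described elsewhere in the application) can be passed in as the `changed` callback.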
In an optional implementation, obtaining the thermal images included in each posture stage from the time-axis data and deriving the representative posture thermal image for each stage comprises:
obtaining the temperature value and position of every pixel in all thermal image frames in the queue, and averaging the temperature values of pixels at the same position across those frames;
constructing an average frame of all the thermal image frames from the averaged temperature values and the pixel positions;
and taking the average frame as the representative posture thermal image of the posture stage.
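The average-frame construction is a pixel-wise mean over the stage's frames. A short sketch, assuming frames are plain 2-D lists of temperature values of equal size:

```python
def average_frame(frames):
    """Pixel-wise average of all thermal frames in one posture stage;
    the result serves as the stage's representative posture image."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```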
In an optional embodiment, determining whether the user performed a target action between the first posture stage and the second posture stage based on the first and second representative posture thermal images comprises:
obtaining, from the first and second representative posture thermal images, the human body region parameters corresponding to each posture stage, wherein a human body region parameter reflects the size of the human body region in a representative posture thermal image and is at least one of the perimeter, the area, or the bounding rectangle of the human body region;
and comparing the human body region parameters of the two posture stages to determine whether the user performed a target action between them.
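One way this comparison could look in code, using the region area as the parameter. The concrete decision rule (flag an action when the warm region grows markedly, as when a quilt is kicked off and more of the body is exposed) and all thresholds are hypothetical illustrations; the patent only states that the parameters of the two stages are compared:

```python
def region_area(frame, temp_threshold=28):
    """Crude 'area' parameter of the human region: the number of
    pixels above a temperature threshold (perimeter or bounding
    rectangle could equally be used)."""
    return sum(1 for row in frame for t in row if t > temp_threshold)

def target_action_occurred(rep1, rep2, growth=1.5, temp_threshold=28):
    """Hypothetical rule: report a target action between two
    consecutive posture stages when the human-region area of the
    second representative image exceeds that of the first by the
    given growth factor."""
    a1 = region_area(rep1, temp_threshold)
    a2 = region_area(rep2, temp_threshold)
    return a1 > 0 and a2 >= growth * a1
```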
In an optional embodiment, before acquiring the human body thermal image, the method further comprises:
acquiring a thermal image to be processed from a thermal imaging sensor, and detecting whether the temperature value of any pixel in it is higher than a temperature threshold;
if the temperature value of a pixel in the thermal image is higher than the temperature threshold, binarizing the thermal image, then extracting the connected regions in it and obtaining feature parameters of the connected regions, the feature parameters reflecting their shape characteristics;
and determining, from the feature parameters of the connected regions, whether the thermal image to be processed is a human body thermal image.
In an optional embodiment, determining the human body region in the thermal image comprises:
obtaining the gradient values of each pixel in the thermal image, a gradient value being the temperature difference between the pixel and its neighbour in the up, down, left, or right direction;
and if a pixel's gradient value in at least one direction is greater than a gradient threshold, determining the region formed by such pixels as the human body region.
In a second aspect, an action type identification apparatus is provided, comprising:
an acquisition module for acquiring a human body thermal image that includes at least the body of a user;
a feature extraction module for determining a human body region in the thermal image and extracting feature information of the human body region from it;
a judging module for determining the user's posture change between two adjacent frames of thermal images according to the feature information of their human body regions;
a generating module for generating time-axis data according to the user's posture changes, the time-axis data comprising multiple frames of human body thermal images divided into different posture stages according to the posture changes;
an analysis module for obtaining the thermal images included in each posture stage from the time-axis data and deriving a representative posture thermal image for each stage, one representative image representing the user's posture during one stage;
the analysis module being further configured to obtain a first representative posture thermal image corresponding to a first posture stage and a second representative posture thermal image corresponding to a second posture stage in the time-axis data, and to determine from the two representative images whether the user performed a target action between the two stages, the second posture stage being the stage following the first.
In a third aspect, there is provided an electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps as in the first aspect and any one of its possible implementations.
In a fourth aspect, there is provided a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of the first aspect and any one of its possible implementations described above.
In summary, the present application acquires a human body thermal image including at least the user's body, determines the human body region in it, and extracts the region's feature information; determines the user's posture change between adjacent frames from that feature information and generates time-axis data in which the frames are divided into different posture stages according to the posture changes; derives from the frames of each posture stage a representative posture thermal image that represents the user's posture during that stage; and, from the first and second representative posture thermal images of two consecutive stages, determines whether the user performed a target action between the first posture stage and the second.
By analysing thermal images, time-axis data divided into posture stages is generated to track the monitored subject's posture changes, and the representative posture thermal image of each stage is analysed. Rather than judging by the simple rule of heat change in a thermal image, the method considers how long the subject maintains a posture, which reduces the interference of temporary actions such as turning over, so the user's actions and state can be judged more accurately. It is suited to monitoring actions such as kicking off the quilt in a sleep scenario and provides a precondition for the responses of downstream smart devices, without requiring wearable sensors or the like, thereby improving user comfort.
Drawings
In order to more clearly describe the embodiments of the present application or the technical solutions in the background art, the following description will describe the drawings that are required to be used in the embodiments of the present application or the background art.
FIG. 1 is a flow chart of a method for identifying action types according to an embodiment of the present application;
fig. 2 is a schematic diagram of a thermal image of a human body according to an embodiment of the present application;
FIG. 3 is a schematic view of a sleeping position time axis according to an embodiment of the present application;
FIG. 4 is a thermal image schematic diagram of three sleeping positions provided by an embodiment of the present application;
FIG. 5 is a flowchart illustrating another method for identifying action types according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an action type recognition device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for identifying an action type according to an embodiment of the application. The method may include:
101. Acquire a human body thermal image, wherein the thermal image includes at least the body of a user.
The execution body of the embodiments of the present application may be an action type identification apparatus, which may be an electronic device. In a specific implementation, the electronic device is a terminal, also called a terminal device, including but not limited to portable devices such as a mobile phone, laptop, or tablet with a touch-sensitive surface (e.g., a touch-screen display and/or a touch pad). It should also be appreciated that in some embodiments the device is not a portable communication device but a desktop computer with a touch-sensitive surface.
In one embodiment, the thermal imaging sensor may be mounted on the wall at the head of the bed, for example centered above the headboard at a height of about 1.8 m, so that the person's body from head to knees is imaged in the central region of the thermal image. Embodiments of the present application do not limit the type or placement of the thermal imaging sensor.
The thermal imaging sensor may periodically acquire thermal images, including human body thermal images, which are provided to the action type identification apparatus for processing. A human body thermal image includes at least the user's body and can be used for human posture analysis.
In an optional embodiment, the sensor frame rate may be set to 2 frames per second and the acquisition window preset to night-time, e.g. 0:00-7:00, so that thermal image data covering the whole night, from when the user enters the sleep state until getting up in the morning, can be obtained and identified from the images. The thermal imaging sensor may be installed and the acquisition times configured as desired; embodiments of the application are not limited in this regard.
In the embodiments of the present application, a thermal imaging image (thermal image) of the environment can be acquired by a thermal imaging sensor. Thermal imaging is a non-contact detection technique that senses infrared energy (heat) and converts it into an electrical signal, generating a thermal image and temperature values on a display, from which temperature values can be calculated. Every object in nature, whether an arctic glacier, a flame, a human body, or even the extremely cold deep space of the universe, emits infrared radiation as long as its temperature is above absolute zero (about -273 °C), as a result of the thermal motion of the molecules inside the object. The radiated energy is proportional to the fourth power of the object's absolute temperature, and the peak radiated wavelength is inversely proportional to its temperature. Infrared imaging is based on detecting an object's level of radiated energy: the measured temperature distribution is converted by the system into a thermal image of the object, displayed in grey levels or pseudo-colour, from which the state of the object can be judged.
The thermal imaging sensor in the embodiments of the present application may acquire thermal images periodically. Optionally, its resolution may be 24×32, so that each acquisition outputs a 24×32 frame of thermal image data in which the value of each pixel is a temperature value. The distribution of heat and the specific temperature values can be obtained through thermal image analysis.
The action type recognition device may acquire the thermal image of the human body, and execute step 102.
102. Determine a human body region in the human body thermal image, and extract feature information of the human body region from the thermal image.
Optionally, the thermal image acquired by the thermal imaging sensor may be preprocessed to obtain a thermal image of the human body therein.
In one embodiment, before the step 101, the method further includes:
acquiring a thermal image to be processed from a thermal imaging sensor, and detecting whether the temperature value of any pixel in the thermal image is higher than a temperature threshold;
if the temperature value of a pixel in the thermal image is higher than the temperature threshold, binarizing the thermal image, then extracting the connected regions in it and obtaining feature parameters of the connected regions, the feature parameters reflecting their shape characteristics;
and determining, from the feature parameters of the connected regions, whether the thermal image to be processed is a human body thermal image.
Binarization, as used in the embodiments of the present application, is one of the simplest image segmentation methods: it converts a grey-scale image into a binary image by setting pixels above a critical grey value to the maximum grey level and pixels below it to the minimum.
In the embodiments of the present application, pixels in regions above the temperature threshold are marked 1 and all other pixels 0; connected regions are then extracted, e.g. by grouping adjacent pixels that are all 1 into one connected region. From the connected regions obtained in this way, feature parameters can be computed, which may include the region's area, perimeter, and the length and width of its bounding rectangle, without limitation here.
Further, whether the thermal image to be processed is a human body thermal image may be determined according to the feature parameters of the connected regions.
In an optional embodiment, the thermal image to be processed may be determined to be a human body thermal image according to the area of a connected region, namely when the area of a connected region in the image is greater than an area threshold.
Due to factors in the external environment, the preliminarily determined connected regions may include not only the human body but also heat-emitting interfering objects. An area threshold can be preset so that small heat sources such as mobile phones and hot-water bottles are filtered out, improving accuracy. Similarly, filtering can be done by setting thresholds on the perimeter or on the length and width of the bounding rectangle, which is not repeated here.
See the human body thermal image shown in fig. 2. The temperature value of each pixel is marked on the image, so the human shape (human body region) inside the white frame can be seen. Because the quilt leaves mainly the head and upper limbs exposed, the temperature of the human body region is higher than the surroundings, mostly 24-30 °C.
Specifically, to extract the human body region preliminarily, the temperature threshold may be set and each pixel of the thermal image to be processed compared against it; if a pixel's temperature value is higher than the threshold, the image is binarized and its connected regions are extracted.
For example, with a temperature threshold of 28, if some pixel in the image to be processed has a temperature value greater than 28, binarization can be performed; if no pixel's temperature value exceeds 28, it can be determined that the image is not a human body thermal image. Through these steps, a human body thermal image can be preliminarily identified.
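The preprocessing pipeline just described (threshold, binarize, extract connected regions, filter by area) can be sketched as follows. The function names and the concrete threshold values are illustrative assumptions; frames are plain 2-D lists of temperature values:

```python
def binarize(frame, temp_threshold=28):
    """Mark pixels above the temperature threshold 1, all others 0."""
    return [[1 if t > temp_threshold else 0 for t in row] for row in frame]

def connected_regions(mask):
    """Extract 4-connected regions of 1-pixels via flood fill;
    each region is returned as a list of (row, col) positions."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

def is_human_thermal_image(frame, temp_threshold=28, area_threshold=4):
    """A connected region larger than the area threshold is taken to
    be a human body; small warm objects (phone, hot-water bottle)
    are filtered out by the area check."""
    regions = connected_regions(binarize(frame, temp_threshold))
    return any(len(region) > area_threshold for region in regions)
```

Perimeter or bounding-rectangle checks could be added to `is_human_thermal_image` in the same way as the area check.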
To determine the human body region in the thermal image more accurately, temperature thresholding is combined with directional gradient processing to capture temperature changes; that is, determining the human body region in the thermal image comprises:
obtaining the gradient values of each pixel in the thermal image, a gradient value being the temperature difference between the pixel and its neighbour in the up, down, left, or right direction;
and if a pixel's gradient value in at least one direction is greater than a gradient threshold, determining the region formed by such pixels as the human body region.
Specifically, in addition to the temperature threshold, the gradient values of each pixel, i.e. the temperature differences to its neighbours in the four directions (left_grad, right_grad, up_grad, down_grad), may be obtained. A pixel can be assigned to the human body region as long as one of its four directional gradient values exceeds a preset gradient threshold (e.g., 2). The temperature threshold set in this scheme may be higher than in the method using the temperature threshold alone. Because it considers the difference between body and ambient temperature in addition to the body temperature exceeding the threshold, this method extracts the human body region more effectively.
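The four-direction gradient rule can be sketched as below. Absolute differences are used here for simplicity (the patent lists the left/right/up/down gradients without specifying sign handling), and the threshold value 2 follows the example in the text:

```python
def human_region_mask(frame, grad_threshold=2):
    """Mark a pixel as part of the human region if its temperature
    differs from any up/down/left/right neighbour by more than the
    gradient threshold."""
    rows, cols = len(frame), len(frame[0])
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Check the four directional gradients; one exceeding
            # the threshold is enough to mark the pixel.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and abs(frame[r][c] - frame[nr][nc]) > grad_threshold):
                    mask[r][c] = 1
                    break
    return mask
```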
After this preprocessing, the human body region in the human thermal image is determined, and feature extraction can be performed to obtain the feature information of the human body region.
103. And determining the posture change condition of the user between the two adjacent frames of human body thermal images according to the characteristic information of the human body areas of the two adjacent frames of human body thermal images.
Specifically, the feature information of the human body region of a human thermal image may include the temperature values of the pixel points in the human body region, as well as the area and perimeter of the human body region and the length and width of its bounding rectangle. By comparing the differences in the feature information of the human body regions in two adjacent frames of human thermal images, the posture change of the user between those two frames can be judged.
In an alternative embodiment, the step 103 includes:
acquiring the temperature difference value of pixel points at the same position between two adjacent frames of human body thermal images in the human body thermal images;
obtaining the number of pixels with the temperature difference value not smaller than a temperature difference value threshold;
and if the number of the pixel points, of which the temperature difference is not smaller than the temperature difference threshold, is larger than a number threshold, determining that the gesture of the user is changed between the two adjacent frames of human thermal images.
The differences referred to in the embodiments of the present application are also known as difference functions or difference operations, and the result of a difference reflects the variation between discrete quantities. Based on a preset temperature difference threshold (e.g., 2), whether each temperature difference is smaller than the threshold is judged in turn, and the number of pixel points whose temperature difference is not smaller than the threshold is counted; the larger this number, the larger the posture difference between the human body regions of the two adjacent frames of human thermal images.
Based on a preset number threshold, if the counted number is greater than the number threshold, it can be determined that the user's posture has changed; for example, in sleeping posture monitoring, a turning action is considered to have occurred.
Each time a new frame of human thermal image is acquired, it is compared with the adjacent previous frame of human thermal image to determine whether the user's posture has changed.
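The frame-differencing test in steps above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function name is hypothetical, and the default thresholds (difference of 2, count of 50) stand in for the unspecified preset values.

```python
import numpy as np

def posture_changed(prev_frame, cur_frame, diff_thresh=2.0, count_thresh=50):
    """Difference the two adjacent thermal frames pixel-by-pixel and
    declare a posture change when enough pixels moved by at least
    diff_thresh degrees (threshold values are illustrative)."""
    diff = np.abs(np.asarray(cur_frame, float) - np.asarray(prev_frame, float))
    return int((diff >= diff_thresh).sum()) > count_thresh
```

Identical frames never trigger a change; the count threshold controls how large a region must move before a turning action is declared.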
104. Generating time axis data according to the posture change condition of the user, wherein the time axis data comprises a plurality of frames of human body thermal images, and the plurality of frames of human body thermal images are divided into different posture stages according to the posture change condition.
Specifically, the periodically acquired human thermal images may be arranged on a time axis in chronological order, and the multiple frames are divided into different posture stages according to the posture changes. It will be appreciated that when the user's posture is determined to have changed between two adjacent frames of human thermal images, the boundary between those two frames can be used to divide two adjacent posture stages.
Optionally, a division duration threshold may also be set, where each independent posture stage must satisfy the constraint that its duration is greater than the division duration threshold (e.g., 1 minute). The resulting time axis can be output and displayed as an image, such as a sleeping posture timeline diagram generated after sleep ends.
105. And acquiring human body thermal images included in each gesture stage from the time axis data, and acquiring a representative gesture thermal image corresponding to each gesture stage according to the human body thermal images included in each gesture stage, wherein one representative gesture thermal image is used for representing the gesture of a user in one gesture stage.
Specifically, for each posture stage in the time axis data, a representative posture thermal image may be selected from the human thermal images included in that stage to represent the user's posture during the stage. For example, the human thermal image at the middle moment of each posture stage may be taken as the representative posture thermal image of that stage, which is not limited herein. Once the representative posture thermal images of the posture stages are obtained, step 106 may be performed.
For example, reference may be made to the sleeping posture timeline diagram shown in fig. 3. According to the embodiment of the application, human thermal images are periodically acquired during sleep, and the time axis is divided into stages according to the changes of the sleeping posture in those images, each stage satisfying the constraint that its duration is longer than 1 minute. The five time periods a, b, c, d, e in fig. 3 are thereby obtained, and the user's sleeping posture within each period remains essentially unchanged. Different time periods can be displayed in different colors, and information such as the timeline's clock times and the duration of each stage can be marked, so as to intuitively show how the user's sleeping posture changed during sleep. The sleeping posture at the middle moment of a period can be used as the sleeping posture thermal image (representative posture thermal image) of that stage. For example, fig. 4 shows three sleeping posture thermal images, taken at the middle moments of the three posture stages a, b, and c on the time axis.
106. And acquiring a first representative gesture thermal image corresponding to a first gesture stage and a second representative gesture thermal image corresponding to a second gesture stage in the time axis data, and determining whether or not a target motion occurs in the user from the first gesture stage to the second gesture stage based on the first representative gesture thermal image and the second representative gesture thermal image, wherein the second gesture stage is a stage next to the first gesture stage.
Wherein the second posture stage is the stage next to the first posture stage, and this step performs user posture analysis between two adjacent posture stages, specifically by comparing their representative posture thermal images. A preset action recognition model can be invoked, the representative posture thermal images of the two stages passed in, and the feature changes of the human body region analyzed to judge whether the user performed the target action.
In one embodiment, the human body region parameters corresponding to the first posture stage and the second posture stage may be acquired from the first representative posture thermal image and the second representative posture thermal image, where a human body region parameter reflects the size of the human body region in a representative posture thermal image and is at least one of the perimeter, the area, or the bounding rectangle of the human body region;
and comparing the human body region parameters corresponding to the first gesture stage and the second gesture stage, and determining whether the user generates a target action from the first gesture stage to the second gesture stage.
From the two representative posture thermal images being compared, a human body region parameter is extracted, which may be at least one of the perimeter, the area, or the bounding rectangle of the human body region; a human body region parameter reflects the size of the human body region in a representative posture thermal image. By comparing the human body region parameters corresponding to the first and second posture stages and determining the degree of difference between them, it can be determined whether the user performed a target action from the first posture stage to the second posture stage.
In an alternative embodiment, the human body region parameter is the area of the human body region. Taking sleeping posture action detection as an application scenario, the user's sleeping postures in two adjacent stages need to be compared. Specifically, the area difference between the human body regions of the first and second posture stages may be calculated and its absolute value taken; if the absolute value of the area difference is greater than a preset area threshold, an action is determined to have occurred, and whether the area of the human body region decreased or increased can be determined from the sign of the difference, so that the user's action is identified as covering the quilt (the area of the human body region decreases) or kicking the quilt off (the area of the human body region increases).
Alternatively, for convenience of description, the area of the human body region in the first representative posture thermal image is referred to as the first area, and that of the second representative posture thermal image as the second area. The following calculation may also be performed: (first area - second area)/first area; the result is compared with a preset ratio threshold, and if it exceeds the threshold, a quilt kicking action is determined to have occurred. Optionally, action classification can be further refined by using different action recognition models or judgment rules to detect different actions, such as covering the quilt or turning over. The embodiments of the present application do not limit the specific judgment rules of the action recognition model.
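The signed area-difference rule described above can be sketched as follows. This is a hedged illustration under the text's own convention (kicking the quilt off exposes more warm body, so the area grows; covering it shrinks the area): the function name, return labels, and the default area threshold are all hypothetical.

```python
def classify_cover_action(first_area, second_area, area_thresh=40):
    """Compare the body-region areas of two adjacent posture stages.
    A large increase suggests the quilt was kicked off; a large
    decrease suggests the user covered up (threshold is illustrative)."""
    delta = second_area - first_area
    if abs(delta) <= area_thresh:
        return "no_action"
    return "kick_quilt" if delta > 0 else "cover_quilt"
```

A relative version, as the text also suggests, would divide the difference by the first area and compare the ratio with a preset ratio threshold instead.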
Further, when the target action is detected, an instruction corresponding to the target action may be issued. For example, when a quilt kicking action is determined to have occurred, corresponding automatic processing by the smart home can be triggered, such as issuing a reminder for intelligent kick prevention or intelligently adjusting the temperature, wind direction, and so on of an air conditioner.
The embodiments of the present application can be applied to sleep state monitoring scenarios: a thermal imaging sensor collects thermal images of the human body while the user sleeps, and image processing techniques then process and analyze the thermal images to extract the relevant features. Finally, the type of quilt kicking action is judged by preset logic rules. The entire computation can be deployed locally without uploading to the cloud, thereby protecting the user's privacy. Based on action information such as kicking off or covering the quilt, a basis can then be provided for subsequent processing by intelligent devices.
According to the embodiments of the present application, a human thermal image including at least the user's body is acquired; the human body region in the image is determined and its feature information extracted; the user's posture change between two adjacent frames of human thermal images is determined from the feature information of their human body regions; time axis data is generated from the posture changes, containing multiple frames of human thermal images divided into different posture stages; the human thermal images of each posture stage are obtained from the time axis data, and a representative posture thermal image is obtained for each stage, one representative posture thermal image representing the user's posture in one stage; then the first representative posture thermal image corresponding to a first posture stage and the second representative posture thermal image corresponding to a second posture stage in the time axis data are obtained, and whether the user performed a target action from the first posture stage to the second posture stage is determined from the two representative posture thermal images, where the second posture stage is the stage next to the first posture stage.
By analyzing thermal images to detect the posture changes of the monitored subject, generating time axis data divided into different posture stages, and analyzing the representative posture thermal image of each stage, the judgment is not based merely on a simple rule of thermal change in the images; that is, the duration for which the monitored subject maintains a posture is considered, which reduces the interference of transient actions such as turning over. The user's actions and state can thus be judged more accurately. The method is suitable for monitoring actions such as kicking off the quilt in a sleep scenario, provides a precondition for the response processing of subsequent intelligent devices, requires no wearable sensors for monitoring, and improves user comfort.
Referring to fig. 5, fig. 5 is a flowchart illustrating another method for identifying an action type according to an embodiment of the application. As shown in fig. 5, the method may specifically include:
501. and acquiring a human thermal image, wherein the human thermal image at least comprises a body of a user.
502. And determining a human body area in the human body thermal image, and extracting characteristic information of the human body area from the human body thermal image.
503. And determining the posture change condition of a user between two adjacent frames of human thermal images according to the characteristic information of the human body areas of the two adjacent frames of human thermal images, wherein the two adjacent frames of human thermal images comprise a history frame and a reference frame, and the history frame is the previous frame of the reference frame.
The above steps 501 to 503 may refer to the specific descriptions of the steps 101 to 103 in the embodiment shown in fig. 1, and are not repeated here.
504. Adding the reference frame to a queue when the user has changed posture between the history frame and the reference frame, and taking the reference frame as the history frame and taking the next frame of the reference frame as the reference frame when the queue member is not full; and executing the step of determining the posture change condition of the user between the two adjacent frames of human thermal images.
Specifically, of the two adjacent frames of human thermal images, for ease of understanding, one frame is called the reference frame and the frame preceding it the history frame; during data processing, the reference frame may be the most recently acquired frame of human thermal image.
A queue may be predefined for storing and processing the human thermal image data, and it may be used to generate a posture stage in the time axis data. When a posture change is determined to have occurred between the history frame and the reference frame (for example, a turning action is considered to exist in sleeping posture monitoring), a new posture stage needs to start from the reference frame, so the reference frame can be placed in the queue as its first frame and processing continues; when the queue is not full and a new human thermal image is acquired, the new image becomes the reference frame, the previous reference frame becomes the corresponding history frame, and the processing flow is repeated.
If the user's posture did not change between the history frame and the reference frame, the reference frame can be assigned to the same time period as the history frame, indicating that both belong to the same posture stage.
505. When the queue is full, the queue is added to the time axis data as a new posture stage.
The above steps are executed until the queue is full, at which point the queue is added to the time axis data as a new posture stage. By analyzing and judging each frame through the queue, the time axis data can be obtained, reflecting the posture changes during the process.
Specifically, the above method may be performed corresponding to the following procedure:
(1) Define a buffer queue Q, whose window length may be 2×60=120. The window length limits the duration of a time period in the time axis data, that is, a divided posture stage satisfies the constraint that its duration is longer than the preset duration. Read one frame as the history frame pre_frame and push it into the queue; define the corresponding history sleeping posture pre_sleep_posture and initialize it to empty; set a change_flag to indicate whether a posture change has occurred, initialized to False.
Further, it is possible to perform:
(2) Read one frame of data, frame (the reference frame), perform a difference operation between pre_frame and frame to obtain the temperature difference of pixel points at the same positions between the two frames, and count the number of pixel points whose temperature difference is greater than or equal to 2. If this number is greater than the threshold T, a turning action (posture change) is considered to have occurred.
(3) When a turning action is determined to exist, empty the queue Q, add frame to the queue, and set change_flag to True (a posture change occurred); otherwise, simply add frame to the queue Q.
(4) Judge whether the queue Q is full, that is, whether the number of elements in Q equals the queue length 120. If the queue is not full, update the history frame pre_frame to frame and jump back to step (2); if the queue is full, step (5) may be performed.
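The buffered-queue flow of steps (1)-(4) can be sketched as a loop over frames. This is a simplified sketch, not the patented procedure: it only shows how a posture change empties the queue and how a full queue emits one stage, omitting the average-frame and flag handling of step (5); the function name and the `changed` callback are hypothetical.

```python
from collections import deque

def segment_stages(frames, changed, window=120):
    """Accumulate frames in a queue of length `window`. A detected
    posture change empties the queue and starts a new stage; a full
    queue is emitted as one complete posture stage."""
    stages, q = [], deque(maxlen=window)
    prev = None
    for frame in frames:
        if prev is not None and changed(prev, frame):
            q.clear()                  # posture changed: start a new stage
        q.append(frame)
        if len(q) == window:           # stage long enough: record it
            stages.append(list(q))
            q.clear()
        prev = frame
    return stages
```

The window length plays the role of the division duration constraint: a stage is only emitted once the posture has been held for a full window of frames.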
506. And acquiring the temperature values and the position information of all the pixel points in all the human body thermal image frames of the queue, and acquiring the average value of the temperature values of the pixel points with the same position information in all the human body thermal image frames.
When the queue is full, the statistics of one posture stage on the time axis are complete, and a representative posture thermal image needs to be determined for that stage. The temperature values and position information of all pixel points in all human thermal image frames of the queue can be obtained, and the average of the temperature values of pixel points sharing the same position across all frames computed.
507. And obtaining an average frame of all the human thermal image frames according to the obtained average value of the temperature values of the pixel points and the position information of the pixel points, and taking the average frame as a representative gesture thermal image corresponding to the gesture stage.
The average frame of all human thermal image frames is obtained from the averages of the temperature values of pixel points with the same position information across all frames; the temperature value of the pixel point at each position in the average frame is the average of the temperatures at the corresponding position in all frames.
The average frame for each gesture stage can be obtained in the above manner as a representative gesture thermal image for the corresponding gesture stage.
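The per-pixel averaging of steps 506-507 can be sketched in a few lines. This is a minimal illustration (the function name is hypothetical): stacking the stage's frames and taking the mean along the frame axis yields the average frame used as the representative posture thermal image.

```python
import numpy as np

def representative_frame(stage_frames):
    """Per-pixel mean over all thermal frames of one posture stage:
    each output pixel is the average temperature of that position
    across the whole stage."""
    stack = np.stack([np.asarray(f, dtype=float) for f in stage_frames])
    return stack.mean(axis=0)
```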
Alternatively, to avoid treating a merely instantaneous action between two adjacent frames as a maintained posture change, the first frame at the moment of the posture change may be discarded before the judgment is made. Specifically:
The average frame of all human thermal images in the queue is obtained as the sleeping posture thermal image sleep_posture. After one frame is deleted from the head of the queue, the conditions change_flag = True and pre_sleep_posture not equal to null (a previous posture exists) are checked; if both are met, a posture change is determined to have occurred. It should be noted that by deleting the first frame before making the judgment, a posture that was changed and then maintained can be identified, rather than a momentary action. change_flag is then set to False, and the sleeping posture change type recognition function func is called to determine the type of the kicking action. The history sleeping posture pre_sleep_posture is updated to sleep_posture, and the history frame pre_frame is updated to frame for continued processing.
If no posture change occurred, this indicates that a brief motion was detected but the final posture remained essentially unchanged, so no new posture recognition is required. In this case, the time period can also be merged with the previous time period.
508. And acquiring a first representative gesture thermal image corresponding to a first gesture stage and a second representative gesture thermal image corresponding to a second gesture stage in the time axis data, and determining whether or not a target motion occurs in the user from the first gesture stage to the second gesture stage based on the first representative gesture thermal image and the second representative gesture thermal image, wherein the second gesture stage is a stage next to the first gesture stage.
After the gesture stages are divided, gesture comparisons between different stages may be made using the obtained representative gesture thermal images to determine whether a particular action has occurred by the user.
Further, for example, in the sleep posture monitoring scenario described above, a quilt-kicking action recognition model may be invoked with pre_sleep_posture and sleep_posture passed in; that is, features (such as the perimeter, area, and bounding rectangle of the human body region) are extracted from the two sleeping postures before and after the action, the changes in the human body region parameters between the two are computed, and some simple threshold rule decisions are performed to determine the action type.
The step 508 may refer to the specific description of the step 106 in the embodiment shown in fig. 1, which is not repeated here.
Based on the description of the embodiment of the action type recognition method, the embodiment of the application also discloses an action type recognition device. Referring to fig. 6, the action type recognition apparatus 600 includes:
an acquisition module 610, configured to acquire a thermal image of a human body, where the thermal image of the human body includes at least a body of a user;
a feature extraction module 620, configured to determine a human body region in the human body thermal image, and extract feature information of the human body region from the human body thermal image;
a judging module 630, configured to determine a posture change condition of a user between two adjacent frames of human thermal images according to feature information of human body areas of the two adjacent frames of human thermal images;
a generating module 640, configured to generate time axis data according to a posture change situation of the user, where the time axis data includes multiple frames of human body thermal images, and the multiple frames of human body thermal images are divided into different posture stages according to the posture change situation;
an analysis module 650, configured to obtain the human thermal images included in each posture stage from the time axis data, and obtain the representative posture thermal image corresponding to each posture stage from those images, where one representative posture thermal image is used to represent the user's posture in one posture stage;
The analysis module 650 is further configured to obtain a first representative gesture thermal image corresponding to a first gesture stage and a second representative gesture thermal image corresponding to a second gesture stage in the timeline data, and determine, according to the first representative gesture thermal image and the second representative gesture thermal image, whether the user performs a target action in the first gesture stage to the second gesture stage, where the second gesture stage is a stage next to the first gesture stage.
Optionally, the judging module 630 is specifically configured to:
acquiring a temperature difference value of pixel points at the same position between two adjacent frames of human body thermal images in the human body thermal images;
obtaining the number of pixels with the temperature difference value not smaller than a temperature difference value threshold;
and if the number of the pixel points, of which the temperature difference value is not smaller than the temperature difference value threshold value, is larger than a number threshold value, determining that the gesture change occurs between the two adjacent frames of human body thermal images.
Optionally, the two adjacent frames of human thermal images include a history frame and a reference frame, and the history frame is the previous frame of the reference frame; the generating module 640 specifically is configured to:
adding the reference frame into a queue when the gesture change of the user occurs between the history frame and the reference frame, and taking the reference frame as the history frame and taking the next frame of the reference frame as the reference frame when the queue member is not full; executing the step of determining the posture change condition of the user between the two adjacent frames of human thermal images;
And when the queue member is full, newly adding the queue into one gesture stage of the time axis data.
Optionally, the analysis module 650 is specifically configured to:
acquiring temperature values and position information of all pixel points in all the human body thermal image frames of the queue, and acquiring an average value of the temperature values of the pixel points with the same position information in all the human body thermal image frames;
obtaining an average frame of all the human thermal image frames according to the obtained average value of the temperature values of the pixel points and the position information of the pixel points;
and taking the average frame as a representative gesture thermal image corresponding to the gesture stage.
Optionally, the analysis module 650 is specifically configured to:
acquiring human body region parameters corresponding to the first posture stage and the second posture stage respectively from the first representative posture thermal image and the second representative posture thermal image, wherein a human body region parameter reflects the size of the human body region in a representative posture thermal image and is at least one of the perimeter, the area, or the bounding rectangle of the human body region;
and comparing the human body area parameters respectively corresponding to the first gesture stage and the second gesture stage, and determining whether the user generates a target action in the first gesture stage to the second gesture stage.
Optionally, the acquiring module 610 is further configured to: before the thermal image of the human body is acquired, acquiring a thermal image to be processed, which is acquired by a thermal imaging sensor;
detecting whether the temperature value of the pixel point in the thermal image to be processed is higher than a temperature threshold value;
if the temperature value of a pixel point in the thermal image to be processed is higher than the temperature threshold, performing binarization processing on the thermal image to be processed, extracting a connected region in it, and acquiring characteristic parameters of the connected region, wherein the characteristic parameters reflect the shape characteristics of the connected region;
and determining whether the thermal image to be processed is a thermal image of a human body according to the characteristic parameters of the connected region.
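The preprocessing above (binarize by temperature, extract a connected region, test its shape parameters) can be sketched as follows. This is a hedged sketch under assumptions: the function name is hypothetical, the shape criteria are stood in for by a minimum-area check since the text does not specify them, and the connected-component search is a plain 4-neighbour flood fill.

```python
import numpy as np
from collections import deque

def is_body_frame(thermal, temp_thresh=28.0, min_area=30):
    """Binarize the frame by temperature, find the largest 4-connected
    warm region, and accept the frame only if that region is big enough
    (min_area stands in for the unspecified shape criteria)."""
    mask = np.asarray(thermal, dtype=float) > temp_thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    best = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                size, q = 0, deque([(sy, sx)])   # flood fill one region
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                best = max(best, size)
    return best >= min_area
```

In practice, the characteristic parameters of the connected region (perimeter, bounding rectangle, and so on) would also be computed here and checked against shape rules for a human body.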
Optionally, the feature extraction module 620 is specifically configured to:
acquiring a gradient value of each pixel point in the human thermal image, wherein the gradient value is a temperature difference value between the pixel point and the pixel points in the upper, lower, left and right directions;
and if the gradient value of at least one direction of the pixel points in the thermal image to be processed is larger than a gradient threshold value, determining the region where the pixel points with the gradient value of at least one direction larger than the gradient threshold value are located as the human body region.
According to an embodiment of the present application, each step involved in the methods shown in fig. 1 and 5 may be performed by each module in the action type recognition device 600 shown in fig. 6, which is not described herein.
The action type recognition apparatus 600 according to an embodiment of the present application may acquire a human thermal image including at least the user's body, determine the human body region in the image and extract its feature information, determine the user's posture change between two adjacent frames of human thermal images from the feature information of their human body regions, generate time axis data containing multiple frames of human thermal images divided into different posture stages according to the posture changes, obtain the human thermal images of each posture stage from the time axis data and derive a representative posture thermal image for each stage, and then obtain the first representative posture thermal image corresponding to a first posture stage and the second representative posture thermal image corresponding to a second posture stage in the time axis data and determine from them whether the user performed a target action from the first posture stage to the second posture stage, the second posture stage being the stage next to the first posture stage.
By analyzing the posture change condition of the detected subject through thermal images, time axis data divided into different posture stages is generated, and a representative posture thermal image is derived for each stage. The judgment is thus not made by a simple rule about changes of heat in the thermal image alone: because the time for which the subject maintains a posture is taken into account, interference from transient actions such as turning over is reduced, so the user's actions and state can be judged more accurately. This makes the method suitable for monitoring actions such as kicking off a quilt in a sleep scenario, and provides a precondition for the response processing of subsequent smart devices.
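As an informal sketch (not the patented implementation itself), the frame-to-frame posture-change test described above, which counts the pixels whose temperature changed by at least a difference threshold and compares that count to a number threshold, might look as follows in Python; both threshold values are illustrative assumptions:

```python
import numpy as np

def posture_changed(prev: np.ndarray, cur: np.ndarray,
                    temp_diff_threshold: float = 1.5,
                    count_threshold: int = 10) -> bool:
    """Compare two consecutive thermal frames of equal shape.

    A posture change is flagged when the number of pixels whose
    temperature changed by at least temp_diff_threshold is greater
    than count_threshold."""
    changed_pixels = np.abs(cur - prev) >= temp_diff_threshold
    return int(changed_pixels.sum()) > count_threshold
```

Frames between which this test fires can then be queued together and, once the queue fills, emitted as one posture stage of the time axis data.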
Based on the descriptions of the method embodiment and the apparatus embodiment above, an embodiment of the present application further provides an electronic device. Referring to fig. 7, the electronic device 700 includes at least a processor 701, an input device 702, an output device 703, and a computer storage medium 704, which may be connected in the terminal by a bus or in other ways.
The computer storage medium 704 may reside in the memory of the terminal and is configured to store a computer program comprising program instructions; the processor 701 is configured to execute the program instructions stored in the computer storage medium 704. The processor 701 (or CPU, Central Processing Unit) is the computing core and control core of the terminal; it is adapted to implement one or more instructions, and in particular to load and execute one or more instructions so as to realize the corresponding method flow or corresponding function. In one embodiment, the processor 701 described in the embodiments of the present application may be used to perform a series of processing, including the methods of the embodiments shown in fig. 1 and 5, and so on.
An embodiment of the application further provides a computer storage medium (memory), which is a memory device in the terminal used to store programs and data. It will be appreciated that the computer storage medium here may include both a storage medium built into the terminal and an extended storage medium supported by the terminal. The computer storage medium provides storage space in which the operating system of the terminal is stored. One or more instructions suitable for being loaded and executed by the processor 701, which may be one or more computer programs (including program code), are also stored in this space. The computer storage medium here may be a high-speed RAM memory or a non-volatile memory, such as at least one magnetic disk memory; optionally, it may also be at least one computer storage medium located remotely from the aforementioned processor.
In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by the processor 701 to implement the corresponding steps in the above embodiments; in particular, one or more instructions in the computer storage medium may be loaded by the processor 701 and perform any steps of the methods of fig. 1 and/or fig. 5, which are not described herein.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the division into modules is merely a division by logical function; in an actual implementation there may be other divisions, for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. The mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or of another form.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic medium (e.g., a floppy disk, hard disk, magnetic tape, or magnetic disk), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.

Claims (10)

1. A method of action type recognition, comprising:
acquiring a human thermal image, wherein the human thermal image at least comprises a body of a user;
determining a human body area in the human body thermal image, and extracting characteristic information of the human body area from the human body thermal image;
determining the posture change condition of a user between two adjacent frames of human body thermal images according to the characteristic information of the human body areas of the two adjacent frames of human body thermal images;
generating time axis data according to the posture change condition of the user, wherein the time axis data comprises a plurality of frames of human body thermal images, and the plurality of frames of human body thermal images are divided into different posture stages according to the posture change condition;
acquiring human body thermal images included in each gesture stage from the time axis data, and acquiring a representative gesture thermal image corresponding to each gesture stage according to the human body thermal images included in each gesture stage, wherein one representative gesture thermal image is used for representing the gesture of a user in one gesture stage;
and acquiring a first representative gesture thermal image corresponding to a first gesture stage and a second representative gesture thermal image corresponding to a second gesture stage in the time axis data, and determining, according to the first representative gesture thermal image and the second representative gesture thermal image, whether the user performs a target action from the first gesture stage to the second gesture stage, wherein the second gesture stage is the stage next to the first gesture stage.
2. The method for recognizing action type according to claim 1, wherein the determining a posture change condition of the user between the adjacent two frames of human thermal images based on the feature information of the human body region of the adjacent two frames of human thermal images comprises:
acquiring a temperature difference value of pixel points at the same position between two adjacent frames of human body thermal images in the human body thermal images;
obtaining the number of pixels with the temperature difference value not smaller than a temperature difference value threshold;
and if the number of the pixel points, of which the temperature difference value is not smaller than the temperature difference value threshold value, is larger than a number threshold value, determining that the gesture change occurs between the two adjacent frames of human body thermal images.
3. The action type recognition method according to claim 2, wherein the adjacent two frames of human thermal images include a history frame and a reference frame, the history frame being a frame preceding the reference frame;
the generating time axis data according to the gesture change condition of the user comprises the following steps:
adding the reference frame into a queue when a gesture change of the user occurs between the history frame and the reference frame; when the queue is not full, taking the reference frame as the history frame and the frame next to the reference frame as the reference frame, and returning to the step of determining the posture change condition of the user between the two adjacent frames of human thermal images;
and when the queue is full, adding the queue as one gesture stage of the time axis data.
4. The action type recognition method according to any one of claims 1 to 3, wherein the acquiring, from the time axis data, the human thermal images included in each gesture stage, and acquiring, according to the human thermal images included in each gesture stage, the representative gesture thermal image corresponding to each gesture stage, comprises:
acquiring temperature values and position information of all pixel points in all the human body thermal image frames of the queue, and acquiring an average value of the temperature values of the pixel points with the same position information in all the human body thermal image frames;
obtaining an average frame of all the human thermal image frames according to the obtained average value of the temperature values of the pixel points and the position information of the pixel points;
and taking the average frame as a representative gesture thermal image corresponding to the gesture stage.
5. The action type recognition method according to claim 4, wherein the determining, according to the first representative gesture thermal image and the second representative gesture thermal image, whether the user performs a target action from the first gesture stage to the second gesture stage comprises:
acquiring human body region parameters respectively corresponding to the first gesture stage and the second gesture stage according to the first representative gesture thermal image and the second representative gesture thermal image, wherein one human body region parameter reflects the size of the human body region in one representative gesture thermal image, and the human body region parameter is at least one of the perimeter, the area, or the circumscribed rectangle of the human body region;
and comparing the human body region parameters respectively corresponding to the first gesture stage and the second gesture stage, and determining whether the user performs a target action from the first gesture stage to the second gesture stage.
6. The action type recognition method according to any one of claims 1 to 3, wherein before the acquiring a human thermal image, the method further comprises:
acquiring a thermal image to be processed acquired by a thermal imaging sensor, and detecting whether the temperature value of a pixel point in the thermal image to be processed is higher than a temperature threshold value;
if the temperature value of a pixel point in the thermal image to be processed is higher than the temperature threshold value, performing binarization processing on the thermal image to be processed, extracting a connected region from the binarized thermal image to be processed, and acquiring characteristic parameters of the connected region, wherein the characteristic parameters reflect the shape characteristics of the connected region;
and determining whether the thermal image to be processed is the human thermal image according to the characteristic parameters of the connected region.
7. The action type recognition method of claim 1, wherein the determining a human body region in the human body thermal image comprises:
acquiring a gradient value of each pixel point in the human thermal image, wherein the gradient value is a temperature difference value between the pixel point and its neighbouring pixel points in the upper, lower, left, and right directions;
and if the gradient value in at least one direction of a pixel point in the human thermal image is larger than a gradient threshold value, determining the region where the pixel points whose gradient value in at least one direction exceeds the gradient threshold value are located as the human body region.
8. An action type recognition device, comprising:
the acquisition module is used for acquiring a human thermal image, wherein the human thermal image at least comprises a body of a user;
the feature extraction module is used for determining a human body region in the human body thermal image and extracting feature information of the human body region from the human body thermal image;
the judging module is used for determining the posture change condition of a user between two adjacent frames of human body thermal images according to the characteristic information of the human body areas of the two adjacent frames of human body thermal images;
The generating module is used for generating time axis data according to the posture change condition of the user, wherein the time axis data comprises a plurality of frames of human body thermal images, and the plurality of frames of human body thermal images are divided into different posture stages according to the posture change condition;
the analysis module is used for acquiring human body thermal images included in each gesture stage from the time axis data, acquiring a representative gesture thermal image corresponding to each gesture stage according to the human body thermal images included in each gesture stage, and one representative gesture thermal image is used for representing the gesture of a user in one gesture stage;
the analysis module is further configured to obtain a first representative gesture thermal image corresponding to a first gesture stage and a second representative gesture thermal image corresponding to a second gesture stage in the timeline data, and determine, according to the first representative gesture thermal image and the second representative gesture thermal image, whether a target action occurs to the user in the first gesture stage to the second gesture stage, where the second gesture stage is a stage next to the first gesture stage.
9. An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the action type recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored, which, when being executed by a processor, causes the processor to perform the steps of the action type recognition method according to any one of claims 1 to 7.
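As an informal sketch (not the patented implementation), the representative-posture averaging of claim 4 and the region-parameter comparison of claim 5 might look as follows in Python; the temperature threshold, the change ratio, and the use of a pixel-count area criterion are all illustrative assumptions:

```python
import numpy as np

def representative_posture(frames: list[np.ndarray]) -> np.ndarray:
    """Per-pixel mean over the thermal frames queued for one gesture
    stage (the 'average frame' of claim 4)."""
    return np.mean(np.stack(frames), axis=0)

def body_area(rep_frame: np.ndarray, temp_threshold: float = 25.0) -> int:
    """Human body region size, here approximated as the number of pixels
    warmer than temp_threshold (an illustrative area criterion; the
    claims also allow perimeter or circumscribed rectangle)."""
    return int((rep_frame > temp_threshold).sum())

def target_action_occurred(first_rep: np.ndarray, second_rep: np.ndarray,
                           temp_threshold: float = 25.0,
                           change_ratio: float = 0.3) -> bool:
    """Flag a target action (e.g. kicking off a quilt) when the body
    area changes between two consecutive stages by more than
    change_ratio of the earlier area."""
    a1 = body_area(first_rep, temp_threshold)
    a2 = body_area(second_rep, temp_threshold)
    return abs(a2 - a1) > change_ratio * max(a1, 1)
```

Averaging over a whole stage is what damps out transient motions such as turning over: a brief pose never accumulates enough frames to dominate a stage's representative image.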
CN202010824543.0A 2020-08-17 2020-08-17 Action type identification method and device, electronic equipment and medium Active CN112102360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010824543.0A CN112102360B (en) 2020-08-17 2020-08-17 Action type identification method and device, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN112102360A CN112102360A (en) 2020-12-18
CN112102360B true CN112102360B (en) 2023-12-12

Family

ID=73753823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010824543.0A Active CN112102360B (en) 2020-08-17 2020-08-17 Action type identification method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112102360B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113495493B (en) * 2021-07-30 2024-07-16 青岛海尔空调器有限总公司 Method and device for identifying height of human body, household appliance and readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2017216006A (en) * 2017-08-10 2017-12-07 パラマウントベッド株式会社 Watching support device
CN108229389A (en) * 2017-12-29 2018-06-29 努比亚技术有限公司 Facial image processing method, apparatus and computer readable storage medium
KR20190072175A (en) * 2017-12-15 2019-06-25 전자부품연구원 Apparatus for detecting human using thermo-graphic camera in dynamical environment and method thereof
CN111079613A (en) * 2019-12-09 2020-04-28 北京明略软件***有限公司 Gesture recognition method and apparatus, electronic device, and storage medium


Non-Patent Citations (2)

Title
Zhou Yiqiao; Xu Yulin. Real-time human posture recognition in complex environments based on bidirectional LSTM. Chinese Journal of Scientific Instrument, 2020, (03), full text. *
Zhang Wenli; Guo Xiang; Yang Kun; Wang Jiaqi; Zhu Qingyu. Design and implementation of a personnel information detection system for indoor environment control. Journal of Beijing University of Technology, 2020, (05), full text. *

Also Published As

Publication number Publication date
CN112102360A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN112101115B (en) Temperature control method and device based on thermal imaging, electronic equipment and medium
KR102345579B1 (en) Method, storage medium and apparatus for providing service associated with images
CN108010008B (en) Target tracking method and device and electronic equipment
KR102329862B1 (en) Method and electronic device for converting color of image
KR102545768B1 (en) Method and apparatus for processing metadata
US11257226B1 (en) Low-overhead motion classification
JP2011527789A (en) Apparatus and method for classifying movement of an object within a monitoring zone
CN110399908B (en) Event-based camera classification method and apparatus, storage medium, and electronic apparatus
CN109076159A (en) Electronic equipment and its operating method
CN106650666A (en) Method and device for detection in vivo
KR102437698B1 (en) Apparatus and method for encoding image thereof
CN110338803A (en) Object monitoring method and its arithmetic unit
CN111526342B (en) Image processing method, device, camera, terminal and storage medium
CN105468891A (en) Apparatus and method for supporting computer aided diagnosis
CN112690761B (en) Sleep state detection method, device, equipment and computer readable medium
CN112529149B (en) Data processing method and related device
CN116416416A (en) Training method of virtual fitting model, virtual fitting method and electronic equipment
CN112102360B (en) Action type identification method and device, electronic equipment and medium
Ma et al. Dynamic gesture contour feature extraction method using residual network transfer learning
CN111310531B (en) Image classification method, device, computer equipment and storage medium
CN114203300A (en) Health state evaluation method and system, server and storage medium
CN106355182A (en) Methods and devices for object detection and image processing
CN111797874B (en) Behavior prediction method and device, storage medium and electronic equipment
US10990859B2 (en) Method and system to allow object detection in visual images by trainable classifiers utilizing a computer-readable storage medium and processing unit
CN116482987B (en) Automatic induction method and device for realizing intelligent furniture based on user behaviors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant