CN117412440B - Lamp control method and device based on human body posture detection, illuminating lamp and medium

Lamp control method and device based on human body posture detection, illuminating lamp and medium

Info

Publication number
CN117412440B
CN117412440B (application CN202311343943.XA)
Authority
CN
China
Prior art keywords
point cloud
human body
model
feature extraction
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311343943.XA
Other languages
Chinese (zh)
Other versions
CN117412440A (en)
Inventor
刘运可
袁国枢
万顺
李佳倓
杨碧婉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Earda Technologies Co ltd
Original Assignee
Earda Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Earda Technologies Co ltd
Priority to CN202311343943.XA
Publication of CN117412440A
Application granted
Publication of CN117412440B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 - LIGHTING
    • F21V - FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
    • F21V23/00 - Arrangement of electric circuit elements in or on lighting devices
    • F21V23/04 - Arrangement of electric circuit elements in or on lighting devices the elements being switches
    • F21V23/0442 - Arrangement of electric circuit elements in or on lighting devices the elements being switches activated by means of a sensor, e.g. motion or photodetectors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/09 - Supervised learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 - Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 - Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/105 - Controlling the light source in response to determined parameters
    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/105 - Controlling the light source in response to determined parameters
    • H05B47/115 - Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/165 - Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/175 - Controlling the light source by remote control

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lamp control method and device based on human body posture detection, an illuminating lamp, and a medium. The method comprises: acquiring continuous multi-frame point cloud images collected by a millimeter wave radar for a preset area; respectively inputting the point cloud images into a human body posture detection model, which performs multi-scale convolution operations on each point cloud image using convolution kernels that include kernels whose width is greater than their height, fuses the feature maps obtained by the convolution operations, and decodes the fused features to obtain the probabilities of the human body posture classifications in the point cloud image; determining, according to the probabilities, consecutive target point cloud images in which the human body posture is classified as a prone position; determining, according to the target point cloud images, a confidence that the human body is in a sleep state; and determining that the human body is in a sleep state when the confidence is greater than a threshold.

Description

Lamp control method and device based on human body posture detection, illuminating lamp and medium
Technical Field
The invention relates to the technical field of smart home, and in particular to a lamp control method and device based on human body posture detection, an illuminating lamp, and a medium.
Background
With the development of smart home technology, the brightness of an illumination lamp can be adjusted automatically, and the lamp turned on and off, by detecting the posture of a user's body, improving the degree of intelligence.
In one application scenario, a user may be judged to be in a sleep state when the human body remains in a prone position, and the brightness of the illumination lamp is then reduced or the lamp is turned off. In the prior art, images are mainly collected by a camera, and image analysis determines that the human body has been in a prone position for a long time. However, collecting images with a camera easily leaks user privacy, and the accuracy of detecting a prone human body from camera images is low under the influence of light and environment. If sleep were instead detected directly from features such as breathing and heartbeat, a high-precision millimeter wave radar would be needed, at high cost.
Disclosure of Invention
The invention provides a lamp control method and device based on human body posture detection, an illuminating lamp, and a medium, which solve the prior-art problems of poor privacy and low accuracy when lamps are controlled by detecting human body posture from camera images.
In a first aspect, the present invention provides a method for controlling a light fixture based on human posture detection, including:
acquiring continuous multi-frame point cloud images acquired by a millimeter wave radar for a preset area;
respectively inputting the multi-frame point cloud images into a pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on each point cloud image, fusing the feature maps obtained by the convolution operations, and decoding the fused features to obtain the probabilities of the human body posture classifications in the point cloud image, wherein the convolution kernels include kernels whose width is greater than their height;
determining, according to the probabilities, consecutive target point cloud images in which the human body posture is classified as a prone position;
determining, according to the multi-frame target point cloud images, a confidence that the human body is in a sleep state;
and when the confidence coefficient is larger than a preset threshold value, determining that the human body is in a sleep state, and reducing the brightness of the lamp in the preset area.
In a second aspect, the present invention provides a light fixture control device based on human posture detection, including:
the point cloud image acquisition module is used for acquiring continuous multi-frame point cloud images acquired by the millimeter wave radar for a preset area;
the human body posture classification prediction module is used for respectively inputting the multi-frame point cloud images into a pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on each point cloud image, fusing the feature maps obtained by the convolution operations, and decoding the fused features to obtain the probabilities of the human body posture classifications in the point cloud image, wherein the convolution kernels include kernels whose width is greater than their height;
The target point cloud image determining module is used for determining continuous target point cloud images classified into prone positions by human body gestures according to the probability;
The confidence coefficient calculating module is used for determining the confidence coefficient of the human body in a sleep state according to the multi-frame target point cloud image;
And the lamp brightness adjusting module is used for determining that the human body is in a sleep state when the confidence coefficient is larger than a preset threshold value, and reducing the brightness of the lamp in the preset area.
In a third aspect, the present invention provides an illumination lamp comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor, to enable the at least one processor to perform the lamp control method based on human body posture detection according to the first aspect of the present invention.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions for causing a processor, when the instructions are executed, to implement the lamp control method based on human body posture detection according to the first aspect of the present invention.
According to the embodiments of the invention, continuous multi-frame point cloud images collected for a preset area are acquired through a millimeter wave radar; in a human body posture detection model, multi-scale convolution operations, using convolution kernels that include kernels whose width is greater than their height, are performed on the point cloud images; the feature maps obtained by the convolution operations are fused, and the fused features are decoded to obtain the probabilities of the human body posture classifications in the point cloud images; consecutive target point cloud images in which the human body posture is classified as a prone position are determined according to the probabilities; a confidence that the human body is in a sleep state is determined according to the multi-frame target point cloud images; and when the confidence is greater than a preset threshold, the human body is determined to be in a sleep state and the brightness of the lamp in the preset area is reduced. On the one hand, because the point cloud images are collected by a millimeter wave radar, no privacy is leaked and the detection is not affected by lighting conditions. On the other hand, extracting features with convolution kernels whose width is greater than their height matches the characteristic that, in a point cloud image, the width of a human body in a prone position is far greater than its height, so more prone-position features are extracted and the accuracy of recognizing a prone posture in the point cloud images is improved. Finally, since the sleep state is determined from consecutive target point cloud images in which the posture is classified as prone, the reliability of determining that the human body is asleep is improved, and the lamp can be adjusted accurately according to the user's state.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a lamp control method based on human body posture detection according to Example 1 of the present invention;
Fig. 2A is a flowchart of a lamp control method based on human body posture detection according to Example 2 of the present invention;
Fig. 2B is a schematic diagram of a scenario of lamp control based on human body posture detection in one example;
Fig. 2C is a schematic diagram of a human body posture detection model;
Fig. 2D is a schematic diagram of multi-scale convolution kernels in this embodiment;
Fig. 2E is a schematic view of the width and height of a human body in a prone position;
Fig. 3 is a schematic structural diagram of a lamp control device based on human body posture detection according to Example 3 of the present invention;
Fig. 4 is a schematic structural diagram of an illumination lamp according to Example 4 of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
Example 1
Fig. 1 is a flowchart of a lamp control method based on human body posture detection according to Example 1 of the present invention. The method may be performed by a lamp control device based on human body posture detection, which may be implemented in hardware and/or software and configured in an illumination lamp. As shown in Fig. 1, the lamp control method based on human body posture detection includes:
S101, acquiring continuous multi-frame point cloud images acquired by the millimeter wave radar for a preset area.
In this embodiment, the preset area may refer to an area for sleeping in a living place of the user, for example, an area occupied by a bed in a bedroom, and the millimeter wave radar may be integrated on a main board of the lighting lamp, may be integrated in the intelligent switch, or may be separately provided outside the lighting lamp and the intelligent switch. The millimeter wave radar may transmit signals to a preset area according to a preset period or continuously, the transmitted signals are received by the millimeter wave radar after being reflected, the millimeter wave radar generates a point cloud through the transmitted signals and the received signals, and a point cloud image is generated based on the point cloud.
In one embodiment, when it is detected that a human body exists in the preset area, the millimeter wave radar is controlled to collect multiple frames of original point clouds for the preset area at a preset frequency; the original point clouds are preprocessed to obtain target point clouds, and a point cloud image of the preset area is generated based on the target point clouds.
Specifically, whether a human body exists in the preset area may be detected by an infrared sensor; only if one is detected is the millimeter wave radar started to collect original point clouds for the preset area, which prevents the radar from staying on for long periods and reduces its power consumption. The point cloud image may be generated by prior-art methods such as generating a three-dimensional point cloud image or a polar view; this embodiment does not limit the manner in which the point cloud image is generated.
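As an illustration only, the following Python sketch shows one plausible way to rasterize a frame of radar detections into a 2-D point cloud image; the grid resolution, region bounds, and density normalization are assumptions of this sketch, not details given by this embodiment.

    import numpy as np

    def point_cloud_to_image(points, x_range=(-2.0, 2.0), z_range=(0.0, 2.0),
                             shape=(64, 64)):
        """points: (N, 3) array of (x, y, z) radar detections in meters."""
        img = np.zeros(shape, dtype=np.float32)
        h, w = shape
        for x, _, z in points:
            # Keep only detections inside the preset (bed) region.
            if not (x_range[0] <= x < x_range[1] and z_range[0] <= z < z_range[1]):
                continue
            col = int((x - x_range[0]) / (x_range[1] - x_range[0]) * w)
            row = int((z_range[1] - z) / (z_range[1] - z_range[0]) * h)  # z up maps to row 0
            img[min(row, h - 1), min(col, w - 1)] += 1.0  # accumulate point density
        return img / max(float(img.max()), 1e-6)          # normalize to [0, 1]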
S102, respectively inputting the multi-frame point cloud images into a pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on each point cloud image, fusing the feature maps obtained by the convolution operations, and decoding the fused features to obtain the probabilities of the human body posture classifications in the point cloud image, wherein the convolution kernels include kernels whose width is greater than their height.
In this embodiment, the human body posture detection model is a model that, given an input point cloud image, outputs the probabilities that the human body in the image belongs to various postures, where the postures may include standing, squatting, prone, jumping, and the like. The human body posture detection model may include at least one feature extraction sub-model, a feature fusion sub-model, and a decoding sub-model. A feature extraction sub-model performs convolution operations with kernels whose width is greater than their height, so as to extract more features in the width direction of the point cloud image; the feature fusion sub-model fuses the features extracted by the at least one feature extraction sub-model and inputs the fused features to the decoding sub-model; and the decoding sub-model decodes the fused features to predict the human body posture classifications and the probability of each classification.
S103, determining continuous target point cloud images of human body gestures classified as prone positions according to the probability.
In this embodiment, the greater the probability of a posture classification, the more likely it is that the human body in the point cloud image is in that posture. From the multi-frame point cloud images, those classified as a prone position can therefore be determined as candidate point cloud images, from which the consecutive target point cloud images are selected.
S104, determining the confidence that the human body is in a sleep state according to the multi-frame target point cloud image.
The confidence indicates how credible it is that the human body is in a sleep state. In one embodiment, the similarity of every two adjacent frames of target point cloud images may be calculated, and the average value of the resulting similarities is taken as the confidence. The larger the similarity average value, the more similar adjacent target point cloud images are, the less the human body's posture changes across the multi-frame target point cloud images, and the more likely the human body is in a sleep state.
In another embodiment, after the similarity average value is calculated, the number of target point cloud images may also be counted, and the confidence is calculated from the similarity average value and the image count, with the confidence positively correlated with both. That is, the larger the similarity average value and the larger the number of consecutive target point cloud images in which the human body is in a prone position, the longer the human body has remained prone with little change of posture, and the more likely it is in a sleep state.
And S105, when the confidence coefficient is larger than a preset threshold value, determining that the human body is in a sleep state, and reducing the brightness of the lamp in the preset area.
In one embodiment, when the confidence is greater than the threshold, it is determined that the human body is in a sleep state, and the brightness of the lamp in the preset area may be reduced, for example by lowering the current or voltage of the lamp; the lamp may even be turned off.
According to the embodiments of the invention, continuous multi-frame point cloud images collected for a preset area are acquired through a millimeter wave radar; in a human body posture detection model, multi-scale convolution operations, using convolution kernels that include kernels whose width is greater than their height, are performed on the point cloud images; the feature maps obtained by the convolution operations are fused, and the fused features are decoded to obtain the probabilities of the human body posture classifications in the point cloud images; consecutive target point cloud images in which the human body posture is classified as a prone position are determined according to the probabilities; a confidence that the human body is in a sleep state is determined according to the multi-frame target point cloud images; and when the confidence is greater than a preset threshold, the human body is determined to be in a sleep state and the brightness of the lamp in the preset area is reduced. On the one hand, because the point cloud images are collected by a millimeter wave radar, no privacy is leaked and the detection is not affected by lighting conditions. On the other hand, extracting features with convolution kernels whose width is greater than their height matches the characteristic that, in a point cloud image, the width of a human body in a prone position is far greater than its height, so more prone-position features are extracted and the accuracy of recognizing a prone posture in the point cloud images is improved. Finally, since the sleep state is determined from consecutive target point cloud images in which the posture is classified as prone, the reliability of determining that the human body is asleep is improved, and the lamp can be adjusted accurately according to the user's state.
Example 2
Fig. 2A is a flowchart of a lamp control method based on human body posture detection according to Example 2 of the present invention, which is optimized on the basis of Example 1. As shown in Fig. 2A, the lamp control method based on human body posture detection includes:
s201, acquiring continuous multi-frame point cloud images acquired by the millimeter wave radar for a preset area.
The preset area may be the user's sleeping area, for example the area where a bed is located. When a sensor such as an infrared sensor detects that a human body exists in the preset area, continuous multi-frame point cloud images are collected for the preset area by the millimeter wave radar.
As shown in Fig. 2B, the preset area contains articles including a bed 1, a human body 2 on the bed 1, a sheet 3, and the like; a point cloud image of the preset area can be collected by a millimeter wave radar 4 in the lamp.
S202, respectively inputting the point cloud image into a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model and a fourth feature extraction sub-model.
As shown in Fig. 2C, the human body posture detection model of this embodiment includes a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model, a fourth feature extraction sub-model, a feature fusion sub-model, and a decoding sub-model. When a point cloud image is input, the four feature extraction sub-models each extract a feature map from it; the feature fusion sub-model fuses the feature maps and outputs the fused features to the decoding sub-model; and the decoding sub-model decodes the fused features to predict the human body posture classifications and the probability of each classification.
In one embodiment, the human body posture detection model may be trained as follows. First, training point cloud images are acquired, each annotated with a first detection frame of a human body in a prone position and a first human body posture classification. The human body posture detection model is then constructed, and a training point cloud image is input into it to obtain a second detection frame and a second human body posture classification. The model is updated using the first detection frame, the second detection frame, the first human body posture classification, and the second human body posture classification, and it is then judged whether a preset training condition is met: if so, the model has completed training; if not, the next training point cloud image is input into the model to obtain a new second detection frame and second human body posture classification, as in the sketch below.
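For illustration only, one supervised update consistent with this procedure might look as follows in Python (PyTorch); the optimizer, the model's output convention (one detection frame plus posture class scores), and the loss callable are assumptions of this sketch, with the loss formula itself given below.

    import torch

    def train_step(model, optimizer, loss_fn, cloud_img, frame_true, class_true):
        # Predict the second detection frame and second classification, compare them
        # with the labeled first frame and first classification, then update weights.
        optimizer.zero_grad()
        frame_pred, class_pred = model(cloud_img)
        loss = loss_fn(class_pred, class_true, frame_pred, frame_true)
        loss.backward()
        optimizer.step()
        return loss.item()  # the caller checks the preset training condition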
As shown in Fig. 2C, the human body posture detection model includes a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model, a fourth feature extraction sub-model, and a feature fusion sub-model. The first feature extraction sub-model applies a convolution kernel of scale n1×n1 to the point cloud image to extract a first feature map. The second feature extraction sub-model applies a convolution kernel of scale n2×n3, where the kernel width n3 is greater than the kernel height n2, to extract a second feature map. The third feature extraction sub-model applies a hole (dilated) convolution kernel of scale N1×N1 to extract a third feature map, and the fourth feature extraction sub-model applies a hole convolution kernel of scale N2×N3, where the kernel width N3 is greater than the kernel height N2, to extract a fourth feature map.
Fig. 2D is a schematic diagram of the convolution kernels in the four feature extraction sub-models. In this example, the convolution kernel in the first feature extraction sub-model has a scale of 3×3; the convolution kernel in the second feature extraction sub-model has a scale of 4×9, i.e., its width is greater than its height; the hole convolution kernel in the third feature extraction sub-model has a scale of 5×5; and the hole convolution kernel in the fourth feature extraction sub-model has a scale of 5×9, i.e., its width is greater than its height. Kernels whose width is greater than their height extract more features in the width direction of the point cloud image.
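As an illustration of these kernels, a minimal PyTorch sketch of the four parallel branches with the example scales of Fig. 2D (3×3, 4×9, 5×5 hole, 5×9 hole) follows; the channel counts, padding, and the trim that aligns output sizes are assumptions of this sketch, not part of the patented architecture.

    import torch
    import torch.nn as nn

    class MultiScaleBranches(nn.Module):
        def __init__(self, in_ch=1, ch=16):
            super().__init__()
            # kernel_size is (height, width): width exceeds height in branches 2 and 4.
            self.b1 = nn.Conv2d(in_ch, ch, kernel_size=3, padding=1)                        # 3x3
            self.b2 = nn.Conv2d(in_ch, ch, kernel_size=(4, 9), padding=(2, 4))              # 4x9, wide
            self.b3 = nn.Conv2d(in_ch, ch, kernel_size=5, padding=4, dilation=2)            # 5x5 hole
            self.b4 = nn.Conv2d(in_ch, ch, kernel_size=(5, 9), padding=(4, 8), dilation=2)  # 5x9 hole, wide

        def forward(self, x):
            h, w = x.shape[-2:]
            feats = [self.b1(x), self.b2(x), self.b3(x), self.b4(x)]
            # The even kernel height of the 4x9 branch overshoots by one row; trim all
            # branches to the input size so the maps can be fused element-wise later.
            return [f[..., :h, :w] for f in feats]

    x = torch.randn(1, 1, 64, 64)             # one single-channel point cloud image
    f1, f2, f3, f4 = MultiScaleBranches()(x)  # each map: (1, 16, 64, 64)

With equal spatial sizes, the four feature maps can then be fused by the weighted sum described under S207 below.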
As shown in Fig. 2E, when the human body is in a prone position, the width W of the human body in the point cloud image is far greater than its height H. Extracting features of the point cloud image with convolution kernels whose width is greater than their height therefore captures the features of a human body in a prone position more easily and effectively.
In one embodiment, the loss value loss may be calculated by the following formula:
loss = (class_1 - class_2)² + (1 - IOU²) + λ(w1 - w2)²
Wherein class_1 is the labeled first human body posture classification, class_2 is the predicted second human body posture classification, IOU is the intersection-over-union of the first detection frame and the second detection frame, w1 is the width of the labeled first detection frame, w2 is the width of the predicted second detection frame, and λ is a constant coefficient.
Through this loss formula, model training is supervised not only by the differences in human body posture classification and detection frame, but also by the difference in detection frame width, so the model pays more attention to features of the point cloud image in the width direction, i.e., it is better suited to predicting the posture of a human body in a prone position.
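Rendered literally in Python, the loss might look as follows; representing detection frames as (x1, y1, x2, y2) boxes, averaging the squared class-score differences, and the value of λ are assumptions of this sketch.

    import torch

    def box_iou(a, b):
        # a, b: (4,) tensors in (x1, y1, x2, y2) form.
        lt = torch.maximum(a[:2], b[:2])
        rb = torch.minimum(a[2:], b[2:])
        wh = (rb - lt).clamp(min=0)
        inter = wh[0] * wh[1]
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def posture_loss(class_pred, class_true, frame_pred, frame_true, lam=0.5):
        iou = box_iou(frame_pred, frame_true)
        w_pred = frame_pred[2] - frame_pred[0]              # w2: predicted frame width
        w_true = frame_true[2] - frame_true[0]              # w1: labeled frame width
        cls_term = ((class_true - class_pred) ** 2).mean()  # (class_1 - class_2)^2
        return cls_term + (1 - iou ** 2) + lam * (w_true - w_pred) ** 2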
S203, performing convolution operation on the point cloud image in the first feature extraction submodel through convolution kernel of n1 multiplied by n1 to obtain a first feature map.
Specifically, after the point cloud image is input into the first feature extraction sub-model, the first feature extraction sub-model may perform convolution operation on the input point cloud image by adopting a convolution kernel of n1×n1 in each convolution layer to obtain a first feature map, where the convolution kernel of n1×n1 may be set according to practical situations, for example, may be 3×3, 5×5, and the like.
S204, carrying out convolution operation on the point cloud image through the convolution kernel of n2 multiplied by n3 in the second feature extraction submodel to obtain a second feature map, wherein n3 is the width of the convolution kernel, and n2 is the height of the convolution kernel.
Specifically, after the point cloud image is input into the second feature extraction sub-model, the second feature extraction sub-model may perform a convolution operation on the input point cloud image with an n2×n3 convolution kernel in each convolution layer to obtain a second feature map, where n3 is the width of the kernel and n2 is its height; the kernel scale may be set according to the actual situation, for example 3×5, 5×9, and so on. Since the width of the convolution kernel is greater than its height, the second feature map includes more information in the width direction.
S205, performing convolution operation on the point cloud image in the third feature extraction submodel through the hole convolution kernel of N1×N1 to obtain a third feature map.
After the point cloud image is input into the third feature extraction sub-model, the third feature extraction sub-model may perform a convolution operation on the input point cloud image with an N1×N1 hole convolution kernel in each convolution layer to obtain a third feature map, where the kernel scale may be set according to the actual situation, for example 3×3, 5×5, and so on. Since the kernel is a hole (dilated) convolution kernel, the third feature map includes information over a larger receptive field.
S206, carrying out convolution operation on the point cloud image through the hole convolution kernel of N2×N3 in the fourth feature extraction submodel to obtain a fourth feature map, wherein N3 is the width of the convolution kernel, and N2 is the height of the convolution kernel.
After the point cloud image is input into the fourth feature extraction sub-model, the fourth feature extraction sub-model may perform a convolution operation on the input point cloud image with an N2×N3 hole convolution kernel in each convolution layer to obtain a fourth feature map, where N3 is the width of the kernel and N2 is its height; the kernel scale may be set according to the actual situation, for example 3×5, 5×9, and so on. Because the kernel's width is greater than its height and hole convolution is employed, the fourth feature map includes information in the width direction over a larger receptive field.
S207, fusing the first feature map, the second feature map, the third feature map and the fourth feature map according to a preset weight in the feature fusion submodel to obtain fusion features, wherein the weight is positively correlated with the ratio of the width to the height of the convolution kernel.
In one embodiment, the feature fusion sub-model may determine the fusion weights according to the width-to-height ratios of the convolution kernels used to extract the first, second, third, and fourth feature maps. That is, the width-to-height ratios R1, R2, R3, and R4 of the convolution kernels in the first, second, third, and fourth feature extraction sub-models are calculated and normalized to obtain the weights w1, w2, w3, and w4 of the first, second, third, and fourth feature maps, respectively.
In another embodiment, the ratio of each of R1, R2, R3, and R4 to a preset value may instead be calculated to obtain weights w1, w2, w3, w4 smaller than 1. Other ways of determining the weights may also be used, as long as the weight is positively correlated with the width-to-height ratio of the convolution kernel: the larger the ratio, the larger the fusion weight of the feature map produced by that kernel, so that the fused features contain more information in the width direction.
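For illustration, the following sketch derives the weights by normalizing the width-to-height ratios of the example kernels of Fig. 2D and fuses the four maps by a weighted sum; normalizing the ratios to sum to one and fusing by summation are assumptions of this sketch.

    import torch

    def fuse(feature_maps, kernel_sizes=((3, 3), (4, 9), (5, 5), (5, 9))):
        # R1..R4: width / height of each branch's kernel; wider kernels weigh more.
        ratios = torch.tensor([w / h for h, w in kernel_sizes])
        weights = ratios / ratios.sum()   # w1..w4, normalized to sum to 1
        return sum(wt * f for wt, f in zip(weights, feature_maps))

    fused = fuse([f1, f2, f3, f4])        # maps from the branch sketch above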
S208, decoding the fusion features in the decoding sub-model to obtain the human body posture classification in the point cloud image.
The decoding sub-model may up-sample the fused features, perform human body detection on the up-sampled features, and output a human body detection frame, the probabilities that the human body belongs to the various posture classifications, and the like.
S209, determining continuous target point cloud images classified into prone positions by the human body gestures according to the probability.
In one embodiment, point cloud images whose probability of the prone-position classification is greater than a preset probability threshold are determined from the multi-frame point cloud images as candidate point cloud images, and consecutive point cloud images are determined from the candidates as the target point cloud images.
For example, point cloud images whose prone-position probability is greater than 0.75 may be determined from the multi-frame point cloud images as candidate point cloud images, and consecutive point cloud images among these candidates are then taken as the target point cloud images, as in the sketch below.
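As a sketch of this selection step, the code below keeps frames whose prone-position probability exceeds the threshold and returns the longest consecutive run; taking the longest run (rather than, say, the most recent one) is an assumption of this sketch.

    def longest_prone_run(prone_probs, threshold=0.75):
        # prone_probs: per-frame probabilities of the prone-position classification.
        best, cur = [], []
        for idx, p in enumerate(prone_probs):
            if p > threshold:
                cur.append(idx)
                if len(cur) > len(best):
                    best = list(cur)
            else:
                cur = []
        return best  # indices of the target point cloud images

    # e.g. longest_prone_run([0.9, 0.8, 0.3, 0.85, 0.9, 0.95]) -> [3, 4, 5]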
S210, calculating the similarity of two adjacent frames of target point cloud images in the multi-frame target point cloud images, and calculating the average value of a plurality of similarities to obtain a similarity average value.
In an actual detection scene, the background of the preset area is usually fixed, so differences between two adjacent frames of point cloud images arise when the human body moves. The similarity of every two adjacent frames among the multi-frame target point cloud images can therefore be calculated to obtain a plurality of similarities, for example cosine similarities or norm-based distances, and the average of these similarities is then computed to obtain the similarity average value.
For example, if the multi-frame target point cloud images include the consecutive point cloud images P13, P14, P15, P16, and P17, the similarity between P13 and P14, between P14 and P15, and so on may be calculated, and the average of the resulting similarities gives the similarity average value. The higher the similarity average value, the smaller the change of posture while the human body was in a prone position during acquisition of the multi-frame target point cloud images, and the more likely the human body is in a sleep state.
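A minimal sketch of this step, using cosine similarity between flattened adjacent frames (this embodiment equally allows norm-based distances):

    import numpy as np

    def similarity_average(frames):
        # frames: consecutive 2-D target point cloud images of identical shape.
        sims = []
        for a, b in zip(frames, frames[1:]):
            va, vb = a.ravel(), b.ravel()
            sims.append(float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)))
        return float(np.mean(sims))  # Sim_ave over adjacent-frame similarities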
S211, determining the image quantity of the target point cloud images.
Because the target point cloud images form a continuous multi-frame sequence, the number of images contained in the sequence can be counted to obtain the image count. In the example above, the multi-frame target point cloud images include the consecutive point cloud images P13, P14, P15, P16, and P17, so the image count is 5 frames.
S212, calculating the confidence coefficient C of classifying the human body posture into the prone position through a preset formula.
C = Sim_ave × (1 / (1 + e^(-i)))
Wherein Sim_ave is the similarity average value, and i is the image count.
As the calculation formula of the confidence C shows, the larger the similarity average value Sim_ave of the multi-frame target point cloud images, the larger the confidence C, indicating that the posture changed little while the human body was in a prone position; and the larger the image count, the larger C, i.e., the more consecutive prone-position target point cloud images there are, the longer the prone position has lasted. Because the confidence is calculated from both the similarity average value and the image count, the credibility of the prone state is measured from both the degree of posture change and its duration, so determining from it that the human body is in a sleep state is more reliable.
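Putting S210 through S213 together, a minimal sketch of the confidence and the dimming decision follows; the threshold value of 0.8 is an assumed example, not a value given by this embodiment.

    import math

    def sleep_confidence(sim_ave, num_images):
        # C = Sim_ave * (1 / (1 + e^(-i))), with i the image count.
        return sim_ave * (1.0 / (1.0 + math.exp(-num_images)))

    def should_dim(sim_ave, num_images, threshold=0.8):
        return sleep_confidence(sim_ave, num_images) > threshold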
And S213, when the confidence coefficient is larger than a preset threshold value, determining that the human body is in a sleep state, and reducing the brightness of the lamp in the preset area.
In one embodiment, when the confidence is greater than the threshold, the credibility that the human body in the prone position is in a sleep state is high, so it is determined that the human body is in a sleep state; the brightness of the lamp in the preset area may then be reduced, for example by lowering the current or voltage of the lamp, and the lamp may even be turned off.
The human body posture detection model of this embodiment includes first, second, third, and fourth feature extraction sub-models: the first performs convolution with an n1×n1 kernel, the second with an n2×n3 kernel, the third with an N1×N1 hole convolution kernel, and the fourth with an N2×N3 hole convolution kernel, realizing convolution with multi-scale kernels that include kernels whose width is greater than their height. The fusion weight of each feature map is then determined from the width-to-height ratio of its kernel, and the fused features are decoded to obtain the human body posture classification in the point cloud image. On the one hand, the point cloud images are collected by a millimeter wave radar, so there is no privacy leakage, no influence from lighting, and a wide range of applicable environments. On the other hand, extracting features with multi-scale kernels whose width is greater than their height matches the characteristic that the width of a prone human body in a point cloud image is far greater than its height, so more prone-position features are extracted and the accuracy of recognizing a prone posture in the point cloud image is improved. Finally, the confidence is calculated from consecutive target point cloud images classified as prone, and the human body is determined to be in a sleep state when the confidence exceeds the threshold, which improves the reliability of that determination, so the lamp can be adjusted accurately according to the user's state.
Example 3
Fig. 3 is a schematic structural diagram of a lighting device control device based on human posture detection according to a third embodiment of the present invention. As shown in fig. 3, the lamp control device based on human posture detection includes:
The point cloud image acquisition module 301 is configured to acquire continuous multi-frame point cloud images acquired by the millimeter wave radar on a preset area;
The human body posture classification prediction module 302 is configured to input multi-frame point cloud images into a pre-trained human body posture detection model, perform convolution operation of a multi-scale convolution kernel on the point cloud images, fuse a plurality of feature images obtained by the convolution operation, and decode the fused features to obtain probability of classifying human body postures in the point cloud images, where the convolution kernel includes a convolution kernel with a width greater than a height;
The target point cloud image determining module 303 is configured to determine continuous target point cloud images classified as prone positions according to the probability;
The confidence coefficient calculating module 304 is configured to determine a confidence coefficient that the human body is in a sleep state according to the multi-frame target point cloud image;
the lamp brightness adjustment module 305 is configured to determine that the human body is in a sleep state and reduce the brightness of the lamp in the preset area when the confidence coefficient is greater than the preset threshold.
Optionally, the point cloud image acquisition module 301 includes:
The original point cloud acquisition unit is used for controlling the millimeter wave radar to acquire multi-frame original point clouds for the preset area according to the preset frequency when detecting that the human body exists in the preset area;
The point cloud preprocessing unit is used for preprocessing the original point cloud to obtain a target point cloud;
And the point cloud image generation unit is used for generating a point cloud image of the preset area based on the target point cloud.
Optionally, the human body posture detection model comprises a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model, a fourth feature extraction sub-model, a feature fusion sub-model and a decoding sub-model;
the human posture classification prediction module 302 includes:
The point cloud image input unit is used for inputting the point cloud image into the first feature extraction submodel, the second feature extraction submodel, the third feature extraction submodel and the fourth feature extraction submodel respectively;
The first feature extraction unit is used for carrying out convolution operation on the point cloud image through convolution kernel of n1 multiplied by n1 in the first feature extraction sub-model to obtain a first feature map;
The second feature extraction unit is used for carrying out convolution operation on the point cloud image through convolution kernels of n2 multiplied by n3 in a second feature extraction sub-model to obtain a second feature image, wherein n3 is the width of the convolution kernels, and n2 is the height of the convolution kernels;
The third feature extraction unit is used for carrying out convolution operation on the point cloud image through the hole convolution kernel of N1×N1 in the third feature extraction sub-model to obtain a third feature map;
The fourth feature extraction unit is used for carrying out convolution operation on the point cloud image through the hole convolution kernel of N2 multiplied by N3 in the fourth feature extraction sub-model to obtain a fourth feature image, wherein N3 is the width of the convolution kernel, and N2 is the height of the convolution kernel;
The feature fusion unit is used for fusing the first feature map, the second feature map, the third feature map and the fourth feature map according to a preset weight in the feature fusion submodel to obtain fusion features, wherein the ratio of the weight to the width to the height of the convolution kernel is positively correlated;
And the classification prediction unit is used for decoding the fusion characteristics in the decoding sub-model to obtain the human body posture classification in the point cloud image.
Optionally, the human posture detection model is trained by the following modules:
the training data acquisition module is used for acquiring training point cloud images, wherein the training point cloud images are marked with a first detection frame of a prone position human body and a first human body gesture classification;
The model building module is used for building a human body posture detection model;
the prediction module is used for inputting the training point cloud image into the human body posture detection model to obtain a second detection frame and second human body posture classification;
The model updating module is used for updating the human body posture detection model by adopting the first detection frame, the second detection frame, the first human body posture classification and the second human body posture classification;
The training condition judging module is used for judging whether the preset training condition is met, if yes, executing the training completion determining module, and if not, returning to the predicting module.
And the training completion determining module is used for determining that the human body posture detection model completes training.
Optionally, the model updating module includes:
A loss value calculation unit for calculating a loss value loss by the following formula:
loss = (class_1 - class_2)² + (1 - IOU²) + λ(w1 - w2)²
wherein class_1 is a first human body posture classification, class_2 is a second human body posture classification, IOU is an intersection ratio of the first detection frame and the second detection frame, w1 is a width of the first detection frame, and w2 is a width of the second detection frame;
And the model parameter updating module is used for adjusting the model parameters of the human body posture detection model according to the loss value.
Optionally, the target point cloud image determining module 303 includes:
the candidate point cloud image determining unit is used for determining point cloud images whose probability of the prone-position classification is greater than a preset probability threshold as candidate point cloud images;
And the continuous candidate image determining unit is used for determining continuous images from the candidate point cloud images and taking the continuous images as target point cloud images.
Optionally, the confidence computation module 304 includes:
The similarity calculation unit is used for calculating the similarity of two adjacent frames of target point cloud images in the multi-frame target point cloud images, calculating the average value of a plurality of similarity, and obtaining a similarity average value;
an image number determining unit configured to determine an image number of the target point cloud image;
a confidence calculating unit for calculating a confidence C of classifying the human body posture into the prone posture by the following formula:
C = Sim_ave × (1 / (1 + e^(-i)));
wherein Sim_ave is the similarity average value, and i is the image count.
The lamp control device based on human body posture detection provided by the embodiments of the present invention can execute the lamp control method based on human body posture detection provided by Examples 1 and 2 of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example 4
Fig. 4 shows a schematic structural diagram of an illumination lamp 40 that can be used to implement the present invention. As shown in Fig. 4, the illumination lamp 40 includes at least one processor 41 and a memory communicatively connected to the at least one processor 41, such as a read-only memory (ROM) 42 and a random access memory (RAM) 43, in which a computer program executable by the at least one processor is stored. The processor 41 may perform various appropriate actions and processes according to the computer program stored in the ROM 42 or loaded from the storage unit 47 into the RAM 43. The RAM 43 may also store various programs and data required for the operation of the illumination lamp 40. The processor 41, the ROM 42, and the RAM 43 are connected to each other via a bus 44, and an input/output (I/O) interface 45 is also connected to the bus 44.
Various components in the illumination lamp 40 are connected to the I/O interface 45, including a millimeter wave radar unit 46; a storage unit 47, such as a memory card; and a communication unit 48, such as a wireless communication transceiver, e.g., Bluetooth or WiFi. The communication unit 48 allows the illumination lamp 40 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The processor 41 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 41 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any appropriate processor, controller, or microcontroller. The processor 41 performs the various methods and processes described above, such as the lamp control method based on human body posture detection.
In some embodiments, the lamp control method based on human body posture detection may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 47. In some embodiments, part or all of the computer program may be loaded and/or installed onto the illumination lamp 40 via the ROM 42 and/or the communication unit 48. When the computer program is loaded into the RAM 43 and executed by the processor 41, one or more steps of the lamp control method based on human body posture detection described above may be performed. Alternatively, in other embodiments, the processor 41 may be configured to perform the lamp control method based on human body posture detection by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability of traditional physical hosts and Virtual Private Server (VPS) services.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; the present invention is not limited in this respect.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A luminaire control method based on human body posture detection, characterized by comprising:
acquiring continuous multi-frame point cloud images acquired by a millimeter wave radar for a preset area;
respectively inputting the multi-frame point cloud images into a pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on the point cloud images, fusing a plurality of feature maps obtained by the convolution operations, and decoding the fused features to obtain the probability of the human body posture classification in the point cloud images, wherein the convolution kernels comprise convolution kernels with widths greater than heights;
determining, according to the probability, continuous target point cloud images in which the human body posture is classified as the prone position;
Determining the confidence level of the human body in a sleep state according to the multi-frame target point cloud image;
When the confidence coefficient is larger than a preset threshold value, determining that the human body is in a sleep state, and reducing the brightness of the lamp in the preset area;
the human body posture detection model comprises a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model, a fourth feature extraction sub-model, a feature fusion sub-model and a decoding sub-model;
wherein the respectively inputting the multi-frame point cloud images into the pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on the point cloud images, fusing the plurality of feature maps obtained by the convolution operations, and decoding the fused features to obtain the human body posture classification in the point cloud images comprises:
Respectively inputting the point cloud image into a first feature extraction submodel, a second feature extraction submodel, a third feature extraction submodel and a fourth feature extraction submodel;
in the first feature extraction submodel, performing a convolution operation on the point cloud image through a convolution kernel of n1×n1 to obtain a first feature map;
in the second feature extraction submodel, performing a convolution operation on the point cloud image through a convolution kernel of n2×n3 to obtain a second feature map, wherein n3 is the width of the convolution kernel and n2 is the height of the convolution kernel;
in the third feature extraction submodel, performing a convolution operation on the point cloud image through a hole (dilated) convolution kernel of N1×N1 to obtain a third feature map;
in the fourth feature extraction submodel, performing a convolution operation on the point cloud image through a hole convolution kernel of N2×N3 to obtain a fourth feature map, wherein N3 is the width of the convolution kernel and N2 is the height of the convolution kernel;
in the feature fusion submodel, fusing the first feature map, the second feature map, the third feature map and the fourth feature map according to preset weights to obtain fusion features, wherein each weight is positively correlated with the ratio of the width to the height of the corresponding convolution kernel;
and decoding the fusion features in the decoding sub-model to obtain the human body posture classification in the point cloud image.
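For illustration, a minimal PyTorch sketch of the four-branch feature extraction and weighted fusion recited in claim 1. The concrete sizes (3×3 square kernels, 3×7 wide kernels, dilation rate 2) and the weight values are assumptions; the claim fixes only that the second and fourth kernels are wider than tall and that each fusion weight is positively correlated with the width-to-height ratio of its kernel:

    import torch
    import torch.nn as nn

    class MultiScaleExtractor(nn.Module):
        # Sketch of the four feature extraction sub-models of claim 1.
        def __init__(self, in_ch=1, out_ch=16):
            super().__init__()
            # first sub-model: n1 x n1 square kernel (here n1 = 3)
            self.b1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            # second sub-model: n2 x n3 kernel with width n3 > height n2
            # (a lying body spans the image horizontally)
            self.b2 = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 7), padding=(1, 3))
            # third sub-model: N1 x N1 hole (dilated) convolution
            self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
            # fourth sub-model: N2 x N3 hole convolution, again wider than tall
            self.b4 = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 7),
                                padding=(2, 6), dilation=2)
            # fusion weights positively correlated with width/height ratio
            ratios = torch.tensor([1.0, 7.0 / 3.0, 1.0, 7.0 / 3.0])
            self.register_buffer("w", ratios / ratios.sum())

        def forward(self, x):
            maps = [self.b1(x), self.b2(x), self.b3(x), self.b4(x)]
            return sum(w * m for w, m in zip(self.w, maps))  # fusion features

A decoding sub-model (e.g., a small classification head over the fusion features) would then produce the human body posture classification.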
2. The method according to claim 1, wherein the acquiring continuous multi-frame point cloud images acquired by the millimeter wave radar for the preset area comprises:
when detecting that a human body exists in the preset area, controlling the millimeter wave radar to acquire multi-frame original point clouds for the preset area at a preset frequency;
preprocessing the original point cloud to obtain a target point cloud;
And generating a point cloud image of the preset area based on the target point cloud.
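For illustration, one plausible rasterization of the preprocessed target point cloud into a 2D point cloud image; the grid extents, resolution, and density encoding are assumptions not taken from the claim:

    import numpy as np

    def point_cloud_to_image(points, x_range=(-2.0, 2.0), y_range=(0.0, 4.0),
                             shape=(64, 64)):
        # points: (N, >=2) array of radar point coordinates in meters.
        # Accumulates point density on a fixed grid; the ranges and
        # resolution are illustrative assumptions.
        img = np.zeros(shape, dtype=np.float32)
        h, w = shape
        for x, y in points[:, :2]:
            if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
                col = int((x - x_range[0]) / (x_range[1] - x_range[0]) * w)
                row = int((y - y_range[0]) / (y_range[1] - y_range[0]) * h)
                img[row, col] += 1.0
        return img / max(float(img.max()), 1.0)  # normalize to [0, 1]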
3. The method according to any one of claims 1-2, wherein the human posture detection model is trained by:
acquiring a training point cloud image, wherein the training point cloud image is marked with a first detection frame and a first human body posture classification;
constructing a human body posture detection model;
Inputting the training point cloud image into the human body posture detection model to obtain a second detection frame and a second human body posture classification;
Updating the human body posture detection model by adopting the first detection frame, the second detection frame, the first human body posture classification and the second human body posture classification;
judging whether a preset training condition is met or not;
If yes, determining that the human body posture detection model is trained;
if not, returning to input the training point cloud image into the human body posture detection model to obtain a second detection frame and a second human body posture classification.
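For illustration, a minimal sketch of the training procedure of claim 3, with a fixed epoch budget standing in for the preset training condition; the optimizer, the data layout, and the model returning a detection frame plus a classification are assumptions:

    import torch

    def train_posture_model(model, loader, loss_fn, epochs=50, lr=1e-3):
        # loader yields (training point cloud image, first detection frame,
        # first human body posture classification) triples; all names here
        # are illustrative.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):                   # preset training condition
            for image, box_1, class_1 in loader:
                box_2, class_2 = model(image)     # second frame / classification
                loss = loss_fn(class_1, class_2, box_1, box_2)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model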
4. The method according to claim 3, wherein the updating the human body posture detection model by adopting the first detection frame, the second detection frame, the first human body posture classification and the second human body posture classification comprises:
the loss value loss is calculated by the following formula:
loss = (class_1 - class_2)^2 + (1 - IOU^2) + λ(w1 - w2)^2;
wherein class_1 is a first human body posture classification, class_2 is a second human body posture classification, IOU is an intersection ratio of the first detection frame and the second detection frame, w1 is a width of the first detection frame, and w2 is a width of the second detection frame;
and adjusting model parameters of the human body posture detection model according to the loss value.
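For illustration, a sketch of the loss of claim 4 with a standard corner-format IoU computation and an arbitrary λ; both details go beyond what the claim specifies:

    import torch

    def detection_loss(class_1, class_2, box_1, box_2, lam=0.5):
        # Boxes are (x1, y1, x2, y2) tensors; class_* are posture-class scores.
        ix1, iy1 = torch.max(box_1[0], box_2[0]), torch.max(box_1[1], box_2[1])
        ix2, iy2 = torch.min(box_1[2], box_2[2]), torch.min(box_1[3], box_2[3])
        inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
        area1 = (box_1[2] - box_1[0]) * (box_1[3] - box_1[1])
        area2 = (box_2[2] - box_2[0]) * (box_2[3] - box_2[1])
        iou = inter / (area1 + area2 - inter + 1e-8)
        w1, w2 = box_1[2] - box_1[0], box_2[2] - box_2[0]  # frame widths
        return (class_1 - class_2) ** 2 + (1 - iou ** 2) + lam * (w1 - w2) ** 2

The separate width term gives the width of the detection frame extra weight, consistent with the wide (width greater than height) convolution kernels the model uses for a prone body.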
5. The method according to any one of claims 1-2, wherein the determining, according to the probability, continuous target point cloud images in which the human body posture is classified as the prone position comprises:
determining a point cloud image for which the probability of the human body posture being classified as the prone position is greater than a preset probability threshold as a candidate point cloud image;
and determining continuous images from the candidate point cloud images to serve as target point cloud images.
6. The method according to any one of claims 1-2, wherein determining the confidence level that the human body is in a sleep state from the multi-frame target point cloud image comprises:
calculating the similarity of each two adjacent frames of target point cloud images in the multi-frame target point cloud images, and calculating the average value of the plurality of similarities to obtain a similarity average value;
determining the number of the target point cloud images;
calculating the confidence C that the human body is in the sleep state by the following formula:
C = Sim_ave × (1 / (1 + e^(-i)));
wherein Sim_ave is the similarity average value, and i is the number of images.
7. A light fixture control device based on human posture detection, comprising:
the point cloud image acquisition module is used for acquiring continuous multi-frame point cloud images acquired by the millimeter wave radar for a preset area;
the human body posture classification prediction module is used for respectively inputting the multi-frame point cloud images into a pre-trained human body posture detection model, performing convolution operations with multi-scale convolution kernels on the point cloud images, fusing a plurality of feature maps obtained by the convolution operations, and decoding the fused features to obtain the probability of the human body posture classification in the point cloud images, wherein the convolution kernels comprise convolution kernels with widths greater than heights;
The target point cloud image determining module is used for determining, according to the probability, continuous target point cloud images in which the human body posture is classified as the prone position;
The confidence coefficient calculating module is used for determining the confidence coefficient of the human body in a sleep state according to the multi-frame target point cloud image;
the lamp brightness adjusting module is used for determining that the human body is in a sleep state when the confidence coefficient is larger than a preset threshold value, and reducing the brightness of the lamp in the preset area;
The human body posture detection model comprises a first feature extraction sub-model, a second feature extraction sub-model, a third feature extraction sub-model, a fourth feature extraction sub-model, a feature fusion sub-model and a decoding sub-model;
The human posture classification prediction module comprises:
The point cloud image input unit is used for inputting the point cloud image into the first feature extraction submodel, the second feature extraction submodel, the third feature extraction submodel and the fourth feature extraction submodel respectively;
The first feature extraction unit is used for performing a convolution operation on the point cloud image through a convolution kernel of n1×n1 in the first feature extraction submodel to obtain a first feature map;
The second feature extraction unit is used for performing a convolution operation on the point cloud image through a convolution kernel of n2×n3 in the second feature extraction submodel to obtain a second feature map, wherein n3 is the width of the convolution kernel and n2 is the height of the convolution kernel;
The third feature extraction unit is used for performing a convolution operation on the point cloud image through a hole convolution kernel of N1×N1 in the third feature extraction submodel to obtain a third feature map;
The fourth feature extraction unit is used for performing a convolution operation on the point cloud image through a hole convolution kernel of N2×N3 in the fourth feature extraction submodel to obtain a fourth feature map, wherein N3 is the width of the convolution kernel and N2 is the height of the convolution kernel;
The feature fusion unit is used for fusing the first feature map, the second feature map, the third feature map and the fourth feature map according to preset weights in the feature fusion submodel to obtain fusion features, wherein each weight is positively correlated with the ratio of the width to the height of the corresponding convolution kernel;
And the classification prediction unit is used for decoding the fusion features in the decoding submodel to obtain the human body posture classification in the point cloud image.
8. An illumination lamp, characterized in that the illumination lamp comprises:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the luminaire control method based on human body posture detection of any one of claims 1-6.
9. A computer readable storage medium, characterized in that the computer readable storage medium stores computer instructions for causing a processor, when executing the instructions, to implement the luminaire control method based on human body posture detection of any one of claims 1-6.
CN202311343943.XA 2023-10-17 2023-10-17 Lamp control method and device based on human body posture detection, illuminating lamp and medium Active CN117412440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311343943.XA CN117412440B (en) 2023-10-17 2023-10-17 Lamp control method and device based on human body posture detection, illuminating lamp and medium


Publications (2)

Publication Number Publication Date
CN117412440A (en) 2024-01-16
CN117412440B (en) 2024-05-10

Family

ID=89493697

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311343943.XA Active CN117412440B (en) 2023-10-17 2023-10-17 Lamp control method and device based on human body posture detection, illuminating lamp and medium

Country Status (1)

Country Link
CN (1) CN117412440B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086700A (en) * 2018-07-20 2018-12-25 杭州电子科技大学 Radar range profile target identification method based on deep convolutional neural networks
CN112580458A (en) * 2020-12-10 2021-03-30 中国地质大学(武汉) Facial expression recognition method, device, equipment and storage medium
CN115841707A (en) * 2022-11-19 2023-03-24 郑州大学 Radar human body posture identification method based on deep learning and related equipment
CN116012875A (en) * 2022-12-07 2023-04-25 奥比中光科技集团股份有限公司 Human body posture estimation method and related device


Also Published As

Publication number Publication date
CN117412440A (en) 2024-01-16

Similar Documents

Publication Publication Date Title
CN112990211B (en) Training method, image processing method and device for neural network
US20190011263A1 (en) Method and apparatus for determining spacecraft attitude by tracking stars
JP6572537B2 (en) Authentication apparatus, method, and program
CN112115805B (en) Pedestrian re-recognition method and system with bimodal difficult-to-excavate ternary-center loss
CN111476814B (en) Target tracking method, device, equipment and storage medium
JP6977345B2 (en) Image processing device, image processing method, and image processing program
CN114222986A (en) Random trajectory prediction using social graph networks
CN111291749B (en) Gesture recognition method and device and robot
CN109996377B (en) Street lamp control method and device and electronic equipment
CN114139564B (en) Two-dimensional code detection method and device, terminal equipment and training method of detection network
CN116095922B (en) Lighting lamp control method and device, lighting lamp and storage medium
CN117412440B (en) Lamp control method and device based on human body posture detection, illuminating lamp and medium
WO2021155661A1 (en) Image processing method and related device
CN115866852B (en) Lighting lamp light adjusting method, device, equipment and storage medium
CN109960990B (en) Method for evaluating reliability of obstacle detection
CN111091022A (en) Machine vision efficiency evaluation method and system
CN111488476B (en) Image pushing method, model training method and corresponding devices
CN116486063A (en) Detection frame calibration method, device, equipment and computer readable storage medium
CN113158730A (en) Multi-person on-duty identification method and device based on human shape identification, electronic device and storage medium
CN115886626B (en) Toilet seat control method and device based on microwave radar and intelligent toilet
CN117077812B (en) Network training method, sleep state evaluation method and related equipment
CN117611929B (en) LED light source identification method, device, equipment and medium based on deep learning
CN112686936B (en) Image depth completion method, apparatus, computer device, medium, and program product
CN118097797A (en) Face living body detection method, device, equipment and medium
CN117473332A (en) Data processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant