CN114063572B - Non-perception intelligent device control method, electronic device and control system - Google Patents

Non-perception intelligent device control method, electronic device and control system

Info

Publication number
CN114063572B
Authority
CN
China
Prior art keywords
information
point cloud
person
human body
radio frequency
Prior art date
Legal status
Active
Application number
CN202010760967.5A
Other languages
Chinese (zh)
Other versions
CN114063572A (en)
Inventor
唐志刚
Current Assignee
Beijing Entropy Technology Co ltd
Original Assignee
Beijing Entropy Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Entropy Technology Co ltd filed Critical Beijing Entropy Technology Co ltd
Priority to CN202010760967.5A priority Critical patent/CN114063572B/en
Publication of CN114063572A publication Critical patent/CN114063572A/en
Application granted granted Critical
Publication of CN114063572B publication Critical patent/CN114063572B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 Programme-control systems
    • G05B 19/02 Programme-control systems electric
    • G05B 19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 15/00 Systems controlled by a computer
    • G05B 15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/26 Pc applications
    • G05B 2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Selective Calling Equipment (AREA)

Abstract

A control method for non-perception intelligent devices, an electronic device, and a control system, belonging to the technical field of smart homes. The method comprises the following steps: acquiring a radio frequency scanning signal; judging, based on the radio frequency scanning signal, whether a person is present in the controlled scene, and computing any one or more of the following items of human body information: the person's position, the person's posture, and the person's physiological information; collecting and storing control information, where the control information includes whether a person is present in the scene and the human body information acquired when a person is present; and matching the collected control information against a preset correspondence between control information and intelligent-device control instructions, determining the corresponding control instruction, and outputting it.

Description

Non-perception intelligent device control method, electronic device and control system
Technical Field
The invention belongs to the technical field of smart homes, and in particular relates to a non-perception (sensorless) intelligent device control method.
Background
With the progress of technology, smart household appliances have developed rapidly in recent years and entered thousands of households, making family life more comfortable, simpler, and more convenient. How to let users manage home devices in a more convenient way has long been a topic of interest in the industry. To this end, CN111158246A provides a smart home appliance control system that uses a microwave radar to detect gestures in a designated area and sends the detected gesture information to a smart home appliance control application; the control client then determines a control instruction from the received gesture information and controls the appliance accordingly. However, that system requires gesture instructions to be preset through the client and requires the user to make specific gestures in the designated area; the whole control process still depends on active user operations, so perception-free control cannot be realized.
Disclosure of Invention
The invention aims to provide a control method for non-perception intelligent devices, together with a corresponding electronic device and control system.
To this end, the invention provides the following three technical solutions:
In a first aspect, a method for controlling a non-perception smart device comprises the following steps:
S101: acquiring a radio frequency scanning signal;
S102: judging whether a person exists in the controlled scene based on the radio frequency scanning signal, and calculating and acquiring any one or more of the following human body information: the position of the person, the posture of the person, the physiological information of the person;
S103: collecting and storing control information, where the control information includes whether a person is present in the scene and the human body information acquired when a person is present;
S104: matching the collected control information against a preset correspondence between control information and intelligent-device control instructions, determining the corresponding control instruction, and outputting it. Because the method matches control instructions against objective information such as whether a person is present in the scene, the person's position, the person's posture, and the person's physiological information, the person is not required to perform any active control action, so non-perception control can be realized (a minimal sketch of this pipeline is given below).
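For illustration only, the following minimal sketch shows how steps S101-S104 might be chained into a control loop; it is not the patent's implementation, and every name in it (the callables passed in, the dictionary keys) is a hypothetical placeholder:

    # Illustrative sketch of the S101-S104 loop; all names are hypothetical.
    def control_loop(acquire_scan, detect_person, store, match_instruction, send):
        """acquire_scan() -> raw RF signal; detect_person(signal) -> (bool, dict);
        store(dict); match_instruction(dict) -> iterable of (device, instruction);
        send(device, instruction) delivers the instruction to a controlled device."""
        while True:
            rf_signal = acquire_scan()                      # S101: acquire radio frequency scanning signal
            present, body_info = detect_person(rf_signal)   # S102: presence + position/posture/physiology
            control_info = {"person_present": present, **body_info}
            store(control_info)                             # S103: collect and store control information
            for device, instruction in match_instruction(control_info):
                send(device, instruction)                   # S104: output the matched control instruction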
To further accommodate a person's wish for active control, the human body information may also include the person's gesture information.
To save computation, step S102 first judges whether a person is present in the controlled scene from the radio frequency scanning signal, and only when a person is judged to be present does it acquire the human body information based on point cloud computation.
To further save computation, a simple calculation can first judge whether a moving object exists, before further confirming whether a person is present. In step S102, the process of judging whether a person exists in the scene is as follows (a sketch of this pre-check is given after the two cases below):
performing FFT signal processing on the acquired radio frequency scanning signals to judge whether a moving object exists in the scene; if no moving object is found and the previous round of scanning judged that no person was in the scene, directly judging that no person is in the scene in the current round;
if a moving object is found, or no moving object is found but the previous round of scanning judged that a person was in the scene, further resolving the radio frequency scanning signal into point cloud information and judging, based on the point cloud information, whether a person is in the scene.
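A minimal sketch of this two-stage pre-check is given below, assuming the raw data of one scan group arrives as a NumPy array of sampled chirps; the array layout, threshold, and function names are illustrative assumptions rather than values taken from the patent:

    import numpy as np

    def has_moving_object(chirps, velocity_threshold=0.05):
        """chirps: array of shape (n_chirps, n_samples) from one scan group.
        A range FFT per chirp followed by a Doppler FFT across chirps; energy away
        from the zero-Doppler bin indicates motion. The threshold is illustrative."""
        range_fft = np.fft.fft(chirps, axis=1)          # first FFT: range bins
        doppler_fft = np.fft.fft(range_fft, axis=0)     # second FFT: Doppler bins
        doppler_energy = np.abs(doppler_fft)
        doppler_energy[0, :] = 0.0                      # suppress static (zero-Doppler) returns
        return doppler_energy.max() > velocity_threshold * doppler_energy.sum()

    def person_in_scene(chirps, previous_round_had_person, point_cloud_check):
        # Cheap FFT check first; fall back to point cloud computation only when needed.
        if not has_moving_object(chirps) and not previous_round_had_person:
            return False
        return point_cloud_check(chirps)                # resolve to point cloud and run model MO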
Step S102 of determining whether a person is present by using the machine learning model MO obtained by training includes:
s201: based on radio frequency scanning, acquiring point cloud information: regarding point cloud data obtained by a group of N radio frequency scans as 1 point cloud data set, and calculating point cloud information corresponding to each point cloud data set, wherein the information corresponding to each reflection point in the point cloud information at least comprises the spatial position, the speed and the signal intensity information of the reflection point; in order to improve the efficiency and accuracy of machine learning, the point cloud information corresponding to the information corresponding to each reflection point can further comprise acceleration and noise amplitude information;
S202: inputting the point cloud information calculated in step S201 into the model MO, whose output target is O = {(Pr_m, Ps_m), m = 1, 2, 3, ..., M}, where Pr_m is the probability that the m-th human target to be detected exists, Ps_m is the spatial position of the representative point of the m-th human target to be detected, and M is the number of people in the scene; the spatial position of the human representative point corresponds to the person's position, M = 0 means no one is in the scene, and M > 0 means a person is in the scene;
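For clarity, the shape of the MO input and output described in S201-S202 can be written out as simple data structures; this is only a sketch, and the class and field names are illustrative, not taken from the patent:

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ReflectionPoint:
        # One entry of the point cloud information for a set of N scans
        position: Tuple[float, float, float]    # spatial position (x, y, z)
        velocity: float                         # v
        intensity: float                        # signal intensity g
        acceleration: float = 0.0               # optional: a
        noise: float = 0.0                      # optional: noise amplitude n

    @dataclass
    class DetectedPerson:
        probability: float                      # Pr_m: probability that the m-th target exists
        representative_point: Tuple[float, float, float]  # Ps_m: representative-point position

    def run_mo(model, point_cloud: List[ReflectionPoint]) -> List[DetectedPerson]:
        """O = [(Pr_m, Ps_m) for m in 1..M]; an empty list means M = 0 (no one in the scene)."""
        return model.predict(point_cloud)       # 'model' is assumed to expose a predict() method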
the model MO is obtained through training the following steps:
S301: based on radio frequency scanning, acquiring point cloud data of a scene;
S302: regarding point cloud data obtained by a group of N radio frequency scans as 1 point cloud data set, and calculating point cloud information corresponding to each point cloud data set, wherein the information corresponding to each reflection point in the point cloud information at least comprises the spatial position, the speed and the signal intensity information of the reflection point; n is an integer greater than or equal to 2;
S303: marking, according to the reference information recorded during data acquisition, the spatial position information of the human representative point corresponding to the point cloud information acquired by each group of radio frequency scans in the scene; collecting the point cloud information obtained by multiple groups of radio frequency scans and the corresponding human representative point spatial positions to form a first sample set; and training, based on the first sample set and using a machine learning method, a model MO capable of identifying the number M of people in the scene and the spatial position of each human representative point. The reference information is a video or audio-video record acquired synchronously during the radio frequency scanning. The labels of the training set can be marked manually; as a more preferable scheme, the positions of human representative points, key points and behavior information can be extracted from the reference information using an existing artificial-intelligence recognition method, and the point cloud information is then marked automatically on the same time axis.
Trained models MK and MA are used to obtain the person's posture and the person's gesture according to the result output by MO, as follows:
S203: when the number of human bodies M output by MO is greater than or equal to 1, filtering the point cloud information obtained by radio frequency scanning according to the spatial position information of the human representative points output by the model MO, and retaining only the point cloud information within a specific distance range near the human representative points; inputting the filtered point cloud information into the model MK, where the model MK scans and identifies the input point cloud information using a sliding-window method, with a window length corresponding to N_pk groups of radio frequency scans, and outputs the key point information of the M human bodies;
S204: inputting the output result of the model MK into the model MA; the model MA scans and identifies the input information using a sliding-window method, with a window length corresponding to N_ma consecutive output results, and outputs the person's specific posture or gesture;
step S304 is continued on the basis of the training of the model MO to obtain the model MK:
The point cloud information obtained by N_pk groups of radio frequency scanning is regarded as a point cloud information sequence, the point cloud information sequence is filtered according to the output result of the model MO, and only the point cloud information in a specific distance range near the human representative point is reserved, so that the filtered point cloud information sequence is obtained; N_pk is an integer greater than 1;
Based on the human body joint points, selecting a number of key points on the human body; marking, according to the reference information recorded during data acquisition, the spatial position information of the human key points corresponding to each filtered point cloud information sequence; and collecting a number of information sequences, filtered and marked in this way, to form a second sample set;
Training, based on the second sample set and using a machine learning method, a model MK capable of identifying the key point information of the M human bodies in the scene; the output target of the model MK is the key point information OK = {(Pr_k, Ps_k), k = 1, 2, 3, ..., K} of each human target to be detected, where K is the number of selected human key points, Pr_k is the probability that the k-th key point of a given human target to be detected exists, and Ps_k is the spatial position of that k-th key point;
Step S305 is continued on the basis of the training of the model MK to acquire the model MA:
Obtaining a third sample set comprising positive samples and negative samples: a positive sample comprises a specific human posture or gesture obtained from the reference information together with the N_ma consecutive MK output results corresponding to that posture or gesture, and the remaining MK output results are taken as negative samples in which the specific posture or gesture does not occur; N_ma is an integer greater than 1;
Training, based on the third sample set and using a machine learning method, a model MA capable of recognizing the specific human posture or gesture from a number of point cloud information sequences; the output target of the model MA is the probability that a given human target to be detected exhibits the specific posture or gesture.
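As an illustration of how the third sample set can be assembled, the sketch below slices consecutive MK outputs into windows of length N_ma and labels a window positive when it overlaps an interval where the reference information shows the specific posture or gesture; the function and parameter names are assumptions, not the patent's:

    def build_third_sample_set(mk_outputs, event_intervals, n_ma):
        """mk_outputs: list of per-window MK results, time-ordered.
        event_intervals: list of (start_idx, end_idx) index ranges where the reference
        information shows the specific posture/gesture. Returns (windows, labels)."""
        def overlaps_event(start):
            return any(s < start + n_ma and start < e for s, e in event_intervals)

        windows, labels = [], []
        for start in range(0, len(mk_outputs) - n_ma + 1):
            windows.append(mk_outputs[start:start + n_ma])    # N_ma consecutive MK outputs
            labels.append(1 if overlaps_event(start) else 0)  # positive vs. negative sample
        return windows, labels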
The non-perception intelligent device control method further comprises the steps of obtaining environment information and collecting the environment information as control information, wherein the environment information comprises environment temperature and/or light intensity.
In step S105, for each type of control information, the controlled devices with which that type of control information is registered are obtained, and the control information is distributed accordingly into a control-information group for each registered controlled device; each control-information group is then matched with its corresponding controlled device according to the preset correspondence between control information and intelligent-device control instructions, the corresponding control instruction is determined, and the controlled device is controlled using the determined instruction.
In a second aspect, an electronic device includes:
One or more processors;
a storage device having one or more programs stored thereon;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect.
In a third aspect, a sensorless home control system includes:
the environment monitoring module is used for monitoring the environment temperature and/or the light intensity to obtain environment information;
The radio frequency detection module is used for carrying out radio frequency scanning on the controlled scene to obtain radio frequency scanning signals;
The electronic device of the second aspect, configured to obtain the environmental information; to acquire the radio frequency scanning signal, judge from the radio frequency scanning signal whether a person is present in the scene, and acquire the various items of human body information; and to determine a device control instruction according to control information comprising the environmental information, whether a person is present, and the human body information;
And the communication module is used for sending the control instruction determined by the electronic equipment to the control module of the controlled equipment.
To facilitate installation, reduce the user's wiring effort, and improve the signal acquisition effect of radio frequency scanning, the environment monitoring module, the radio frequency detection module, the electronic device and the communication module are integrated in a lamp or a smoke sensor.
Further, the non-perception home control system further comprises a wireless signal receiving module, wherein the wireless signal receiving module is used for receiving other control signals acquired by sensors installed at other positions.
The intermediate frequency signal obtained by radio frequency scanning generally requires two FFTs and a CFAR algorithm to yield point cloud information, which involves a large amount of computation and places high demands on the computing device. The non-perception intelligent device control method provided by the invention first makes a preliminary judgment, from the acquired radio frequency scanning signal, of whether a person is present in the controlled scene; if a person is judged to be present, further computation is performed to obtain richer human body information; if no person is judged to be present, the judgment result is collected directly without further computation, thereby saving computing capacity. Furthermore, when judging whether a person is present, a simple FFT calculation first judges whether a moving object exists in the scene; based on that result, the judgment is either taken directly or refined through point cloud computation, which further reduces the amount of computation. In the process of obtaining information with MO, MK and MA, the invention likewise applies a step-by-step screening design to save computing capacity, so as to reduce the size of the corresponding electronic device and control system and allow the control system to be integrated into a small household appliance such as a lamp or a smoke sensor.
Drawings
FIG. 1 is a flow chart of example 1;
FIG. 2 is a schematic diagram of a layout of key points of a human body;
FIG. 3 is a general flow chart of example 2;
FIG. 4 is a partial flow chart of example 2;
Fig. 5 is a schematic diagram of a sliding window.
Detailed Description
The application will be further described with reference to the drawings and specific examples.
Example 1
Example 1 provides a method for obtaining the human-perception models MO, MK and MA. Fig. 1 shows the flow of the method, which may include the following steps:
S301: acquiring point cloud data of a scene based on radio frequency scanning. The acceptable frequency range of the radio frequency signal is 3 GHz to 90 GHz, with a bandwidth of 500 MHz to 20 GHz. Transmission and reception of the radio frequency signals can be achieved by MIMO antennas pre-installed in the scene. To obtain a stereoscopic signal, multiple groups of antennas may be laid out in the scene to obtain gridded point cloud data. During radio frequency scanning, the scene is synchronously video-recorded, or other marking means are adopted, to acquire the reference information.
S302: taking the point cloud data obtained by a group of N radio frequency scans (N is an integer greater than or equal to 2) as 1 point cloud data set, and calculating the point cloud information corresponding to each point cloud data set; the information corresponding to each reflection point P in the point cloud information includes at least the spatial position (x, y, z) of the reflection point, a velocity v (velocity information can be obtained when N is 2 or more), and a signal intensity g, and may further include an acceleration a (acceleration information can be obtained when N is 3 or more) and a noise amplitude n, denoted as P{(x, y, z), v, g, a, n}.
To acquire the spatial position information of the target to be detected, the device linearly sweeps the frequency across the bandwidth B within a time period T_c, transmitting a radio frequency signal while simultaneously receiving the reflected signal; the two are mixed, the high-frequency component is filtered out to obtain an intermediate frequency signal, and this signal is then sampled. Because the scanning frequency increases linearly, the intermediate frequency satisfies f_τ = B·τ/T_c, where τ is the round-trip travel time of the transmitted signal from the device to the target to be detected and back, T_c is the sweep period, f_τ is the frequency of the received intermediate frequency signal, and B is the bandwidth. The distance between the target to be detected and the device is then d = c·τ/2 = c·f_τ·T_c/(2B), where c is the speed of light. The f_τ value of each reflection point is obtained by applying a Fourier transform to the sampled signal, which yields the distance information of the reflection point, i.e. of the target to be detected.
To acquire the velocity and acceleration information of a target to be detected that is in motion: owing to the Doppler effect, the phase of the radio frequency signal received in two successive detections changes markedly when the target moves. From the phase change, the displacement of a reflection point of the target between the two detections is Δd = λ·Δφ/(4π), and the instantaneous velocity is v = λ·Δφ/(4π·T_c), where λ is the wavelength of the radio frequency used and Δφ is the phase difference between the two scans. The acceleration of the target to be detected at each reflection point is acquired through at least three scans. Typically, one radio frequency scan period T_c = 20-3500 μs.
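Plugging illustrative numbers into these relations (the radar parameters below are assumed for the sake of example and are not specified by the patent): with B = 4 GHz, T_c = 1000 μs and a measured intermediate frequency f_τ = 80 kHz,

    d = c·f_τ·T_c / (2B)
      = (3×10^8 m/s × 80×10^3 Hz × 10^-3 s) / (2 × 4×10^9 Hz)
      = 3 m

    with λ = c / 77 GHz ≈ 3.9 mm and Δφ = π/2 between two scans T_c apart:
    Δd = λ·Δφ / (4π) ≈ 0.49 mm,   v = Δd / T_c ≈ 0.49 m/s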
To reduce the burden of subsequent data processing, the point cloud data obtained by radio frequency scanning can, during resolution, be filtered against data obtained by scanning the unoccupied scene, so that fixed background information is filtered out.
S303: marking, according to the reference information recorded during data acquisition, the spatial position information of the human representative point corresponding to the point cloud information acquired by each group of radio frequency scans in the scene; as one implementation, the center point of the human torso can be selected as the representative point. During marking, the human representative point position can be extracted from the reference information using an existing artificial-intelligence recognition method, and the point cloud information is then marked automatically on the same time axis;
Collecting point cloud information obtained by multiple groups of radio frequency scanning and corresponding human body representative point space position information to form a first sample set;
Based on the first sample set, a model MO capable of identifying the number M of people in the scene and the spatial position of each human representative point is trained using a machine learning method such as a decision-tree-based random forest, a support vector machine, AdaBoost or Gradient Tree Boosting, a neural network, or the like. The output target of the model MO is O = {(Pr_m, Ps_m), m = 1, 2, 3, ..., M}, where Pr_m is the probability that the m-th human target to be detected exists, Ps_m is the spatial position of the m-th human target to be detected, and M is the number of people in the scene.
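A minimal training sketch under simplifying assumptions (at most one person per sample and a fixed-length feature vector already extracted from each point cloud set) is shown below; scikit-learn is used purely as one example of the decision-tree-based methods listed above, and all names are illustrative:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    def train_mo(features, presence_labels, positions):
        """features: (n_samples, n_features) array derived from each point cloud set.
        presence_labels: 1 if a person is present, else 0 (simplified to M <= 1).
        positions: (n_samples, 3) representative-point coordinates for positive samples."""
        presence_model = RandomForestClassifier(n_estimators=200).fit(features, presence_labels)
        mask = presence_labels == 1
        position_model = RandomForestRegressor(n_estimators=200).fit(features[mask], positions[mask])
        return presence_model, position_model

    def predict_mo(models, feature_vector):
        presence_model, position_model = models
        pr = presence_model.predict_proba([feature_vector])[0][1]   # Pr_1
        ps = position_model.predict([feature_vector])[0]            # Ps_1
        return [] if pr < 0.5 else [(pr, tuple(ps))]                # O = {(Pr_m, Ps_m)}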
S304: regarding the point cloud information obtained by N_pk groups of radio frequency scans (N_pk × N scans in total; N_pk is an integer greater than 1, preferably 2-25) as one point cloud information sequence; filtering the sequence according to the output result of the model MO and retaining only the point cloud information within a specific distance range, or even a specific velocity range, near the human representative point (only data within the range of a human body's size are treated as valid data, which further reduces the amount of data to be processed), to obtain the filtered point cloud information sequence;
Based on the human body joint points, a number of key points are selected on the human body; the spatial position information of the human key points corresponding to each filtered point cloud information sequence is marked according to the reference information recorded during data acquisition; and a number of information sequences, filtered and marked in this way, are collected to form a second sample set. The selection of human key points can refer to fig. 2: in the embodiment shown in fig. 2, K = 8, and reference numerals 1 to 8 indicate the 8 human key points, namely the torso 1 (which corresponds to the human representative point), the head 2, the elbows 3 and 4, the knee joints 5 and 6, and the hands 7 and 8;
Based on the second sample set, a model MK capable of identifying the key point information of the M human bodies in the scene is trained using a machine learning method such as a decision-tree-based random forest, a support vector machine, AdaBoost or Gradient Tree Boosting, a neural network, or the like. The output target of the model MK is the key point information OK = {(Pr_k, Ps_k), k = 1, 2, 3, ..., K} of each human target to be detected, where K is the number of selected human key points, Pr_k is the probability that the k-th key point of a given human target to be detected exists, and Ps_k is the spatial position of that k-th key point. Depending on the selected algorithm, a numerical loss function between the output and the labeled values, such as MSE or Manhattan distance, is used as the evaluation method to improve model accuracy.
S305: obtaining a third sample set comprising positive samples and negative samples. A positive sample comprises a specific human posture or gesture (such as a fall) obtained from the reference information together with the N_ma consecutive MK output results corresponding to that posture or gesture (N_ma is an integer greater than 1, preferably 18-750); the remaining MK output results are taken as negative samples in which the specific action does not occur;
Based on the third sample set, a model MA capable of identifying the specific human posture or gesture from a number of point cloud information sequences is trained using a machine learning method such as a decision-tree-based random forest, a support vector machine, AdaBoost or Gradient Tree Boosting, a neural network, or the like. The output target of the model MA is the probability that a given human target to be detected exhibits the specific behavior. Depending on the selected algorithm, a classification loss function, such as cross entropy for a neural network or the hinge loss for a support vector machine, is used as the evaluation method to improve model accuracy.
Similarly, other specific postures or gestures, such as sitting, standing, walking, running, jumping, waving, and clapping, can be set with reference to step S305, and the parameter N_ma adjusted according to how long the action takes to occur; by repeating step S305, a number of models MA capable of recognizing different postures or gestures can be obtained.
As one embodiment, T_c = 1000 μs, N = 3, N_pk = 10, and N_ma = 50, with MA monitoring fall behavior; monitoring over N_ma MK outputs then corresponds to 1500 ms (N_ma × N_pk × N × T_c = 50 × 10 × 3 × 1 ms), which is roughly the time a fall takes to occur.
To improve the general applicability of the models, the scene can be set up in different ways and different numbers of people can be arranged to carry out different activities in it, so as to obtain a richer sample set.
Example 2
As shown in fig. 3 and 4, the control method of the non-perception intelligent device comprises the following steps,
S101:
Acquiring a radio frequency scanning signal; acquiring environmental information including environmental temperature and light intensity;
S102:
judging whether a person exists in the controlled scene according to the acquired radio frequency scanning signals:
referring to the description of step S302 in Example 1, the signals acquired through radio frequency scanning are processed by FFT, the velocities of all reflection points in the scene are calculated, and reflection points exhibiting velocity are searched for in order to judge whether a moving object exists in the scene;
if no moving object is found and the previous round of scanning judged that no person was in the scene, it is directly judged that no person is in the scene in the current round (no further computation is performed on the signals acquired in this round);
if a moving object is found, or no moving object is found but the previous round of scanning determined that a person was in the scene, the radio frequency scanning signal is further resolved into point cloud information, and the model MO is used to determine, based on the point cloud information, whether a person is in the scene; the steps comprise:
S201: acquiring point cloud information: regarding point cloud data obtained by a group of N radio frequency scans as 1 point cloud data set, and calculating point cloud information corresponding to each point cloud data set, wherein the information corresponding to each reflection point P in the point cloud information comprises the spatial position, speed, acceleration, signal intensity and noise amplitude information of the reflection point;
S202: the point cloud information calculated in step S201 is input into the model MO, whose output target is O = {(Pr_m, Ps_m), m = 1, 2, 3, ..., M}, where Pr_m is the probability that the m-th human target to be detected exists, Ps_m is the spatial position of the representative point of the m-th human target to be detected, and M is the number of people in the scene; the spatial position of the human representative point corresponds to the person's position, M = 0 means no one is in the scene, and M > 0 means a person is in the scene.
When a person is judged to be present, the person's position information is acquired from the output of MO, and the person's posture and gesture are obtained through the deployed models MK and MA, as follows:
S203: the output result of MO is examined; when the number of human bodies M output by MO is greater than or equal to 1, the point cloud information obtained by radio frequency scanning is filtered according to the spatial position information of the human representative points output by the model MO, retaining only the point cloud information within a specific distance range, or even a specific velocity range, near the human representative points; the filtered point cloud information is input into the model MK, and the model MK is started. The model MK scans and identifies the input point cloud information using a sliding-window method, with a window length corresponding to N_pk groups of radio frequency scans, and outputs the key point information of the M human bodies. The principle of the sliding window is shown in fig. 5. In the embodiment shown in fig. 5, the window length corresponds to N_pk = 10 and the pane width is S_pk = 2: once the model MK has received the point cloud information of the first N_pk groups of radio frequency scans, the initial window is formed and, corresponding to step S401, a first scan-and-identify pass is performed. MK then receives the point cloud information of 2 further groups of radio frequency scans; the window slides forward once, the 2 oldest groups are dropped and the 2 newest groups are added to form the current window, and, corresponding to step S402, MK performs a second scan-and-identify pass on the current window. MK again receives 2 further groups of point cloud information; the window slides forward once more, the 2 oldest groups of the window at S402 are dropped and the 2 newest groups are added to form a new current window, and, corresponding to step S403, a third scan-and-identify pass is performed on the new window; and so on, until all received information has been traversed. Of course, S_pk may take smaller values, such as S_pk = 1, or larger integers; the larger S_pk is, the lower the computational burden on the device, but the recognition accuracy decreases. When the human body is in a state of small movement amplitude, such as sleep, the value of S_pk can be increased appropriately, for example to S_pk = N_pk/2 = 5 or S_pk = N_pk = 10.
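The sliding-window traversal described in this step can be sketched as follows; N_pk and S_pk are passed in as parameters, and recognize() stands in for a call to the model MK (this is a generic sketch, not the patent's implementation):

    from collections import deque

    def sliding_window_scan(groups, n_pk, s_pk, recognize):
        """groups: iterable of point cloud groups (one per radio frequency scan group).
        Collect n_pk groups to form the initial window, then slide forward by s_pk
        groups per step, calling recognize(window) -- e.g. model MK -- each time."""
        window = deque(maxlen=n_pk)
        pending = 0
        results = []
        for group in groups:
            window.append(group)            # the oldest groups are discarded automatically
            pending += 1
            if len(window) == n_pk and pending >= s_pk:
                results.append(recognize(list(window)))   # one scan-and-identify pass
                pending = 0
        return results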
S204: the output result of the model MK is input into the model MA; the model MA scans and identifies the input information using a sliding-window method, with a window length corresponding to N_ma consecutive output results, and outputs the person's specific posture or gesture;
The output result of the model MK is input into one or more models MA; each model MA scans and identifies the input information using a sliding-window method, with a window length corresponding to N_ma consecutive MK output results, and outputs a specific human posture or gesture. The working principle of the sliding window is the same as described above for MK; as an embodiment, the pane width S_ma in this step is preferably N_ma/2 (consistent with Example 1, N_ma = 50). It will be appreciated that S_ma may also be reduced appropriately, trading computation for accuracy, or increased, possibly sacrificing accuracy in exchange for speed. The models MA for identifying different human postures or gestures run synchronously; each model MA performs a scan-and-identify pass at its set interval, infers the likelihood of the corresponding behavior, and outputs that likelihood.
Further, a step can be added that examines the type of human behavior output by the model MA, assigns a value to the pane width S_pk for different human behaviors and feeds it back to step S203, and assigns a value to the pane width S_ma for different human behaviors and feeds it back to step S204.
There are a number of related prior-art methods for obtaining a person's physiological information, for example the method described in CN109729632A.
S103:
Collecting control information, where the control information comprises the judgment result of whether a person was present in the scene in the previous round of scanning, the judgment result of whether a person is present in the current round of scanning, and the human body information and environmental information acquired when the current round of scanning judges that a person is present;
S104:
For each type of control information, the controlled devices with which that type of control information is registered are obtained, and the control information is distributed accordingly into a control-information group for each registered controlled device; each group of information is then matched separately: each control-information group is matched with its corresponding controlled device according to the preset correspondence between control information and controlled-device control instructions, the corresponding control instruction is determined, and the instruction is output to the corresponding controlled device.
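A sketch of this registration-and-dispatch step is given below; the data structures and parameter names are illustrative assumptions, not part of the patent:

    def dispatch(control_info, registrations, rules, send):
        """control_info: {info_type: value} collected this round.
        registrations: {device: set of info types that device has registered}.
        rules: {device: callable(group) -> instruction or None} encoding the
        preset correspondence between control information and instructions."""
        for device, registered_types in registrations.items():
            group = {t: v for t, v in control_info.items() if t in registered_types}
            instruction = rules[device](group)      # match against the preset correspondence
            if instruction is not None:
                send(device, instruction)           # output to the corresponding controlled device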
As another implementation, each type of control information is distributed to the control module of the controlled device with which that information is registered; the control module of the controlled device matches the control instruction and sends it to the actuator.
Example 3
An electronic device, comprising:
One or more processors;
a storage device having one or more programs stored thereon;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in embodiment 2.
Example 4
A sensorless home control system, comprising:
The environment monitoring module is used for monitoring the ambient temperature and the light intensity to obtain environmental information;
the radio frequency detection module is used for performing radio frequency scanning on the controlled scene to obtain radio frequency scanning signals;
the electronic device of Example 3 is configured to obtain the environmental information, acquire the radio frequency scanning signals, resolve them into point cloud information, perform point cloud computation based on that information to acquire the human body information, and determine device control instructions according to control information comprising the environmental information and the human body information;
and the communication module is used for sending the control instructions determined by the electronic device to the control modules of the controlled devices.
The environment monitoring module, the radio frequency detection module, the electronic equipment and the communication module are integrated in the lamp or the smoke sensor.
The controlled devices may include: electric lamps, air conditioners, door locks, switch cabinets, power strips, sound boxes, loudspeakers, smoke sensors, sprinklers, curtains, bathroom heaters, computers, electric fans, cameras, television boxes, routers, electric windows, and the like.
Taking the control of intelligent lamps, air conditioners, and switch cabinets (in particular a switch that does not need to stay normally on, also called an energy-saving control switch) as an example, a reference setting for the distribution and matching of control information is given below:
1. Device number:
1-intelligent lamp, 2-air conditioner and 3-energy-saving control switch
2. Control information class number and description:
0-environment information, 1-existence of a person, 2-position of a person, 3-posture of a person, 4-physiological information of a person and 5-gesture of a person.
The 0-environmental information provides values such as the ambient temperature and light intensity; the 1-presence information provides the presence or absence state for the whole household space; the 2-position information distinguishes different positions of the person, such as on the sofa or in bed; the 3-posture information distinguishes postures such as sitting, standing (including standing, walking, running, jumping and other upright postures), lying, or falling; the 4-physiological information provides the person's respiration or heartbeat frequency; the 5-gesture refers to a preset specific gesture expressing a specific control instruction, such as waving a hand or drawing a circle in the air. When the person's respiration or heartbeat frequency stays in a low threshold range for a period of time, the person is considered to have entered a sleep state; when it stays in a high threshold range for a period of time, the person is considered to be awake and active; if it goes beyond the threshold ranges, the person is considered to be in an unhealthy state. If someone was in the space in the previous round of scanning and no one is in it in the current round, the system considers that the person has just left; if no one was in the space in the previous round and someone is in it in the current round, the person is considered to have just entered.
The control information acquired by each round of scanning is recorded in a database; where necessary, the database stores the control information acquired over a preceding period of time so that it can be retrieved and compared.
After the six kinds of control information above are output, how they are combined and mapped to specific control instructions can be preset according to the needs of the controlled device. For example, when the controlled device is the 3-energy-saving control switch, it only needs to know whether anyone is in the room (1-presence of a person) and does not need the 2-position information, so this device registers only control information 1. When the controlled device is the 1-intelligent lamp, the user may require that when the room is very dark (0-environmental information), the user is lying (3-posture) in bed (2-position), and the heart rate and respiration (4-physiological information) have slowed, the light be dimmed after 15 minutes, making the lighting control more precise. The user may even be required to make a specific gesture (5-gesture) to issue an active control command. Registering control information 1 (presence of a person) allows the entering or leaving state of a person to be confirmed quickly by comparing the presence information of consecutive rounds, which makes it convenient to issue control instructions such as turning the light on or off.
Table 1 shows a way of registering different devices and control information, such as a 1-intelligent lamp, a 2-air conditioner, a 3-energy-saving control switch, etc.
In Table 1, the control information registered for the 1-intelligent lamp is: 0-environmental information (light intensity), 1-presence of a person, 2-position of the person, 3-posture of the person, 4-physiological information of the person, 5-gesture of the person. The information registered for the 2-air conditioner is: 0-environmental information (temperature), 1-presence of a person, 2-position of the person, 4-physiological information of the person. The control information registered for the 3-energy-saving control switch is only: 1-presence of a person. When a piece of control information is acquired and the system has looked up the device numbers through Table 1, it must also judge, according to each device's control frequency and sending time, whether to distribute the control signal acquired in this round of scanning; if the time interval has not yet arrived, the signal needs to be re-sent later. In this way, each controlled device can adjust its control timing as required.
The control information is distributed according to Table 1 into a control-information group for each registered controlled device, and each group of information is then matched separately: each control-information group is matched with its corresponding controlled device according to the preset correspondence between control information and controlled-device control instructions, the corresponding control instruction is determined and output to the corresponding controlled device, and the controlled device executes the corresponding instruction.
Taking the energy-saving control switch as an example, the matching of control instructions can be as follows: when the acquired control information 1 ("presence of a person") gives "person present" for the current round and "no person" for the previous round, the control instruction "switch on the control circuit" is matched; when it gives "no person" for the current round and "person present" for the previous round, the control instruction "switch off the control circuit" is matched. Other information is ignored.
Taking an intelligent air conditioner with automatic switching, follow-me airflow, and automatic sleep temperature adjustment as an example, the matching of control instructions can be as follows: when the acquired control information 1 ("presence of a person") gives "person present" for the current round and "no person" for the previous round, the control instruction "turn on the air conditioner" is matched; when it gives "no person" for the current round and "person present" for the previous round, the control instruction "turn off the air conditioner" is matched.
The 2-position information corresponds to a control instruction for adjusting the blowing direction of the air conditioner.
When the 2-position information indicates that the person is in bed and the 4-physiological information indicates that the person's heart rate or respiration has been slow for a certain length of time: if the air conditioner is in cooling mode, the control instruction "raise the temperature" is matched; if the air conditioner is in heating mode, a corresponding temperature-adjustment control instruction is matched.
Other information is ignored.
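The matching relations above can be written as simple rules; the sketch below is illustrative only, and the dictionary keys carrying this round's and the previous round's presence information, the bed/physiology flags, and the mode field are assumed names, not part of the patent:

    def energy_saving_switch_rule(group):
        """group carries this round's and the previous round's 'presence of a person' (info type 1)."""
        now, before = group["person_now"], group["person_before"]
        if now and not before:
            return "switch on the control circuit"    # someone just entered
        if not now and before:
            return "switch off the control circuit"   # everyone just left
        return None                                    # other information is ignored

    def air_conditioner_rule(group):
        now, before = group["person_now"], group["person_before"]
        if now and not before:
            return "turn on the air conditioner"
        if not now and before:
            return "turn off the air conditioner"
        if group.get("person_in_bed") and group.get("slow_heart_or_respiration"):
            # sleep detected: reduce intensity depending on the current mode
            return "raise the temperature" if group.get("mode") == "cooling" else "adjust the temperature"
        return None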

Claims (8)

1. A control method of a non-perception intelligent device is characterized by comprising the following steps,
S101: acquiring a radio frequency scanning signal;
S102: judging whether a person exists in the controlled scene based on the radio frequency scanning signal, and calculating and acquiring any one or more of the following human body information: the position of the person, the posture of the person, the physiological information of the person;
S103: collecting and storing control information, wherein the control information comprises whether a person exists in a scene or not and human body information acquired when the person exists in the scene;
S104: matching the collected control information according to the preset corresponding relation between the control information and the intelligent equipment control instruction, determining the corresponding control instruction and outputting the control instruction;
Step S102, judging whether a person exists in a controlled scene according to a radio frequency scanning signal; when the person is judged to exist, acquiring human body information based on point cloud computing;
Step S102 is a process of judging whether a person exists in the scene:
Performing FFT signal processing on the acquired radio frequency scanning signals to judge whether moving objects exist in a scene or not;
if no moving object is found and no person is in the previous round of scanning judgment scene, directly judging that no person is in the current round of scanning scene;
If a moving object is found, or no moving object is found but the previous round of scanning judged that a person was in the scene, the radio frequency scanning signal is further resolved into point cloud information, and whether a person is in the scene is judged based on the point cloud information;
step S102 determines whether a person is present based on the point cloud information using the machine learning model MO obtained by training, including,
S201: acquiring point cloud information: regarding point cloud data obtained by a group of N radio frequency scans as 1 point cloud data set, and calculating point cloud information corresponding to each point cloud data set, wherein the information corresponding to each reflection point in the point cloud information at least comprises the spatial position, the speed and the signal intensity information of the reflection point;
S202: inputting the point cloud information calculated in step S201 into the model MO, whose output target is O = {(Pr_m, Ps_m), m = 1, 2, 3, ..., M}, wherein Pr_m is the probability that the m-th human target to be detected exists, Ps_m is the spatial position of the representative point of the m-th human target to be detected, and M is the number of people in the scene; the spatial position of the human representative point corresponds to the position of the person, M = 0 represents no people in the scene, and M > 0 represents the presence of a person in the scene;
the model MO is obtained through training the following steps:
S301: based on radio frequency scanning, acquiring point cloud data of a scene;
S302: regarding point cloud data obtained by a group of N radio frequency scans as 1 point cloud data set, and calculating point cloud information corresponding to each point cloud data set, wherein the information corresponding to each reflection point in the point cloud information at least comprises the spatial position, the speed and the signal intensity information of the reflection point; n is an integer greater than or equal to 2;
S303: marking the space position information of the human body representative point corresponding to the point cloud information acquired by each group of radio frequency scanning in the scene according to the reference information recorded in the data acquisition process; collecting point cloud information obtained by multiple groups of radio frequency scanning and corresponding human body representative point space position information to form a first sample set; based on the first sample set, a model MO capable of identifying the number M of people in the scene and the spatial position of each human representative point is trained by using a machine learning method.
2. The sensorless smart device control method of claim 1, wherein the human information further includes a human gesture.
3. The method for controlling a sensorless smart device of claim 2, wherein trained models MK and MA are used to obtain the person's posture and the person's gesture based on the result output by MO, comprising,
S203: when the number of human bodies M output by MO is greater than or equal to 1, filtering the point cloud information obtained by radio frequency scanning according to the spatial position information of the human representative points output by the model MO, and retaining only the point cloud information within a specific distance range near the human representative points; inputting the filtered point cloud information into the model MK, wherein the model MK scans and identifies the input point cloud information using a sliding-window method, with a window length corresponding to N_pk groups of radio frequency scans, and outputs the key point information of the M human bodies;
S204: inputting the output result of the model MK into the model MA, wherein the model MA scans and identifies the input information using a sliding-window method, with a window length corresponding to N_ma consecutive output results, and outputs the person's specific posture or gesture;
step S304 is continued on the basis of the training of the model MO to obtain the model MK:
The point cloud information obtained by N_pk groups of radio frequency scanning is regarded as a point cloud information sequence, the point cloud information sequence is filtered according to the output result of the model MO, and only the point cloud information in a specific distance range near the human representative point is reserved, so that the filtered point cloud information sequence is obtained; N_pk is an integer greater than 1;
Based on the human body joint points, selecting a number of key points on the human body, marking the spatial position information of the human key points corresponding to each filtered point cloud information sequence according to the reference information recorded during data acquisition, and collecting a number of information sequences, filtered and marked in this way, to form a second sample set;
Training a model MK capable of identifying the key point information of the M human bodies in the scene by using a machine learning method based on the second sample set; the output target of the model MK is the key point information OK = {(Pr_k, Ps_k), k = 1, 2, 3, ..., K} of each human target to be detected, wherein K is the number of selected human key points; Pr_k is the probability that the k-th key point of a given human target to be detected exists; and Ps_k is the spatial position of the k-th key point of that human target;
Step S305 is continued on the basis of the training of the model MK to acquire the model MA:
Obtaining a third sample set, wherein the third sample set comprises positive samples and negative samples, a positive sample comprises a specific human posture or gesture obtained from the reference information together with the N_ma consecutive MK output results corresponding to that specific human posture or gesture, and the remaining MK output results are taken as negative samples in which that specific human posture or gesture does not occur; N_ma is an integer greater than 1;
training, based on the third sample set and using a machine learning method, a model MA capable of recognizing a specific human body posture or gesture from a plurality of point cloud information sequences; the output target of the model MA is the probability that a human body target to be detected presents a specific posture or gesture.
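The sketch below mirrors only the data flow of steps S203 and S204: filter the cloud around each representative point reported by MO, slide a window of Npk scan groups through MK, then feed Nma consecutive MK outputs into MA. The two models are replaced by stubs, and the window lengths, filter radius and number of key points K are assumed values rather than ones fixed by the claims.

# Sketch of the S203/S204 inference flow with stubbed models MK and MA.
import numpy as np

N_PK, N_MA, RADIUS, KEYPOINTS = 5, 8, 1.2, 17   # assumed window lengths, radius and K

def filter_near(points, rep_point, radius=RADIUS):
    # S203 filtering: keep only points within a distance range of one representative point.
    keep = np.linalg.norm(points - rep_point, axis=1) <= radius
    return points[keep]

def model_MK(window):
    # Stub for MK: maps N_PK filtered clouds to (Pr_k, Ps_k) for each of K key points.
    merged = np.vstack([w for w in window if len(w)] or [np.zeros((1, 3))])
    pr = np.full(KEYPOINTS, 0.5)                        # existence probabilities
    ps = np.tile(merged.mean(axis=0), (KEYPOINTS, 1))   # crude key-point positions
    return pr, ps

def model_MA(mk_outputs):
    # Stub for MA: maps N_MA consecutive MK outputs to a posture/gesture probability.
    return float(np.mean([pr.mean() for pr, _ in mk_outputs]))

def run_pipeline(scan_groups, rep_points):
    # scan_groups: one (N, 3) cloud per RF scan group; rep_points: positions output by MO.
    results = []
    for rep in rep_points:                              # one pass per detected person
        filtered = [filter_near(g, rep) for g in scan_groups]
        mk_seq = [model_MK(filtered[i:i + N_PK])        # sliding window of length N_PK
                  for i in range(len(filtered) - N_PK + 1)]
        for j in range(len(mk_seq) - N_MA + 1):         # sliding window of length N_MA
            results.append((rep, model_MA(mk_seq[j:j + N_MA])))
    return results

rng = np.random.default_rng(1)
groups = [rng.uniform(-2, 2, size=(80, 3)) for _ in range(20)]
print(run_pipeline(groups, rep_points=[np.array([0.0, 0.0, 1.0])])[:2])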
4. The non-perception intelligent device control method of claim 1, further comprising the steps of obtaining environmental information, the environmental information comprising ambient temperature and/or light intensity, and collecting the environmental information as control information.
5. The non-perception intelligent device control method of claim 1, wherein in step S105, for each type of control information, the controlled devices with which that type of control information is registered are obtained; the control information of that type is distributed accordingly into a control information group for each registered controlled device; each control information group is then matched, according to the preset correspondence between control information and controlled device control instructions, to its corresponding controlled device to determine the corresponding control instruction; and the intelligent device is controlled using the determined control instruction.
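As an illustration of the distribution and matching described in claim 5, the sketch below groups control information by the controlled devices registered for it and looks up a preset correspondence table to obtain concrete instructions. The device names, the registration table and the threshold values are assumptions made for the example only.

# Sketch of claim 5: distribute control information to registered devices and match instructions.
REGISTRATIONS = {                 # which devices are registered for which information type
    "presence":    ["light_livingroom", "air_conditioner"],
    "temperature": ["air_conditioner"],
}
INSTRUCTION_TABLE = {             # preset correspondence: (device, info type) -> rule
    ("light_livingroom", "presence"):    lambda v: "ON" if v else "OFF",
    ("air_conditioner",  "presence"):    lambda v: "RUN" if v else "STANDBY",
    ("air_conditioner",  "temperature"): lambda v: "COOL" if v > 26 else "HOLD",
}

def dispatch(control_info):
    # control_info: {info_type: value}; returns {device: [instructions]}.
    commands = {}
    for info_type, value in control_info.items():
        for device in REGISTRATIONS.get(info_type, []):    # registered devices only
            rule = INSTRUCTION_TABLE[(device, info_type)]  # preset correspondence lookup
            commands.setdefault(device, []).append(rule(value))
    return commands

print(dispatch({"presence": True, "temperature": 28.5}))
# expected: {'light_livingroom': ['ON'], 'air_conditioner': ['RUN', 'COOL']}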
6. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 5.
7. A non-perception home control system, comprising:
an environment monitoring module for monitoring ambient temperature and/or light intensity to obtain environmental information;
a radio frequency detection module for performing radio frequency scanning of the controlled scene to obtain radio frequency scanning signals;
the electronic device of claim 6, for obtaining the environmental information; acquiring the radio frequency scanning signals; judging, according to the radio frequency scanning signals, whether a person is present in the scene and acquiring various kinds of human body information; and determining device control instructions according to control information comprising the environmental information, the presence or absence of a person, and the human body information; and
a communication module for sending the control instructions determined by the electronic device to the control modules of the controlled devices.
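A minimal sketch of how the four modules enumerated in claim 7 could fit together; the class and method names are illustrative assumptions, and a real system would place the sensor drivers, the radio frequency front end, the trained models of claims 1 to 3 and the network stack behind these interfaces.

# Sketch of the claim 7 module wiring with toy implementations.
class EnvironmentMonitor:
    """Environment monitoring module: ambient temperature and/or light intensity."""
    def read(self):
        return {"temperature": 27.0, "light_lux": 120.0}

class RFDetector:
    """Radio frequency detection module: one group of RF scan signals per call."""
    def scan(self):
        return [[0.1, 0.2, 1.1], [0.3, -0.4, 1.0]]   # toy point reflections

class ElectronicDevice:
    """Electronic device of claim 6: person detection and control decision."""
    def decide(self, env, point_cloud):
        person_present = len(point_cloud) > 0
        light_cmd = "ON" if person_present and env["light_lux"] < 150 else "OFF"
        return {"light_livingroom": light_cmd}

class CommunicationModule:
    """Communication module: forwards instructions to controlled device control modules."""
    def send(self, commands):
        for device, cmd in commands.items():
            print(f"-> {device}: {cmd}")

env_mod, rf_mod = EnvironmentMonitor(), RFDetector()
controller, comm = ElectronicDevice(), CommunicationModule()
comm.send(controller.decide(env_mod.read(), rf_mod.scan()))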
8. The non-perception home control system of claim 7, wherein the environment monitoring module, the radio frequency detection module, the electronic device and the communication module are integrated in a light fixture or a smoke sensor.
CN202010760967.5A 2020-07-31 2020-07-31 Non-perception intelligent device control method, electronic device and control system Active CN114063572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010760967.5A CN114063572B (en) 2020-07-31 2020-07-31 Non-perception intelligent device control method, electronic device and control system

Publications (2)

Publication Number Publication Date
CN114063572A (en) 2022-02-18
CN114063572B (en) 2024-05-31

Family

ID=80227711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010760967.5A Active CN114063572B (en) 2020-07-31 2020-07-31 Non-perception intelligent device control method, electronic device and control system

Country Status (1)

Country Link
CN (1) CN114063572B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115061380A (en) * 2022-06-08 2022-09-16 深圳绿米联创科技有限公司 Device control method and device, electronic device and readable storage medium
CN116991089B (en) * 2023-09-28 2023-12-05 深圳市微琪思网络有限公司 Intelligent control method and system for electric iron based on wireless connection

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109484935A (en) * 2017-09-13 2019-03-19 杭州海康威视数字技术股份有限公司 A kind of lift car monitoring method, apparatus and system
CN110045370A (en) * 2019-05-10 2019-07-23 成都宋元科技有限公司 Human perception method and its system based on millimetre-wave radar
CN110632849A (en) * 2019-08-23 2019-12-31 珠海格力电器股份有限公司 Intelligent household appliance, control method and device thereof and storage medium
CN110686376A (en) * 2019-09-18 2020-01-14 珠海格力电器股份有限公司 Air conditioner and fan combined control method based on human body sleeping posture recognition, computer readable storage medium and air conditioner
CN110728213A (en) * 2019-09-26 2020-01-24 浙江大学 Fine-grained human body posture estimation method based on wireless radio frequency signals
CN111126314A (en) * 2019-12-26 2020-05-08 杭州中科先进技术研究院有限公司 Passenger flow statistical visual method and system based on 3D point cloud data
CN111360819A (en) * 2020-02-13 2020-07-03 平安科技(深圳)有限公司 Robot control method and device, computer device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100951890B1 (en) * 2008-01-25 2010-04-12 성균관대학교산학협력단 Method for simultaneous recognition and pose estimation of object using in-situ monitoring

Also Published As

Publication number Publication date
CN114063572A (en) 2022-02-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant