CN116065329A - Control method and device for clothes treatment equipment - Google Patents

Control method and device for clothes treatment equipment

Info

Publication number
CN116065329A
CN116065329A (Application CN202111271514.7A)
Authority
CN
China
Prior art keywords
probability value
user
equipment
model
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111271514.7A
Other languages
Chinese (zh)
Inventor
万文鑫
许升
张信耶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Washing Machine Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Washing Machine Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Washing Machine Co Ltd, Haier Smart Home Co Ltd filed Critical Qingdao Haier Washing Machine Co Ltd
Priority to CN202111271514.7A priority Critical patent/CN116065329A/en
Publication of CN116065329A publication Critical patent/CN116065329A/en
Pending legal-status Critical Current

Classifications

    • D — TEXTILES; PAPER
    • D06 — TREATMENT OF TEXTILES OR THE LIKE; LAUNDERING; FLEXIBLE MATERIALS NOT OTHERWISE PROVIDED FOR
    • D06F — LAUNDERING, DRYING, IRONING, PRESSING OR FOLDING TEXTILE ARTICLES
    • D06F33/00Control of operations performed in washing machines or washer-dryers 
    • D06F33/30Control of washing machines characterised by the purpose or target of the control 
    • D06F33/32Control of operational steps, e.g. optimisation or improvement of operational steps depending on the condition of the laundry
    • D — TEXTILES; PAPER
    • D06 — TREATMENT OF TEXTILES OR THE LIKE; LAUNDERING; FLEXIBLE MATERIALS NOT OTHERWISE PROVIDED FOR
    • D06F — LAUNDERING, DRYING, IRONING, PRESSING OR FOLDING TEXTILE ARTICLES
    • D06F2105/00Systems or parameters controlled or affected by the control systems of washing machines, washer-dryers or laundry dryers
    • D06F2105/50Starting machine operation, e.g. delayed start or re-start after power cut

Landscapes

  • Engineering & Computer Science (AREA)
  • Textile Engineering (AREA)
  • Control Of Washing Machine And Dryer (AREA)

Abstract

The invention provides a control method and device for a laundry treatment apparatus, wherein the method comprises the following steps: acquiring a footstep sound audio signal of a user; performing feature extraction on the footstep sound audio signal to obtain extracted features corresponding to the footstep sound audio signal; inputting the extracted features into a pre-trained first model representing a user walking straight toward the apparatus to obtain a first probability value, inputting the extracted features into a pre-trained second model representing a user turning away after walking straight toward the apparatus to obtain a second probability value, and inputting the extracted features into a pre-trained third model representing a user passing by the apparatus to obtain a third probability value; and obtaining a recognition result for the footstep sound audio signal based on the first probability value, the second probability value and the third probability value, and sending a control instruction to the apparatus according to the recognition result. The method of the present invention accurately judges the user's willingness to use the laundry treatment apparatus.

Description

Control method and device for clothes treatment equipment
Technical Field
The invention belongs to the technical field of intelligent household appliances, and particularly relates to a control method and device for clothes treatment equipment.
Background
At present, intelligent household appliances offer a variety of intelligent control functions realized through various control methods, which greatly improve the convenience of operating these appliances.
An existing start-up control method for intelligent washing machines collects the user's footstep sounds and identifies them: if the footsteps are identified as approaching the washing machine, the user is determined to have a willingness to use the washing machine and the washing machine is started; if the footsteps are identified as moving away from the washing machine, the user is determined to have no such willingness and the washing machine remains off. The footstep sounds are identified mainly by comparing their loudness with a preset threshold and judging from the comparison whether the footsteps are approaching or moving away. Specifically, if the loudness of the footsteps is greater than or equal to the preset threshold, the footsteps are identified as approaching the washing machine; otherwise, if the loudness is below the threshold, they are identified as moving away from it.
This existing start-up control method determines user willingness inaccurately and is prone to misjudging it.
Disclosure of Invention
The invention provides a control method and a control device for a laundry treatment apparatus, aiming to solve the problem that the existing start-up control method for intelligent washing machines determines user willingness inaccurately and misjudges it.
In a first aspect, the present invention provides a control method of a laundry treatment apparatus, comprising:
acquiring a footstep sound audio signal of a user;
performing feature extraction on the footstep sound audio signal of the user to obtain extracted features corresponding to the footstep sound audio signal;
inputting the extracted features into a pre-trained first model representing a user walking straight toward the apparatus to obtain a first probability value, inputting the extracted features into a pre-trained second model representing a user turning away after walking straight toward the apparatus to obtain a second probability value, and inputting the extracted features into a pre-trained third model representing a user passing by the apparatus to obtain a third probability value;
and based on the first probability value, the second probability value and the third probability value, obtaining a recognition result of the footstep sound audio signal, and sending a control instruction to the equipment according to the recognition result.
Optionally, the performing feature extraction on the footstep sound audio signal of the user to obtain extracted features corresponding to the footstep sound audio signal includes:
performing a short-time Fourier transform on the footstep sound audio signal of the user to obtain a spectrogram corresponding to the footstep sound audio signal;
and inputting the spectrogram into a deep learning network to obtain a frequency variation feature corresponding to the footstep sound audio signal.
Optionally, the frequency variation feature is a curve of frequency over time obtained after numerical linear fitting.
Optionally, the inputting the extracted feature into a first model that is trained in advance and indicates that the user is going straight towards the device to obtain a first probability value, inputting the extracted feature into a second model that is trained in advance and indicates that the user is turning after going straight towards the device to obtain a second probability value, and inputting the extracted feature into a third model that is trained in advance and indicates that the user passes through the device to obtain a third probability value, which includes:
inputting the extracted features into a pre-trained first model representing the straight movement of a user towards the equipment, and carrying out similarity probability calculation on the extracted features and first identification features of the first model to obtain first probability values of the extracted features relative to the first identification features;
inputting the extracted features into a pre-trained second model for representing the steering of the user after the user moves straight towards the equipment, and performing similarity probability calculation on the extracted features and second identification features of the second model to obtain second probability values of the extracted features relative to the second identification features;
inputting the extracted features into a pre-trained third model representing the user passing through the equipment, and carrying out similarity probability calculation on the extracted features and third recognition features of the third model to obtain third probability values of the extracted features relative to the third recognition features.
Optionally, the obtaining the recognition result of the step sound audio signal based on the first probability value, the second probability value and the third probability value, and sending a control instruction to the device according to the recognition result, includes:
comparing the first probability value, the second probability value and the third probability value to obtain the maximum probability value, and comparing the maximum probability value with a preset threshold to obtain one of the following recognition results for the footstep sound audio signal and send the corresponding control instruction to the apparatus:
if the maximum probability value is greater than or equal to a preset threshold value and the maximum probability value is a first probability value, the footstep sound audio signal is a signal representing that a user moves straight towards the equipment, and correspondingly, a first control instruction is sent to the equipment, wherein the first control instruction is any one instruction of a starting instruction, a starting door opening instruction and other functional instructions;
if the maximum probability value is greater than or equal to the preset threshold and the maximum probability value is the second probability value, the footstep sound audio signal is a signal representing the user turning away after walking straight toward the apparatus, and accordingly a second control instruction, which is a null data instruction, is sent to the apparatus;
if the maximum probability value is greater than or equal to the preset threshold and the maximum probability value is the third probability value, the footstep sound audio signal is a signal representing the user passing by the apparatus, and accordingly the second control instruction is sent to the apparatus;
and if the maximum probability value is smaller than a preset threshold value, sending the second control instruction to the equipment.
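The decision rules above can be sketched in Python. The function name, the string return values and the 0.6 threshold are illustrative assumptions, since the patent leaves the preset threshold value and the instruction encoding unspecified:

```python
def recognize_and_dispatch(w1, w2, w3, threshold=0.6):
    """Map the three model probability values to a control instruction.

    Mirrors the decision rules above: only a confident first-model
    result ("walking straight toward the apparatus") triggers the
    first control instruction; every other case yields the null
    second control instruction.
    """
    w_max = max(w1, w2, w3)
    if w_max < threshold:
        return "NULL"    # below threshold: second (null) control instruction
    if w_max == w1:
        return "START"   # first control instruction (e.g. power on)
    return "NULL"        # turning away or passing by: null instruction
```

Only a confident first-model result starts the apparatus; low-confidence signals and both "no intention" gaits fall through to the null instruction, which is what prevents false wake-up.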
Optionally, after the second control instruction is sent to the apparatus because the maximum probability value is smaller than the preset threshold, the method further comprises:
using the footstep sound audio signal as an audio training sample to further train and update the pre-trained first model, second model and third model.
Optionally, the acquiring the footstep sound audio signal of the user comprises:
acquiring the footstep sound audio signal within a preset range threshold of the apparatus.
In a second aspect, the present invention provides a signal processing apparatus comprising:
a processor and a memory;
the memory stores the processor-executable instructions;
wherein the processor executes the executable instructions stored in the memory, causing the processor to perform the method described above.
In a third aspect, the present invention provides a storage medium having stored therein computer-executable instructions for performing the above-described method when executed by a processor.
In a fourth aspect, the invention provides a program product comprising a computer program which, when executed by a processor, implements the above method.
According to the control method and device for a laundry treatment apparatus provided by the invention, the user's footstep sound audio signal is acquired and feature extraction is performed on it to obtain the extracted features. The extracted features are then input into the pre-trained first, second and third models, which represent different user intentions, to obtain the first, second and third probability values respectively, and the recognition result for the footstep sound audio signal is obtained based on these three probability values. The user's willingness to use the laundry treatment apparatus is thereby judged accurately, precise control of the functional operation of the apparatus is achieved, and false wake-up of the apparatus by the user's footsteps is avoided.
Drawings
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. The attached drawings are as follows:
FIG. 1 is a diagram of a prior art intelligent washing machine start-up control scenario;
fig. 2 is a control system diagram of a laundry treating apparatus provided in an embodiment of the present invention;
fig. 3 is an effect schematic view of a control method of a laundry treating apparatus according to an embodiment of the present invention;
fig. 4 is a flowchart of a control method of a laundry treating apparatus provided in an embodiment of the present invention;
FIG. 5 is a graph of fitted frequency-versus-time curves for three footstep sound audio signals according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a signal processing device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
To avoid false wake-up of the washing machine, the existing start-up control method for intelligent washing machines typically judges the user's willingness to use the machine from the loudness of the collected footstep sounds. For example, the loudness of the collected footsteps is compared with a preset threshold of the washing machine: if the loudness is greater than or equal to the threshold, the user is considered willing to use the washing machine, and the machine is switched on for the user in advance; otherwise, if the loudness is below the threshold, the user is considered unwilling to use the machine, and the washing machine remains off even though it has collected the user's footsteps.
This existing start-up control method determines user willingness inaccurately and misjudges it. As shown in fig. 1, a view of a conventional intelligent washing machine start-up control scene, three exemplary scenarios A, B and C show a user walking near the washing machine. In scenario A, a user who needs the washing machine walks straight toward it; as the user gets closer, the loudness of the collected footsteps increases, the machine determines that the user wants to use it, and it starts up in advance. However, when the user turns away after walking straight toward the washing machine, or simply passes by it, as in scenarios B and C of fig. 1, the user does not want to use the machine but still passes close to it while walking, so the machine is falsely woken up and started. Under the existing method, the footsteps collected in scenarios B and C both approach the washing machine and therefore exceed the preset loudness threshold, so the machine starts even though the user has no intention of using it. Moreover, if a user far from the washing machine produces footsteps whose loudness still exceeds the threshold, the machine is likewise falsely woken up and started. The existing start-up control method therefore judges user willingness inaccurately.
In view of the above, the invention provides a control method for a laundry treatment apparatus, aiming to solve the inaccurate judgment of user willingness in the existing start-up control method for intelligent washing machines.
A control method of the laundry treatment apparatus according to the present invention will be described with reference to figs. 2 and 3. Fig. 2 is a control system diagram of a laundry treatment apparatus provided in an embodiment of the present invention. Fig. 3 is a schematic view of the effect of the control method of a laundry treatment apparatus according to an embodiment of the present invention. As shown in fig. 2, the control system comprises a laundry treatment apparatus 11 and a signal processing device 12, wherein the apparatus comprises a controller 111 and a collector 112. The signal processing device 12 may communicate with the controller 111 and the collector 112, respectively. Optionally, the signal processing device 12 may be deployed at the network edge cloud, or it may be mounted on the laundry treatment apparatus 11. Preferably, when mounted on the apparatus, the signal processing device 12 is wired to the controller 111 and the collector 112 so that its communication with them cannot be blocked by abnormal wireless network conditions. The collector 112 collects the footstep sound audio signals of users around the laundry treatment apparatus 11. The signal processing device 12 acquires the footstep sound audio signal from the collector 112, performs recognition processing on it to obtain a recognition result, determines from that result the user's willingness to use the apparatus 11, and sends the controller 111 a control instruction corresponding to the recognition result, so that the apparatus 11 executes the corresponding function.
Alternatively, the laundry treating apparatus 11 may be a washing machine or a laundry dryer, and the present invention is not particularly limited herein.
Specifically, the signal processing device 12 acquires the user's footstep sound audio signal and performs feature extraction on it to obtain the corresponding extracted features. The signal processing device 12 then inputs the extracted features into a pre-trained first model representing a user walking straight toward the apparatus to obtain a first probability value, into a pre-trained second model representing a user turning away after walking straight toward the apparatus to obtain a second probability value, and into a pre-trained third model representing a user passing by the apparatus to obtain a third probability value. Finally, the signal processing device 12 obtains the recognition result for the footstep sound audio signal based on the first, second and third probability values. Specifically, the recognition results include, but are not limited to, the following: the acquired footstep sound audio signal belongs to a user walking straight toward the apparatus; it belongs to a user turning away after walking straight toward the apparatus; or it belongs to a user passing by the apparatus. A signal of a user walking straight toward the apparatus indicates a willingness to use it, whereas a signal of a user turning away after walking straight toward it and a signal of a user passing by it both indicate no such willingness.
According to the recognition result, the signal processing device 12 sends the apparatus a corresponding control instruction, such as an instruction to start up or an instruction to perform no functional operation, achieving the scene effects shown in fig. 3. Illustratively, in scenario D of fig. 3, when the user walks straight toward the laundry treatment apparatus 11, the controller 111 switches the apparatus on. When the user turns away after walking straight toward the apparatus 11, the controller 111 keeps the apparatus off, as in scenario E of fig. 3. When the user passes by the apparatus 11, the controller 111 likewise keeps it off, as in scenario F of fig. 3.
By recognizing the footsteps of users around the laundry treatment apparatus, the control method provided by this embodiment of the invention accurately identifies and judges the user's willingness to use the apparatus, achieves precise control of its functional operation, and avoids false wake-up of the apparatus by the user's footsteps.
A control method of the laundry treating apparatus provided by the present invention will be described in detail with reference to fig. 4. Fig. 4 is a flowchart of a control method of a laundry treating apparatus provided in an embodiment of the present invention. The execution body of the embodiment shown in fig. 4 is the signal processing device 12 in the embodiment shown in fig. 2, and as shown in fig. 4, the method includes:
s101, acquiring a step sound frequency signal of a user;
specifically, the signal processing device 12 acquires the step audio signal of the user from the collector 112, and stores it in the signal processing device 12.
Optionally, the collector 112 collects the footstep sound audio signal only within a preset range threshold of the laundry treatment apparatus 11. Illustratively, a monitoring space range, such as an infrared monitoring range, may be set on the apparatus 11 in advance, and the collector 112 starts collecting the user's footsteps once the user enters that range. This avoids both the extra signal processing workload of collecting footsteps outside the monitoring range and the waste of the signal processing device 12's storage space on useless audio.
S102, performing feature extraction on the footstep sound audio signal of the user to obtain extracted features corresponding to the footstep sound audio signal;
specifically, the signal processing device 12 performs feature extraction on the footstep sound audio signal of the user acquired in step S101 and obtains the extracted features corresponding to the footstep sound audio signal.
Specifically, the signal processing device 12 performs a short-time Fourier transform on the user's footstep sound audio signal to obtain the corresponding spectrogram. Because the spectrogram describes the relationship between the frequency, time and intensity of the sound signal, the subsequent extraction of frequency information becomes more convenient. The signal processing device 12 then inputs the spectrogram into a deep learning network and obtains the frequency variation feature corresponding to the footstep sound audio signal through local frequency feature extraction and local frequency feature clustering.
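As a rough illustration of this step, the short-time Fourier transform can be computed with a plain NumPy sliding-window FFT. The 16 kHz sample rate, window length and hop size are assumptions for the sketch, not values taken from the patent:

```python
import numpy as np

def footstep_spectrogram(audio, fs=16000, win=512, hop=256):
    """Short-time Fourier transform of a footstep audio signal.

    Returns the frequency bins, frame times and the magnitude
    spectrogram that a downstream deep learning network would consume.
    """
    window = np.hanning(win)
    n_frames = 1 + (len(audio) - win) // hop
    frames = np.stack([audio[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + win / 2) / fs
    return freqs, times, spec

# Example: a synthetic one-second tone standing in for recorded footsteps.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440.0 * t)
freqs, times, spec = footstep_spectrogram(audio, fs)
```

Each column of `spec` is one time frame, so the frequency content of the footsteps can be tracked frame by frame, which is exactly the time-frequency-intensity relationship the paragraph above relies on.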
Further, the frequency variation feature may be a curve of frequency over time obtained after numerical linear fitting. Illustratively, fig. 5 shows the fitted frequency-time curves corresponding to three footstep sound audio signals provided by an embodiment of the present invention. As shown in fig. 5, when a user walks straight toward the laundry treatment apparatus 11 intending to use it, the step frequency stays constant at the user's habitual walking cadence while the user is far from the apparatus, and decreases as the user approaches, until the user stops. When a user turns away after walking straight toward the apparatus 11, the frequency stays constant while the user is far away, decreases as the user approaches and prepares to turn, and then returns to a steady cadence once the turn is complete. When a user merely passes by the apparatus 11, the walking frequency stays constant throughout. The frequency variation feature thus captures the pattern of footstep frequency changes caused by needing the apparatus or by avoiding it as an obstacle while walking. Because this pattern applies to any user, it eliminates the adverse effect that differences between users' footsteps (such as loudness differences) have on judging the willingness to use the apparatus in the prior art.
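The numerical linear fitting of frequency over time can be sketched with `numpy.polyfit`. The cadence values and the reading of the slopes below are illustrative assumptions based on the description of fig. 5:

```python
import numpy as np

def frequency_trend(times, freqs):
    """Fit a straight line to step frequency over time (numerical
    linear fitting) and return its slope in Hz per second.

    Under the assumed reading of fig. 5, a clearly negative slope
    (steps slowing to a stop) marks a user approaching the apparatus,
    while a near-zero slope matches a user merely passing by.
    """
    slope, _intercept = np.polyfit(times, freqs, 1)
    return slope

# An approaching user's cadence falls from 2.0 Hz toward a stop;
# a passing user's cadence stays flat.
t = np.linspace(0.0, 4.0, 20)
approaching = 2.0 - 0.4 * t
passing = np.full_like(t, 2.0)
```

Because the fit is on step frequency rather than loudness, it is insensitive to how heavily a particular user walks, which is the advantage the paragraph above claims over the loudness-threshold approach.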
S103, inputting the extracted features into a pre-trained first model representing a user walking straight toward the apparatus to obtain a first probability value, inputting the extracted features into a pre-trained second model representing a user turning away after walking straight toward the apparatus to obtain a second probability value, and inputting the extracted features into a pre-trained third model representing a user passing by the apparatus to obtain a third probability value;
specifically, the signal processing device 12 inputs the extracted features obtained in step S102 into the pre-trained first model representing a user walking straight toward the apparatus to obtain the first probability value, into the pre-trained second model representing a user turning away after walking straight toward the apparatus to obtain the second probability value, and into the pre-trained third model representing a user passing by the apparatus to obtain the third probability value.
Further, the signal processing device 12 inputs the frequency variation feature obtained in step S102 into the pre-trained first model representing a user walking straight toward the laundry treatment apparatus 11, performs a similarity probability calculation between the frequency variation feature and the first identification feature of the first model, and obtains the first probability value W1 of the frequency variation feature relative to the first identification feature.
Next, the signal processing device 12 inputs the frequency variation feature obtained in step S102 into the pre-trained second model representing a user turning away after walking straight toward the laundry treatment apparatus 11, performs a similarity probability calculation between the frequency variation feature and the second identification feature of the second model, and obtains the second probability value W2 of the frequency variation feature relative to the second identification feature.
Then, the signal processing device 12 inputs the frequency variation feature obtained in step S102 into the pre-trained third model representing a user passing by the laundry treatment apparatus 11, performs a similarity probability calculation between the frequency variation feature and the third identification feature of the third model, and obtains the third probability value W3 of the frequency variation feature relative to the third identification feature.
The first, second and third identification features may also be frequency-time curves corresponding to the numerical linear fit as shown in fig. 5.
Optionally, to improve the timeliness of the probability calculation, the signal processing device 12 may input the frequency variation feature obtained in step S102 into the pre-trained first, second and third models synchronously, and carry out the similarity probability calculations against the first, second and third identification features in parallel, obtaining the first probability value W1, the second probability value W2 and the third probability value W3 respectively.
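A minimal sketch of this synchronous scoring, using cosine similarity as a stand-in for the patent's unspecified similarity probability calculation and a thread pool for the concurrent model inputs (both are assumptions, not the patent's method):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def similarity_probability(feature, template):
    """Cosine similarity rescaled to [0, 1]; a stand-in for the
    similarity probability calculation against one identification
    feature."""
    cos = np.dot(feature, template) / (
        np.linalg.norm(feature) * np.linalg.norm(template))
    return 0.5 * (cos + 1.0)

def score_against_models(feature, identification_features):
    """Score one frequency variation feature against the three
    identification features concurrently, mirroring the synchronous
    input of the extracted feature into the three pre-trained models."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        return list(pool.map(
            lambda tpl: similarity_probability(feature, tpl),
            identification_features))

feature = np.array([1.0, 0.0])
templates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
             np.array([-1.0, 0.0])]
w1, w2, w3 = score_against_models(feature, templates)
```

Running the three comparisons in parallel rather than one after another is what the "synchronous input" above buys: the three probability values W1, W2 and W3 become available at essentially the same time.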
S104, based on the first probability value, the second probability value and the third probability value, obtaining a recognition result of the footstep sound audio signal, and sending a control instruction to the equipment according to the recognition result;
specifically, the signal processing device 12 determines and obtains the recognition result of the step sound audio signal based on the first probability value W1, the second probability value W2, and the third probability value W3 obtained in step S103. Then, the signal processing device 12 transmits a control instruction to the laundry treating apparatus 11 according to the recognition result, specifically, the signal processing device 12 transmits a control instruction to the controller 111 of the laundry treating apparatus 11 so that the controller 111 performs a corresponding functional operation.
Specifically, the signal processing device 12 determines and obtains the recognition result of the step sound audio signal by the following steps, based on the first probability value W1, the second probability value W2, and the third probability value W3 obtained in step S103:
first, the signal processing apparatus 12 compares the first probability value W1, the second probability value W2, and the third probability value W3 obtained in step S103 to obtain a maximum probability value thereof; then, the signal processing device 12 compares the maximum probability value selected by the comparison with the preset threshold value W0, obtains the following recognition result of the footstep sound audio signal, and sends the following corresponding control instruction to the laundry processing apparatus 11:
if the maximum probability value is greater than or equal to the preset threshold value W0 and the maximum probability value is the first probability value W1, the recognition result is: the step audio signal is a signal that the user moves straight toward the laundry treating apparatus 11. Accordingly, the signal processing device 12 sends the first control instruction to the controller 111 of the laundry treating apparatus 11, and the controller 111 controls the laundry treating apparatus 11 to perform corresponding functional operations, such as enabling the laundry treating apparatus 11 to start up as in the D scenario in fig. 3, based on the received first control instruction.
Optionally, the first control instruction is any one of a power-on instruction, a power-on door-opening instruction, or another functional instruction.
If the maximum probability value is greater than or equal to the preset threshold value W0 and the maximum probability value is the second probability value W2, the recognition result is: the step audio signal is a signal that the user turns after traveling straight toward the laundry treating apparatus 11. Accordingly, the signal processing device 12 transmits a second control instruction, specifically, a null data instruction, to the controller 111 of the laundry treating apparatus 11. The controller 111 does not perform any functional operation for controlling the laundry treating apparatus 11 based on the received second control instruction, such as maintaining the laundry treating apparatus 11 in the off state as in the E-scenario of fig. 3.
Similarly, if the maximum probability value is greater than or equal to the preset threshold value W0 and the maximum probability value is the third probability value W3, the recognition result is: the step audio signal is a signal that the user passes through the laundry treating apparatus 11. Accordingly, the signal processing device 12 transmits the second control instruction to the controller 111 of the laundry treating apparatus 11, and the controller 111 does not perform any functional operation on controlling the laundry treating apparatus 11 based on the received second control instruction, such as maintaining the laundry treating apparatus 11 in the off state as in the F-scenario of fig. 3.
If the maximum probability value is smaller than the preset threshold value W0, it is indicated that the step sound audio signal does not belong to any one of the signal of the user going straight toward the laundry treatment apparatus 11, the signal of the user turning after going straight toward the laundry treatment apparatus 11, and the signal of the user passing through the laundry treatment apparatus 11. Accordingly, the signal processing device 12 transmits the second control instruction to the controller 111 of the laundry treating apparatus 11, and the controller 111 does not perform any functional operation for controlling the laundry treating apparatus 11 based on the received second control instruction.
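The decision logic of the steps above — take the maximum of W1, W2, and W3, compare it against the preset threshold W0, and map the winner to a control instruction — can be sketched as follows. The threshold value and instruction names are illustrative assumptions, not values from the patent:

```python
from typing import Tuple

W0 = 0.9  # assumed preset threshold

POWER_ON = "first_control_instruction"   # e.g. power-on / power-on door-opening
NO_OP    = "second_control_instruction"  # null-data instruction: do nothing

def decide(w1: float, w2: float, w3: float, threshold: float = W0) -> Tuple[str, str]:
    """Map the three probability values to a recognition result and a control instruction."""
    probs = {"straight": w1, "turn": w2, "pass": w3}
    result = max(probs, key=probs.get)   # recognition result with the maximum probability
    if probs[result] < threshold:
        # Below threshold: the signal matches none of the three movement patterns.
        return "unrecognized", NO_OP
    # Only a straight approach expresses intent to use the appliance.
    return result, POWER_ON if result == "straight" else NO_OP
```

Note how only the "straight approach" branch powers the appliance on; turning away, passing by, and unrecognized signals all fall through to the null-data instruction, which is what prevents false wake-ups.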
Further, if the maximum probability value obtained by comparing the first probability value W1, the second probability value W2, and the third probability value W3 in step S103 is smaller than the preset threshold W0, then after the signal processing device 12 sends the second control instruction to the controller 111 of the laundry treatment apparatus 11, the signal processing device 12 uses the footstep sound audio signal as an audio training update sample to perform a training update on the pre-trained first model, second model, and third model.
Alternatively, the signal processing device 12 may perform training update on the pre-trained first model, second model and third model in the signal processing device 12 directly based on the audio training update samples.
Further, if the pre-trained first, second, and third models employed in the signal processing device 12 were acquired from a model training device, the signal processing device 12 may instead send the audio training update samples to the model training device, which performs the training update on the first, second, and third models used by the signal processing device 12. The signal processing device 12 may then obtain the updated first, second, and third models from the model training device for subsequent recognition of footstep sound audio signals.
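One possible way to organize the audio training update samples before handing them to the model training device is a simple bounded buffer. This is an illustrative sketch of that workflow, not a structure described in the patent:

```python
from typing import Any, List

class UpdateSampleBuffer:
    """Collects unrecognized footstep clips until enough exist for a retraining batch."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.samples: List[Any] = []

    def add(self, audio_features: Any) -> bool:
        """Store one sample; return True once the retraining batch is full."""
        self.samples.append(audio_features)
        return len(self.samples) >= self.capacity

    def drain(self) -> List[Any]:
        """Hand all samples to the model training device and clear the buffer."""
        batch, self.samples = self.samples, []
        return batch
```

Batching the low-confidence samples this way would let the signal processing device keep recognizing footsteps with its current models while the training device updates them asynchronously.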
According to the control method of the laundry treatment apparatus provided by the embodiment of the invention, the recognition result of the user's footstep sound audio signal is determined by extracting the frequency variation features of the user's footsteps around the laundry treatment apparatus and calculating the similarity probability values against the identification features of the models, and the user's intent to use the laundry treatment apparatus is then accurately judged based on the recognition result. In addition, because the method uses the frequency variation feature as the identification feature, it avoids the adverse effect that the variability of footstep sounds produced by different users (for example, in loudness) has on the judgment of user intent in the prior art. Compared with existing start-up control methods for intelligent washing machines, the control method provided by the embodiment of the invention recognizes and judges the intent to use more accurately, realizes precise control of the functional operation of the laundry treatment apparatus, and avoids false wake-up of the laundry treatment apparatus by the user's footsteps.
The embodiment of the invention also provides a signal processing device. Fig. 6 is a schematic structural diagram of a signal processing device according to an embodiment of the present invention. As shown in fig. 6, the signal processing device includes a processor 61 and a memory 62, where the memory 62 stores instructions executable by the processor 61, so that the processor 61 can execute the technical scheme of the above method embodiments; the implementation principle and technical effect are similar and are not repeated here. It should be understood that the processor 61 may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in a processor. The memory 62 may comprise a high-speed RAM memory, and may further comprise a non-volatile memory (NVM) such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, or the like.
The embodiment of the invention also provides a storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the control method of the laundry treatment apparatus described above is implemented. The storage medium may be implemented by any type of volatile or non-volatile memory device, or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (ASIC). Alternatively, the processor and the storage medium may reside as discrete components in an electronic device or a master control device.
The embodiment of the invention also provides a program product, such as a computer program, which, when executed by a processor, implements the control method of the laundry treatment apparatus described above.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A control method of a laundry treatment apparatus, comprising:
acquiring a footstep sound audio signal of a user;
extracting features of the footstep sound audio signals of the user to obtain extraction features corresponding to the footstep sound audio signals;
inputting the extracted features into a pre-trained first model representing the user moving straight towards the equipment to obtain a first probability value, inputting the extracted features into a pre-trained second model representing the user turning after moving straight towards the equipment to obtain a second probability value, and inputting the extracted features into a pre-trained third model representing the user passing through the equipment to obtain a third probability value;
and based on the first probability value, the second probability value and the third probability value, obtaining a recognition result of the footstep sound audio signal, and sending a control instruction to the equipment according to the recognition result.
2. The method according to claim 1, wherein the feature extraction of the step audio signal of the user to obtain the extracted feature corresponding to the step audio signal includes:
performing short-time Fourier transform on the step sound audio signals of the user to obtain a sound spectrum diagram corresponding to the step sound audio signals;
and inputting the spectrogram into a deep learning network to obtain the frequency change characteristics corresponding to the step sound audio signals.
3. The method of claim 2, wherein the frequency variation is characterized by a curve of frequency over time after a numerical linear fit.
4. The method of claim 1, wherein the inputting the extracted features into a pre-trained first model representing the user moving straight towards the equipment to obtain a first probability value, inputting the extracted features into a pre-trained second model representing the user turning after moving straight towards the equipment to obtain a second probability value, and inputting the extracted features into a pre-trained third model representing the user passing through the equipment to obtain a third probability value, comprises:
inputting the extracted features into a pre-trained first model representing the straight movement of a user towards the equipment, and carrying out similarity probability calculation on the extracted features and first identification features of the first model to obtain first probability values of the extracted features relative to the first identification features;
inputting the extracted features into a pre-trained second model for representing the steering of the user after the user moves straight towards the equipment, and performing similarity probability calculation on the extracted features and second identification features of the second model to obtain second probability values of the extracted features relative to the second identification features;
inputting the extracted features into a pre-trained third model representing the user passing through the equipment, and carrying out similarity probability calculation on the extracted features and third recognition features of the third model to obtain third probability values of the extracted features relative to the third recognition features.
5. The method according to any one of claims 1-4, wherein the obtaining the recognition result of the step sound audio signal based on the first probability value, the second probability value, and the third probability value, and sending a control instruction to the device according to the recognition result, includes:
comparing the first probability value, the second probability value and the third probability value to obtain a maximum probability value, comparing the maximum probability value with a preset threshold value to obtain the following identification result of the footstep sound audio signal, and sending the following corresponding control instruction to the equipment:
if the maximum probability value is greater than or equal to a preset threshold value and the maximum probability value is a first probability value, the footstep sound audio signal is a signal representing that a user moves straight towards the equipment, and correspondingly, a first control instruction is sent to the equipment, wherein the first control instruction is any one instruction of a starting instruction, a starting door opening instruction and other functional instructions;
if the maximum probability value is greater than or equal to the preset threshold value and the maximum probability value is the second probability value, the footstep sound audio signal is a signal representing that the user turns after moving straight towards the equipment, and correspondingly, a second control instruction is sent to the equipment, wherein the second control instruction is a null data instruction;
if the maximum probability value is greater than or equal to the preset threshold value and the maximum probability value is the third probability value, the footstep sound audio signal is a signal representing that the user passes through the equipment, and correspondingly, the second control instruction is sent to the equipment;
and if the maximum probability value is smaller than a preset threshold value, sending the second control instruction to the equipment.
6. The method according to claim 5, wherein after the sending the second control instruction to the device when the maximum probability value is smaller than the preset threshold value, the method further comprises:
and taking the footstep sound audio signal as an audio training sample, and training and updating the pre-trained first model, second model, and third model.
7. The method of any one of claims 1-4, wherein the capturing the user's step audio signal comprises:
and acquiring a footstep sound audio signal within a preset range threshold of the equipment.
8. A signal processing apparatus, comprising:
a processor and a memory;
the memory stores the processor-executable instructions;
wherein execution of the executable instructions stored by the memory by the processor causes the processor to perform the method of any one of claims 1-7.
9. A storage medium having stored therein computer-executable instructions which, when executed by a processor, are adapted to carry out the method of any one of claims 1 to 7.
10. A program product comprising a computer program which, when executed by a processor, implements the method of any of claims 1-7.
CN202111271514.7A 2021-10-29 2021-10-29 Control method and device for clothes treatment equipment Pending CN116065329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111271514.7A CN116065329A (en) 2021-10-29 2021-10-29 Control method and device for clothes treatment equipment


Publications (1)

Publication Number Publication Date
CN116065329A true CN116065329A (en) 2023-05-05

Family

ID=86175497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111271514.7A Pending CN116065329A (en) 2021-10-29 2021-10-29 Control method and device for clothes treatment equipment

Country Status (1)

Country Link
CN (1) CN116065329A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740846A (en) * 2023-08-02 2023-09-12 深圳零和壹物联科技有限公司 RFID intelligent top-mounted access control terminal control method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination