CN114143359B - Control method, equipment and system of Internet of things equipment - Google Patents

Control method, equipment and system of Internet of things equipment

Info

Publication number
CN114143359B
CN114143359B (application CN202111266247.4A)
Authority
CN
China
Prior art keywords
scene
control
internet
determining
things
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111266247.4A
Other languages
Chinese (zh)
Other versions
CN114143359A (en)
Inventor
Wang Bo (王波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd and Haier Smart Home Co Ltd
Priority to CN202111266247.4A
Publication of CN114143359A
Application granted
Publication of CN114143359B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiments of the present application disclose a control method, device, and system for internet of things devices. The method comprises the following steps: determining the controlled devices in each control scene, where the controlled devices comprise one or more internet of things devices; determining the position distribution of the controlled devices of each control scene within a preset area; determining, according to the position distribution, the scene identifier corresponding to each control scene; and determining a target scene identifier according to a voice input, and controlling the controlled devices in the control scene corresponding to the target scene identifier. The method simplifies the voice commands a user issues when controlling devices and improves the user experience.

Description

Control method, equipment and system of Internet of things equipment
Technical Field
The application relates to the field of the internet of things, and in particular to a control method, device, and system for internet of things devices.
Background
With the continued development of internet of things technology, internet of things devices such as intelligent home appliances have become increasingly popular. Intelligent home appliances are household appliances that incorporate microprocessors, sensor technology, network communication technology, and the like; common examples include lamps, air conditioners, refrigerators, and speakers.
To make the control of the various internet of things devices more intelligent, a user can control the state of a device through voice. Typically, various control scenes are set for the internet of things devices; when different control scenes are executed through user voice, the same device can be placed in different working states, or different devices can be controlled.
Currently, to control devices under different scenes, voice keywords representing the user's action and the device's location are generally set, and the control scene is obtained by recognizing the user's voice, thereby completing the control of the device. However, this voice control manner requires the user's voice to provide both the user action information and the device's location; otherwise, the wrong device may be controlled. As a result, the user's voice commands become more complex, and the user experience is degraded.
Disclosure of Invention
In view of this, the embodiments of the present application provide a control method, device, and system for internet of things devices, which simplify the voice commands a user issues when controlling devices and improve the user experience.
In a first aspect, the present application provides a control method of an internet of things device, where the method includes:
determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment;
determining the position distribution of controlled equipment in each control scene in a preset area;
determining scene identifiers corresponding to the control scenes respectively according to the position distribution;
and determining a target scene identifier according to voice input, and controlling controlled equipment in a control scene corresponding to the target scene identifier.
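The four steps above can be sketched as a short program. This is a minimal illustrative sketch, not the patent's implementation: all function names, the device and room names, and the simple "one sub-area takes its own name, otherwise the whole area's name" rule are assumptions.

```python
# Hypothetical sketch of the claimed four steps; names and the fallback
# identifier "whole house" are illustrative assumptions.

def determine_scene_ids(scenes, device_locations):
    """Steps 1-3: map each control scene to a scene identifier derived
    from the sub-areas its controlled devices occupy."""
    scene_ids = {}
    for scene, devices in scenes.items():
        sub_areas = {device_locations[d] for d in devices}
        # One sub-area: name the scene after it; several: after the whole area.
        scene_ids[scene] = sub_areas.pop() if len(sub_areas) == 1 else "whole house"
    return scene_ids

def control_by_voice(voice_text, scene_ids, scenes):
    """Step 4: match a scene identifier in the voice input and return the
    controlled devices of the corresponding control scene."""
    for scene, identifier in scene_ids.items():
        if identifier in voice_text:
            return scenes[scene]
    return []

# Example data modeled on the scenes described later in the text.
scenes = {
    "child sleep": ["child desk lamp", "child night light"],
    "parents get up": ["main pendant", "living tv", "study speaker"],
}
device_locations = {
    "child desk lamp": "child room", "child night light": "child room",
    "main pendant": "master bedroom", "living tv": "living room",
    "study speaker": "study",
}
scene_ids = determine_scene_ids(scenes, device_locations)
```

Note that in this sketch the user's voice only needs to contain a scene identifier, not the location of each device.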
In the prior art, when a user controls a device through voice, the user's voice must simultaneously provide the user action information and the location of the device; otherwise, the wrong device may be controlled.
In the embodiment of the present application, the controlled devices (comprising one or more internet of things devices) in each control scene are determined, along with their position distribution within a preset area; the scene identifier corresponding to each control scene is then determined according to that position distribution. When device control is performed through voice, a target scene identifier is determined according to the voice input, and the controlled devices in the corresponding control scene are controlled. Because the scene identifier of a control scene is determined from the position distribution of its controlled devices within the preset area, the identifier carries the position information of those devices. Therefore, when a user controls a device through voice, the voice need not contain the device's location, which simplifies the user's voice commands and improves the user experience.
In a possible implementation manner, the determining, according to the location distribution, a scene identifier corresponding to each control scene respectively includes:
obtaining the number of sub-areas of a preset area where the controlled equipment in each control scene is located according to the position distribution;
and determining scene identification of each control scene according to the number of the subareas.
In a possible implementation manner, the determining, according to the number of the sub-areas, a scene identifier of each control scene includes:
when the number of the subareas of the preset area where the controlled equipment in the control scene is located is one, determining a scene identification of the control scene according to the identification of the subareas;
when the number of the sub-areas of the preset area where the controlled equipment in the control scene is located is larger than a number threshold, determining the scene identification of the control scene according to the identification of the preset area, wherein the number threshold is larger than or equal to 2.
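The two cases above can be expressed as a single rule. A sketch under stated assumptions: the function name, the default threshold of 2, and the `None` fallback for counts between the two cases are illustrative.

```python
def scene_identifier(sub_areas, preset_area_name, count_threshold=2):
    """Case one: a single sub-area names the scene after itself.
    Case two: more sub-areas than the threshold (which is >= 2 per the
    text) name the scene after the whole preset area."""
    if len(sub_areas) == 1:
        return next(iter(sub_areas))
    if len(sub_areas) > count_threshold:
        return preset_area_name
    return None  # intermediate counts are handled by other implementations
```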
In a possible implementation manner, the determining, according to the location distribution, a scene identifier corresponding to each control scene respectively includes:
obtaining the number of sub-areas of a preset area where the controlled equipment in each control scene is located and the number distribution of the Internet of things equipment in each control scene in each sub-area according to the position distribution;
when the number of the subareas of the preset area where the controlled equipment in the control scene is located meets the preset condition, determining the scene identification of the control scene according to the number distribution of the controlled equipment in the control scene in each subarea.
In a possible implementation manner, the determining, according to the number distribution of the controlled devices in the control scene in each sub-area, the scene identifier of the control scene includes:
determining a subarea with the largest quantity of the Internet of things equipment in the control scene according to the quantity distribution of the controlled equipment in the control scene in each subarea;
and determining the scene identification of the control scene according to the identification of the sub-region with the largest number of the controlled devices.
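A minimal sketch of the max-count rule just described; the function name and room names are assumptions, not the patent's API.

```python
from collections import Counter

def busiest_sub_area(device_locations):
    """Return the sub-area containing the most controlled devices of a
    scene; its identification then determines the scene identifier."""
    counts = Counter(device_locations.values())
    sub_area, _ = counts.most_common(1)[0]
    return sub_area
```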
In a possible implementation manner, the determining, according to the number distribution of the controlled devices in the control scene in each sub-area, the scene identifier of the control scene includes:
obtaining a target sub-region according to the quantity distribution of the controlled devices in the control scene in each sub-region and the types of the controlled devices in the control scene in each sub-region;
and determining the scene identification of the control scene according to the identification of the target sub-region.
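One way to combine device counts with device types, as this implementation suggests, is to weight the count by type. The weights below are invented for illustration; the text does not specify how device types influence the choice of the target sub-region.

```python
from collections import Counter

# Hypothetical type weights; the patent leaves the weighting open.
TYPE_WEIGHT = {"lamp": 1, "television": 2, "speaker": 2}

def target_sub_area(devices):
    """devices: iterable of (sub_area, device_type) pairs. The sub-area
    with the highest type-weighted device count becomes the target."""
    scores = Counter()
    for sub_area, dev_type in devices:
        scores[sub_area] += TYPE_WEIGHT.get(dev_type, 1)
    sub_area, _ = scores.most_common(1)[0]
    return sub_area
```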
In one possible implementation, the method further includes:
and when receiving a modification instruction of a user for the scene identifier, modifying the scene identifier according to the modification instruction.
In one possible implementation, the determining the target scene identifier according to the voice input includes:
determining a user location based on the voice input;
and determining a target scene identification according to the user position.
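A sketch of this implementation under an explicit assumption: the user's position is inferred from which sub-area's microphone recorded the strongest signal. The patent leaves the localization method open, so `locate_user` is purely illustrative.

```python
def locate_user(mic_energies):
    """Assume the speaker is in the sub-area whose microphone picked up
    the loudest signal (an illustrative localization heuristic)."""
    return max(mic_energies, key=mic_energies.get)

def target_scene_id(mic_energies, scene_ids):
    """Prefer the control scene whose identifier matches the user's
    sub-area; return None when no scene matches."""
    location = locate_user(mic_energies)
    return location if location in scene_ids.values() else None
```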
In a second aspect, the present application provides an internet of things gateway device, configured to execute the control method of the internet of things device according to any one of the implementations above, so as to control internet of things devices.
In a third aspect, the present application provides an internet of things system, which includes the internet of things gateway device described above and one or more internet of things devices.
Drawings
Fig. 1 is a schematic structural diagram of an internet of things system according to an embodiment of the present application;
fig. 2 is a flowchart of a control method of an internet of things device according to an embodiment of the present application;
fig. 3 is a flowchart of a control method of an internet of things device according to another embodiment of the present application;
fig. 4 is a flowchart of a control method of an internet of things device according to another embodiment of the present application;
fig. 5 is a flowchart of a control method of an internet of things device according to another embodiment of the present application.
Detailed Description
In order to facilitate understanding of the technical solutions provided by the embodiments of the present application, the following describes a method, an apparatus, and a system for determining a scene identifier provided by the embodiments of the present application with reference to the accompanying drawings.
While exemplary embodiments of the present application are shown in the drawings, it should be understood that the application may be embodied in various forms and should not be limited to the embodiments set forth herein. Based on the embodiments herein, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of the application.
In the claims and specification of this application and in the drawings of the specification, the terms "comprise" and "have" and any variations thereof, are intended to cover a non-exclusive inclusion.
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, first, application scenarios common to the embodiments of the present application are described.
To make the control of internet of things devices more intelligent, a user can control the state of a device through voice. The internet of things devices are typically home devices. In general, one or more home devices are provided with various control scenes; when different control scenes are executed through user voice, the same home device can be placed in different working states, or different devices can be controlled.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, where an internet of things device may include a home device.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The control method described below is executed by the internet of things gateway device 101 to control the one or more internet of things devices.
In fig. 1, the internet of things system 100 includes an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical applications, the internet of things system may include one, two, or more than three internet of things devices.
In the prior art, when a user controls a device through voice, the user's voice must simultaneously provide the user action information and the location of the device; otherwise, the wrong device may be controlled.
Based on the above, in the embodiments provided by the applicant, the controlled devices (comprising one or more internet of things devices) in each control scene are determined, along with their position distribution within a preset area; the scene identifier corresponding to each control scene is determined according to that position distribution; and when device control is performed through voice, a target scene identifier is determined according to the voice input, and the controlled devices in the corresponding control scene are controlled.
Since the scene identifier of a control scene is determined according to the position distribution of the controlled devices within the preset area, the identifier carries the position information of the controlled devices in that scene. Therefore, when the user controls a device through voice, the voice need not contain the device's location, which simplifies the user's voice commands and improves the user experience.
Referring to fig. 2, fig. 2 is a flowchart of a control method of an internet of things device according to an embodiment of the present application.
As shown in fig. 2, the control method of the internet of things device in the embodiment of the present application includes S201-S204.
S201, determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment.
In S201, a control scene refers to a scene in which internet of things devices are controlled. When different control scenes are executed through the user's voice, the same device can be placed in different working states, or different devices can be controlled.
The controlled equipment in the control scene refers to the Internet of things equipment with the working state controlled when the control scene is executed. The controlled devices may include one or more internet of things devices.
To meet users' different device-control demands, one or more control scenes are provided; what S201 determines is the set of controlled devices belonging to each control scene. In some possible cases, the internet of things devices may be home devices such as air conditioners, lamps, fans, and speakers.
S202, determining the position distribution of the controlled equipment in each control scene in a preset area.
In S202, the location distribution refers to a distribution of locations where the controlled devices are located in space.
In some possible cases, the preset area may be an area corresponding to a home house, and specifically may include an area corresponding to one or more rooms. One or more home devices are distributed within a room in a house.
The controlled devices in each control scene are respectively distributed in the preset area, so that each control scene corresponds to the position distribution. S202 determines the above-described position distribution corresponding to each control scene.
S203, determining scene identifiers corresponding to the control scenes respectively according to the position distribution.
In S203, each control scene is given its own scene identifier, which is used to identify that control scene.
For each control scene, the scene identifier carries the position information of the devices in the scene, since the identifier is obtained from the position distribution described above.
S204, determining a target scene identifier according to voice input, and controlling controlled equipment in a control scene corresponding to the target scene identifier.
In S204, the target scene identifier is obtained from a voice input and corresponds to the voice input. The target scene identification and the control scene have a corresponding relationship.
Based on S201-S204, the scene identifier of each control scene is obtained according to the position distribution of that scene's controlled devices within the preset area. For each control scene, there is a correspondence between the scene identifier and the control scene, and the identifier contains the position information of the controlled devices. When an internet of things device is controlled through voice, the target scene identifier is determined from the voice input, and the controlled devices of the corresponding control scene are obtained through this correspondence and controlled. Because the scene identifier already contains the position information of the devices, the user's voice does not need to include it, which simplifies the user's voice commands and improves the user experience.
In addition, whereas in the prior art the scene identifier is generally chosen by the user, the processes of S201-S204 can be completed by a machine, for example the internet of things gateway device. The embodiments of the present application therefore improve, to a certain extent, the degree of automation in generating scene identifiers for control scenes, further improving the user experience.
In some possible cases, determining the controlled devices of each control scene, and/or determining their position distribution within the preset area, may be performed when a control scene is newly enabled. The new control scene may be enabled by the user, or enabled automatically, for example by the internet of things gateway device.
In one possible implementation manner, in order to further improve the use experience of the user, after determining, according to the location distribution, the scene identifier corresponding to each control scene respectively, the method may further include: and when receiving a modification instruction of a user for the scene identifier, modifying the scene identifier according to the modification instruction.
The user's instruction for modifying the scene identifier refers to an instruction issued by the user for modifying the scene identifier.
To make voice control more accurate and further improve the user experience, a modification instruction issued by the user takes priority when determining the scene identifier of a control scene. Thus, when a modification instruction is received, the scene identifier is modified according to that instruction.
In some possible cases, the receiving of the modification instruction may be implemented by an application program (APP) of the intelligent terminal.
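The priority of the user's modification instruction can be sketched as a simple override of the automatically generated mapping; the function and argument names are assumptions.

```python
def apply_modification(scene_ids, scene, new_identifier):
    """A user-issued modification instruction (e.g. from a companion app)
    overrides the automatically generated scene identifier."""
    updated = dict(scene_ids)        # keep the original mapping intact
    updated[scene] = new_identifier  # the user's choice takes priority
    return updated
```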
Another embodiment of the present application is provided below, which provides a possible implementation manner of determining a scene identifier of a control scene according to a position distribution of a controlled device in a preset area.
In one possible implementation manner, according to the position distribution, determining the scene identifier corresponding to each control scene respectively may be implemented in the following manner:
obtaining the number of sub-areas of the preset area where the controlled equipment in each control scene is located according to the position distribution;
and determining scene identification of each control scene according to the number of the subareas.
Referring to fig. 3, fig. 3 is a flowchart of a control method of an internet of things device according to another embodiment of the present application. As shown in fig. 3, the control method of the internet of things device in the embodiment of the present application includes S301-S304.
S301, determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more pieces of Internet of things equipment;
s302, determining the position distribution of controlled equipment in each control scene in a preset area;
s303, obtaining the number of sub-areas of the preset area where the controlled equipment in each control scene is located according to the position distribution;
in S303, the sub-region refers to a region included in a preset region, that is, the preset region includes one or more sub-regions. For example, the preset area is the entire house, and the sub-areas are the respective rooms in the entire house.
The controlled devices are distributed in a preset area, and particularly the controlled devices are distributed in a subarea of the preset area. For example, the preset area is the whole house, the subareas are the rooms in the whole house, and the household devices are distributed in the rooms.
The distribution of controlled devices in the sub-areas may be different or the same for different control scenarios. For example, for a "sleeping" scenario, the controlled device may be a light located in a bedroom, while for a "getting up" scenario, the controlled device may be a light located in the bedroom and a sound box located in the living room.
The number of sub-areas of the preset area where the controlled devices in a control scene are located refers to how many sub-areas of the preset area contain those controlled devices. For example, in the "sleeping" scene above, the controlled device is only in the bedroom, so the number of sub-areas is 1; in the "getting up" scene, the controlled devices are in the bedroom and the living room, so the number of sub-areas is 2.
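Counting the sub-areas, as in the "sleeping" and "getting up" examples, reduces to counting distinct locations; a minimal sketch with assumed names:

```python
def sub_area_count(scene_devices, device_locations):
    """Number of distinct sub-areas occupied by a scene's controlled
    devices ("sleeping" -> 1, "getting up" -> 2 in the text's examples)."""
    return len({device_locations[d] for d in scene_devices})
```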
S303 obtains the number of sub-areas corresponding to each control scene.
S304, determining scene identification of each control scene according to the number of the subareas.
In S304, for each control scene, the scene identification of the control scene is determined according to the number of sub-areas in S303 of the control scene.
Based on S301-S304, the number of sub-areas describes how the controlled devices in a control scene are distributed among the sub-areas. For each control scene, this distribution is part of the position information of the controlled devices, so the number of sub-areas provides a reasonable basis for determining the target control scene during voice control.

In one possible implementation, determining the scene identifier of each control scene according to the number of sub-areas can be realized as follows. The implementation is divided into the following two cases:
in the first case, when the number of sub-areas of a preset area where the controlled device in the control scene is located is one, determining the scene identification of the control scene according to the identification of the sub-areas.
When the number of sub-areas of the preset area where the controlled devices in a control scene are located is one, all of the scene's controlled devices are in the same sub-area. In this case, that sub-area has a strong correlation with the control scene, while the other sub-areas have little correlation.
The identification of a sub-area is used to identify and distinguish sub-areas. In some possible cases, the identification of a sub-area may be its name. For example, when the preset area is the whole house, the sub-areas are the different rooms in the house, and the identifications of the sub-areas may be room names such as "child room" and "master bedroom".
The following is an example of case one. The preset area is the whole house.
For example, the control scene is a "child sleep" scene.
Executing the "child sleep" scene controls the home devices as follows: the desk lamp in the child's room is turned off, the night light in the child's room is turned off, and the game machine in the child's room is turned off. The controlled devices in the "child sleep" scene thus include the desk lamp, the night light, and the game machine in the child's room.
These controlled devices are all located in the child's room of the house; that is, the controlled devices in the "child sleep" scene are in the same sub-area of the preset area, and the number of sub-areas is one.
The scene identifier of the "child sleep" scene is therefore determined as the name of the sub-area, "child room".
For example, the control scene is a "parent sleep" scene.
Executing the "parent sleep" scene controls the home devices as follows: the pendant lamp in the master bedroom is turned off, and the television in the master bedroom is turned off. The controlled devices in the "parent sleep" scene include the pendant lamp and the television in the master bedroom.
These controlled devices are all located in the master bedroom of the house; that is, the controlled devices in the "parent sleep" scene are in the same sub-area of the preset area, and the number of sub-areas is one. The scene identifier of the "parent sleep" scene is determined as the name of the sub-area, "master bedroom".
And in the second case, when the number of the sub-areas of the preset area where the controlled equipment in the control scene is located is larger than a number threshold, determining the scene identification of the control scene according to the identification of the preset area, wherein the number threshold is larger than or equal to 2.
The number of sub-areas of a preset area where the controlled devices in the control scene are located is larger than a number threshold, which means that the controlled devices in the control scene are located in two or more sub-areas.
In this case, two or more sub-areas are correlated with the control scene; considering real-life scenarios, the control scene is then most likely correlated with the whole preset area.
The identification of the preset area is used for identifying the preset area. In some possible cases, the identification of the preset area may be the name of the preset area. For example, when the preset area is the whole house, the identification of the preset area may be "whole house".
The following is an example of case two. The preset area is the whole house.
For example, the control scene is a "parent gets up" scene. The number threshold is set to two.
Executing the "parent get up" scene controls the household devices as follows: the pendant lamp in the master bedroom is turned on, the television in the living room is turned on, and the speaker in the study is turned on. The controlled devices in the "parent get up" scene include: the pendant lamp in the master bedroom, the television in the living room, and the speaker in the study.
The controlled devices above are located in three sub-areas of the house: the master bedroom, the living room, and the study. The number of sub-areas is three, which is greater than the number threshold.
The scene identifier of the "parent get up" scene is therefore determined as "whole house", the name of the preset area.
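The two cases above can be sketched as follows. This is a minimal illustration rather than the claimed implementation; the function name `scene_identifier` and the fallback for sub-area counts between one and the threshold are assumptions for the example.

```python
def scene_identifier(controlled_device_areas, preset_area_name, number_threshold=2):
    """Pick a scene identifier from where a scene's controlled devices sit.

    controlled_device_areas: one sub-area name per controlled device.
    """
    sub_areas = set(controlled_device_areas)
    if len(sub_areas) == 1:
        # Case one: every controlled device is in the same sub-area,
        # so that sub-area's name becomes the scene identifier.
        return sub_areas.pop()
    if len(sub_areas) >= number_threshold:
        # Case two: the devices span at least `number_threshold` sub-areas,
        # so the preset area's name (e.g. "whole house") is used instead.
        return preset_area_name
    # Counts between one and the threshold are not specified by the
    # embodiment; fall back to the preset area name here.
    return preset_area_name

# "parent sleep": pendant lamp and television, both in the master bedroom.
print(scene_identifier(["master bedroom", "master bedroom"], "whole house"))
# "parent get up": devices in the master bedroom, living room and study.
print(scene_identifier(["master bedroom", "living room", "study"], "whole house"))
```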
Another embodiment of the present application is provided below, which gives an implementation of determining the scene identifier of each control scene according to the number of sub-areas:
obtaining, according to the position distribution, the number of sub-areas of the preset area where the controlled devices in each control scene are located, and the number distribution of the controlled devices in each control scene within each sub-area;
when the number of sub-areas of the preset area where the controlled devices in a control scene are located meets a preset condition, determining the scene identifier of that control scene according to the number distribution of the controlled devices in the control scene within each sub-area.
Referring to fig. 4, fig. 4 is a flowchart of a control method of an internet of things device according to another embodiment of the present application. As shown in fig. 4, the control method of the internet of things device in the embodiment of the present application includes S401-S404.
S401, determining the controlled devices in each control scene, wherein the controlled devices comprise one or more internet of things devices;
S402, determining the position distribution of the controlled devices in each control scene within a preset area;
S403, obtaining, according to the position distribution, the number of sub-areas of the preset area where the controlled devices in each control scene are located, and the number distribution of the controlled devices in each control scene within each sub-area;
in S403, the position distribution refers to the position distribution of the controlled devices in each of the control scenes within the preset area.
The number distribution of the controlled devices in the control scene in each sub-area refers to how the controlled devices in the control scene are distributed in each sub-area, specifically, since the controlled devices include one or more internet of things devices, the number distribution refers to the case of the number of the internet of things devices contained in each sub-area.
S404, when the number of sub-areas of a preset area where the controlled equipment in the control scene is located meets a preset condition, determining a scene identifier of the control scene according to the number distribution of the controlled equipment in the control scene in each sub-area.
Generally, the more internet of things devices a sub-area contains, the greater the correlation between that sub-area and the control scene. The number distribution therefore carries the position information of the controlled devices and can reflect the correlation between each sub-area and the control scene.
In some possible cases, the preset condition may be set such that the number of sub-areas is within a preset range.
In one possible implementation, determining the scene identifier of the control scene according to the number distribution of the controlled devices in the control scene within each sub-area may be implemented as follows:
determining, according to the number distribution of the controlled devices in the control scene within each sub-area, the sub-area containing the largest number of internet of things devices in the control scene;
and determining the scene identifier of the control scene according to the identifier of that sub-area.
For a control scene, the sub-area with the largest number of internet of things devices in the control scene is the sub-area of the preset area that contains the most internet of things devices belonging to that control scene, those internet of things devices being the controlled devices in the control scene.
For a control scene, the number of internet of things devices may be obtained by ordinary counting: within a sub-area, one internet of things device counts as one, two internet of things devices count as two, and so on. It will be appreciated that other counting schemes may also be adopted.
In some possible cases, the sub-area containing the largest number of the internet of things devices in the control scene may be more than one, and at this time, the scene identifier of the control scene may be determined according to the identifier of the preset area.
In some possible cases, the sub-area containing the largest number of the internet of things devices in the control scene may be more than one, and at this time, the scene identifier of the control scene may also be determined according to the identifiers of the plurality of sub-areas.
For explanation and examples of the identifier of the preset area and the identifier of a sub-area, see the description above.
For a control scene, generally, when the control scene is executed, the more internet of things devices a sub-area contains, the greater the correlation between that sub-area and the control scene. The identifier of the sub-area containing the largest number of internet of things devices in the control scene can therefore reflect both the information of the control scene and the position information of the controlled devices.
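The selection described above, namely building the per-sub-area count distribution (S403) and keeping the sub-area with the most devices, can be sketched as below. This is a minimal sketch under assumed names; the tie case falls back to the preset area's identifier, one of the two options mentioned above.

```python
from collections import Counter

def identifier_from_distribution(controlled_device_areas, preset_area_name):
    # S403: the number distribution is how many of the scene's internet of
    # things devices fall in each sub-area.
    distribution = Counter(controlled_device_areas)
    top = max(distribution.values())
    busiest = [area for area, n in distribution.items() if n == top]
    if len(busiest) == 1:
        # A unique sub-area holds the most devices: use its identifier.
        return busiest[0]
    # Several sub-areas tie for the largest count: fall back to the
    # preset area's identifier.
    return preset_area_name

# Two devices in the master bedroom, one in the living room.
print(identifier_from_distribution(
    ["master bedroom", "master bedroom", "living room"], "whole house"))
# One device each in the study and the living room: a tie.
print(identifier_from_distribution(["study", "living room"], "whole house"))
```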
In one possible implementation manner, in order to make the scene identifier of the control scene reflect the correlation between the sub-region and the control scene more accurately, determining the scene identifier of the control scene according to the number distribution of the controlled devices in the control scene in each sub-region may be implemented by:
obtaining a target sub-region according to the quantity distribution of the controlled devices in the control scene in each sub-region and the types of the controlled devices in the control scene in each sub-region;
and determining the scene identification of the control scene according to the identification of the target sub-region.
The controlled devices in a control scene may comprise more than one kind of internet of things device. For example, when the controlled devices in the control scene are a desk lamp and a game machine, the desk lamp and the game machine are of different kinds, and the number of kinds of controlled devices in the control scene is two.
According to the above description, the number distribution of the controlled devices in each sub-area is used for reflecting the correlation between the sub-area and the control scene.
For different kinds of internet of things devices, the device count reflects the correlation to different degrees. When the count is obtained by ordinary counting (one internet of things device counts as one, two count as two, and so on), it may not reflect the correlation accurately, because the same count may indicate different degrees of correlation for different kinds of devices. Therefore, the target sub-area can be determined from both the number distribution of the controlled devices in each sub-area and the kinds of the internet of things devices in the control scene, and the identifier of the target sub-area can be used as the basis for determining the scene identifier.
For example, the preset area may be the whole house, and its sub-areas may include the child's room and the master bedroom. The control scene is a "get up" scene, and executing the "get up" scene controls the devices as follows: the first desk lamp in the child's room is turned on, the second desk lamp in the child's room is turned on (the first desk lamp and the second desk lamp are two different controlled devices), and the pendant lamp in the master bedroom is turned on.
In the above example, the number distribution of the controlled devices in each sub-area is: two in the child's room and one in the master bedroom. According to this number distribution, the sub-area with the largest number of internet of things devices in the control scene is the child's room.
However, if the scene identifier of the "get up" scene were determined according to the identifier of the child's room alone, control that does not meet the user's needs might result: although the number of controlled devices in the child's room is two, the first desk lamp and the second desk lamp in the child's room are of the same kind. In this case the target sub-areas can be taken as the child's room and the master bedroom, so that the scene identifier of the "get up" scene is determined according to the identifier of the child's room and the identifier of the master bedroom.
As discussed above, obtaining the target sub-area according to both the number distribution of the controlled devices in each sub-area and the kinds of internet of things devices in the control scene allows the scene identifier to reflect the correlation between sub-area and control scene more accurately.
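The kind-aware selection in the "get up" example can be sketched by counting each device kind once per sub-area. The function name `target_sub_areas` and the handling of ties are assumptions for illustration, not the claimed algorithm.

```python
def target_sub_areas(devices):
    """devices: list of (kind, sub_area) pairs for a scene's controlled devices.

    Count each device *kind* once per sub-area, then return every sub-area
    that ties for the largest kind count.
    """
    kinds_per_area = {}
    for kind, area in devices:
        kinds_per_area.setdefault(area, set()).add(kind)
    top = max(len(kinds) for kinds in kinds_per_area.values())
    return sorted(area for area, kinds in kinds_per_area.items()
                  if len(kinds) == top)

# "get up" scene: two desk lamps (same kind) in the child's room and one
# pendant lamp in the master bedroom -> one kind per sub-area, so both
# sub-areas become target sub-areas.
devices = [("desk lamp", "child's room"), ("desk lamp", "child's room"),
           ("pendant lamp", "master bedroom")]
print(target_sub_areas(devices))
```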
Another embodiment of the present application is provided below, which gives an implementation of controlling devices by voice. Referring to fig. 5, fig. 5 is a flowchart of a control method of an internet of things device according to another embodiment of the present application.
As shown in fig. 5, the control method of the internet of things device in the embodiment of the present application includes S501-S505.
S501, determining the controlled devices in each control scene, wherein the controlled devices comprise one or more internet of things devices;
S502, determining the position distribution of the controlled devices in each control scene within a preset area;
S503, determining the scene identifier corresponding to each control scene according to the position distribution;
S504, determining the user position according to a voice input;
S505, determining a target scene identifier according to the user position, and controlling the controlled devices in the control scene corresponding to the target scene identifier.
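Steps S504-S505 can be sketched as follows, assuming (the embodiment does not specify this) that the user's position can be inferred from the voice input, for instance from which room's microphone captured it, and that scene identifiers are sub-area names. The callables `locate_user` and `control_scene` are placeholders for this illustration.

```python
def handle_voice_input(voice_input, scene_ids, locate_user, control_scene):
    """Sketch of S504-S505.

    scene_ids: mapping from scene identifier (here, a sub-area name) to the
    scene's control actions;
    locate_user: infers the user's position from the voice input;
    control_scene: drives the controlled devices of the chosen scene.
    """
    user_position = locate_user(voice_input)   # S504: position from voice
    target_id = user_position                  # S505: identifier matches position
    if target_id in scene_ids:
        control_scene(scene_ids[target_id])
        return target_id
    return None

# Minimal illustration with stub callables:
executed = []
scenes = {"master bedroom": ["pendant lamp off", "television off"]}
result = handle_voice_input(
    "good night", scenes,
    locate_user=lambda voice: "master bedroom",
    control_scene=lambda actions: executed.extend(actions))
print(result, executed)
```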
In the prior art, the voice input must include both the user's action information and the position information of the controlled device, so the voice input has to carry more information. In the embodiment of the present application, by contrast, the user position is determined from the voice input, the target scene identifier is determined from the user position, and the controlled devices in the control scene corresponding to the target scene identifier are controlled. Because the scene identifier of a control scene contains the position information of the controlled devices, it provides a basis for selecting the target scene identifier when controlling devices by voice. The voice input therefore does not need to include the position information of the controlled devices, which simplifies the user's input and improves the user experience. Another embodiment of the present application is an internet of things system. As shown in fig. 1, fig. 1 is a schematic structural diagram of an internet of things system provided in an embodiment of the present application, where the internet of things devices may include household devices.
As shown in fig. 1, the internet of things system 100 includes an internet of things gateway device 101 and one or more internet of things devices. The internet of things gateway device is configured to execute the control method of the internet of things device according to any of the embodiments above, so as to control the internet of things devices.
In fig. 1, the internet of things system 100 includes an internet of things device 102, an internet of things device 103, and an internet of things device 104. In practical applications, the internet of things system may comprise one internet of things device, two internet of things devices, or three or more internet of things devices.
In some possible implementations, the scene identifier is obtained through the control method of the internet of things device according to any of the embodiments above.
The internet of things system 100, the devices in the system, the relationships between the devices, and the beneficial effects achieved are the same as described above, and are not repeated here.
Another embodiment of the present application is an internet of things gateway device. As shown in fig. 1, the internet of things gateway device is configured to execute the control method of the internet of things device according to any of the embodiments above, so as to control the internet of things devices.
In some possible implementations, the scene identifier is obtained through the control method of the internet of things device according to any of the embodiments above.
An embodiment of the present application further provides a computer-readable storage medium configured to store a computer program, where the computer program is configured to execute the control method of the internet of things device described above and achieves the same technical effects; to avoid repetition, no further description is provided here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The control method of the internet of things equipment is characterized by comprising the following steps:
determining controlled equipment in each control scene, wherein the controlled equipment comprises one or more Internet of things equipment;
determining the position distribution of controlled equipment in each control scene in a preset area;
obtaining the number of sub-areas of a preset area where the controlled equipment in each control scene is located according to the position distribution;
determining scene identifiers of the control scenes according to the number of the subareas;
and determining a target scene identifier according to voice input, and controlling controlled equipment in a control scene corresponding to the target scene identifier.
2. The method of claim 1, wherein determining the scene identity of each of the control scenes based on the number of sub-regions comprises:
when the number of the subareas of the preset area where the controlled equipment in the control scene is located is one, determining a scene identification of the control scene according to the identification of the subareas;
when the number of the sub-areas of the preset area where the controlled equipment in the control scene is located is larger than a number threshold, determining the scene identification of the control scene according to the identification of the preset area, wherein the number threshold is larger than or equal to 2.
3. The method according to claim 1, wherein determining a scene identifier corresponding to each of the control scenes according to the position distribution includes:
obtaining the number of sub-areas of a preset area where the controlled equipment in each control scene is located and the number distribution of the Internet of things equipment in each control scene in each sub-area according to the position distribution;
when the number of the subareas of the preset area where the controlled equipment in the control scene is located meets the preset condition, determining the scene identification of the control scene according to the number distribution of the controlled equipment in the control scene in each subarea.
4. A method according to claim 3, wherein said determining a scene identity of the control scene based on a distribution of a number of controlled devices in the control scene within each sub-area comprises:
determining a subarea with the largest quantity of the Internet of things equipment in the control scene according to the quantity distribution of the controlled equipment in the control scene in each subarea;
and determining the scene identification of the control scene according to the identification of the sub-region with the largest number of the controlled devices.
5. A method according to claim 3, wherein said determining a scene identity of the control scene based on a distribution of a number of controlled devices in the control scene within each sub-area comprises:
obtaining a target sub-region according to the quantity distribution of the controlled devices in the control scene in each sub-region and the types of the controlled devices in the control scene in each sub-region;
and determining the scene identification of the control scene according to the identification of the target sub-region.
6. The method according to claim 1, wherein the method further comprises:
and when receiving a modification instruction of a user for the scene identifier, modifying the scene identifier according to the modification instruction.
7. The method of claim 1, wherein said determining a target scene identification from a speech input comprises:
determining a user location based on the voice input;
and determining a target scene identification according to the user position.
8. An internet of things gateway device, wherein the internet of things gateway device is configured to perform the control method of the internet of things device according to any one of claims 1-7, so as to control the internet of things device.
9. An internet of things system, comprising the internet of things gateway device of claim 8, further comprising one or more internet of things devices.
CN202111266247.4A 2021-10-28 2021-10-28 Control method, equipment and system of Internet of things equipment Active CN114143359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111266247.4A CN114143359B (en) 2021-10-28 2021-10-28 Control method, equipment and system of Internet of things equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111266247.4A CN114143359B (en) 2021-10-28 2021-10-28 Control method, equipment and system of Internet of things equipment

Publications (2)

Publication Number Publication Date
CN114143359A CN114143359A (en) 2022-03-04
CN114143359B true CN114143359B (en) 2023-12-19

Family

ID=80395776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111266247.4A Active CN114143359B (en) 2021-10-28 2021-10-28 Control method, equipment and system of Internet of things equipment

Country Status (1)

Country Link
CN (1) CN114143359B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725632A (en) * 2017-10-30 2019-05-07 速感科技(北京)有限公司 Removable smart machine control method, removable smart machine and intelligent sweeping machine
FR3085216A1 (en) * 2018-08-24 2020-02-28 Thomas Guillaumin VIRTUAL ASSISTANT
CN110890094A (en) * 2019-12-02 2020-03-17 苏州思必驰信息科技有限公司 Voice control method of Internet of things equipment and voice server
CN111665737A (en) * 2020-07-21 2020-09-15 宁波奥克斯电气股份有限公司 Intelligent household scene control method and system
WO2020228032A1 (en) * 2019-05-16 2020-11-19 深圳市欢太科技有限公司 Scene pushing method, apparatus and system, and electronic device and storage medium
CN112202648A (en) * 2019-07-08 2021-01-08 九阳股份有限公司 Control method and system of networked home equipment
CN112861011A (en) * 2021-03-04 2021-05-28 海尔(深圳)研发有限责任公司 Scene recommendation method and device and terminal equipment
CN112947098A (en) * 2021-02-01 2021-06-11 杭州雅观科技有限公司 Construction installation deployment method based on artificial intelligence Internet of things equipment
CN112987580A (en) * 2019-12-12 2021-06-18 华为技术有限公司 Equipment control method and device, server and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180323996A1 (en) * 2017-05-08 2018-11-08 Essential Products, Inc. Automatic generation of scenes using an assistant device
US11152001B2 (en) * 2018-12-20 2021-10-19 Synaptics Incorporated Vision-based presence-aware voice-enabled device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109725632A (en) * 2017-10-30 2019-05-07 速感科技(北京)有限公司 Removable smart machine control method, removable smart machine and intelligent sweeping machine
FR3085216A1 (en) * 2018-08-24 2020-02-28 Thomas Guillaumin VIRTUAL ASSISTANT
WO2020228032A1 (en) * 2019-05-16 2020-11-19 深圳市欢太科技有限公司 Scene pushing method, apparatus and system, and electronic device and storage medium
CN112202648A (en) * 2019-07-08 2021-01-08 九阳股份有限公司 Control method and system of networked home equipment
CN110890094A (en) * 2019-12-02 2020-03-17 苏州思必驰信息科技有限公司 Voice control method of Internet of things equipment and voice server
CN112987580A (en) * 2019-12-12 2021-06-18 华为技术有限公司 Equipment control method and device, server and storage medium
CN111665737A (en) * 2020-07-21 2020-09-15 宁波奥克斯电气股份有限公司 Intelligent household scene control method and system
CN112947098A (en) * 2021-02-01 2021-06-11 杭州雅观科技有限公司 Construction installation deployment method based on artificial intelligence Internet of things equipment
CN112861011A (en) * 2021-03-04 2021-05-28 海尔(深圳)研发有限责任公司 Scene recommendation method and device and terminal equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zheng Binbin, Jia Jia, Cai Lianhong. A speech intent understanding method based on multimodal information fusion. Sciencepaper Online. 2011, Vol. 6, No. 07, 495-500. *
Design and implementation of a home *** protocol for intelligent voice devices; Niu Fei; University of Chinese Academy of Sciences (School of Artificial Intelligence); 29-63 *

Also Published As

Publication number Publication date
CN114143359A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN108667697B (en) Voice control conflict resolution method and device and voice control system
CN107294793B (en) Replacement method, device and equipment of intelligent household equipment and storage medium
CN110597075B (en) Method and device for detecting control conflict, electronic equipment and storage medium
CN106842968B (en) Control method, device and system
CN105700389B (en) Intelligent home natural language control method
CN110161875A (en) The control method and system of smart home operating system based on Internet of Things
CN111665737B (en) Smart home scene control method and system
WO2017016432A1 (en) Intelligent home appliance control method and intelligent home appliance controller
CN111367188B (en) Control method and device for intelligent home, electronic equipment and computer storage medium
WO2022262526A1 (en) Control method and apparatus for household appliance, and household appliance
CN106338922B (en) The generation method and device of intelligent scene mode
JP2014098962A (en) Behavior control device, behavior control method, and control program
CN108572554A (en) A kind of intelligent home control system, method and relevant device
CN111487884A (en) Storage medium, and intelligent household scene generation device and method
CN114120996A (en) Voice interaction method and device
CN113111199A (en) Method and device for continuing playing of multimedia resource, storage medium and electronic device
CN109150675A (en) Interaction method and device for household appliances
CN111431776A (en) Information configuration method, device and system
CN114143359B (en) Control method, equipment and system of Internet of things equipment
CN110794773A (en) Click-type scene creating method and device
JP2019190785A (en) Environment reproduction program and environment reproduction system
CN105491114B (en) Controlled plant switching method, apparatus and system
CN111294726B (en) Storage medium, sweeping robot and equipment control method thereof
CN111833577B (en) Control instruction processing and sending method, electronic equipment, control equipment and equipment control system
CN113542689A (en) Image processing method based on wireless Internet of things and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant