CN116700067A - Instruction optimization method and device, electronic equipment and storage medium

Info

Publication number
CN116700067A
Authority
CN
China
Prior art keywords
scene
target scene
parameter
instruction
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310511468.6A
Other languages
Chinese (zh)
Inventor
吴慧芳
黄露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd filed Critical Lumi United Technology Co Ltd
Priority to CN202310511468.6A priority Critical patent/CN116700067A/en
Publication of CN116700067A publication Critical patent/CN116700067A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25257 Microcontroller
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The application provides an instruction optimization method and device, an electronic device and a storage medium, and relates to the technical field of the Internet of Things. The method comprises the following steps: acquiring a first device parameter of a target scene, wherein the first device parameter is generated by an intelligent device executing an initial action configured in the target scene; acquiring a scene steady-state set of the target scene, wherein the scene steady-state set comprises second device parameters generated in a plurality of scene observation periods, and a second device parameter is generated by an intelligent device executing an additional action within a scene observation period; and updating, according to the first device parameter and the second device parameters in the scene steady-state set, the device control instruction issued when the target scene is triggered, wherein the device control instruction is used for instructing the intelligent device to execute the corresponding initial action. The application solves the problem in the related art that the optimization process for device control instructions is excessively cumbersome.

Description

Instruction optimization method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of the Internet of things, in particular to an instruction optimization method, an instruction optimization device, electronic equipment and a storage medium.
Background
With the development of the Internet of Things, the ways of controlling the individual intelligent devices in a smart home network are becoming increasingly rich; besides controlling each intelligent device on its own, devices can also be controlled through scenes. Controlling devices through a scene removes the need to control a plurality of intelligent devices one by one, and different scenes can be made more intelligent according to different user demands.
Currently, each intelligent device may be configured with a corresponding action in a scene. When the scene is triggered, a device control instruction instructing the intelligent device to perform that action is sent to the corresponding intelligent device, so that each intelligent device performs its corresponding action in response to the device control instruction. On this basis, if the device control instructions need to be optimized, the configuration has to be modified within the scene.
However, as the number of scenes, or the number of intelligent devices whose initial actions need to be configured in a scene, increases, modifying the scene configuration becomes increasingly difficult, and this growing difficulty reduces the user's willingness to optimize the device control instructions.
As can be seen from the above, the overly cumbersome optimization process for device control instructions has become a problem to be solved.
Disclosure of Invention
The embodiments of the application provide an instruction optimization method and device, an electronic device and a storage medium, which can solve the problem in the related art that the optimization process for device control instructions is excessively cumbersome. The technical solution is as follows:
according to one aspect of the application, a method of instruction optimization, the method comprising: acquiring first equipment parameters of a target scene, wherein the first equipment parameters are generated by the intelligent equipment executing initial actions configured in the target scene; acquiring a scene steady-state set of the target scene, wherein the scene steady-state set comprises second equipment parameters generated in a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period; updating the device control instruction when the target scene is triggered according to the first device parameter and the second device parameter in the scene steady-state set; the device control instruction is used for instructing the intelligent device to execute a corresponding initial action.
According to one aspect of the present application, an instruction optimization apparatus comprises: a first parameter acquisition module, configured to acquire a first device parameter of a target scene, wherein the first device parameter is generated by an intelligent device executing an initial action configured in the target scene; a second parameter acquisition module, configured to acquire a scene steady-state set of the target scene, wherein the scene steady-state set comprises second device parameters generated in a plurality of scene observation periods, and a second device parameter is generated by an intelligent device executing an additional action within a scene observation period; and an instruction updating module, configured to update, according to the first device parameter and the second device parameters in the scene steady-state set, the device control instruction issued when the target scene is triggered, wherein the device control instruction is used for instructing the intelligent device to execute the corresponding initial action.
In an exemplary embodiment, the instruction update module includes: the characterization unit is used for performing characterization processing on the environment parameters when the target scene is triggered to obtain the environment state corresponding to the target scene; the clustering unit is used for clustering the second equipment parameters in the scene steady-state set according to the obtained environment state to obtain a third equipment parameter in the corresponding environment state; the third device parameter is related to an action that the smart device is expected to perform in a corresponding environmental state; and the updating unit is used for updating the equipment control instruction when the target scene is triggered according to the third equipment parameter and the first equipment parameter in the corresponding environment state.
In an exemplary embodiment, the instruction updating module is further configured to compare differences between the third device parameter and the first device parameter in the corresponding environmental states, respectively, to obtain a comparison result; and updating the equipment control instruction when the target scene is triggered according to the comparison result.
In an exemplary embodiment, the instruction updating module is further configured to determine, for the third device parameter in the corresponding environmental state, a confidence level corresponding to the third device parameter; the confidence is determined based on the number of times the third device parameter occurs within a preset time period or based on a ratio between the number of times the third device parameter occurs within a preset time period and the number of times the target scene is triggered; and if the confidence coefficient meets a set condition, updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state.
In an exemplary embodiment, the instruction updating module is further configured to determine a scene steady-state set in various environmental states when there are multiple environmental states corresponding to the target scene; and clustering the second equipment parameters in the determined scene steady-state set for each environmental state to obtain the third equipment parameters in the corresponding environmental state.
In an exemplary embodiment, the instruction update module is further configured to detect whether the third device parameters in the various environmental states are the same; if not, creating or pushing a new scene based on the third equipment parameters in various environment states.
In an exemplary embodiment, the instruction update module is further configured to receive a feedback message created for the new scene; if the feedback message indicates that the creation of a new scene is allowed, creating a new scene based on the third device parameters respectively, and storing the created new scene to update the device control instruction when the new scene is triggered based on the third device parameters of the new scene.
In an exemplary embodiment, the instruction optimizing means is further configured to determine satisfaction of the target scene based on a frequency of the intelligent device performing additional actions in a plurality of the scene observation periods; and if the satisfaction degree of the target scene indicates that the equipment control instruction needs to be updated when the target scene is triggered, acquiring a scene steady-state set of the target scene.
In an exemplary embodiment, the instruction optimizing means is further configured to determine, in a set period of time, an amount of feedback for the scene observation period; the feedback quantity is used for indicating the number of times that the intelligent device executes additional actions in the scene observation period is detected; and adjusting the scene observation period based on the feedback quantity.
In an exemplary embodiment, the instruction optimizing apparatus is further configured to send an updated device control instruction to the corresponding smart device if the target scenario is triggered, so that the smart device performs an initial action in response to the updated device control instruction.
In an exemplary embodiment, the instruction optimizing device is further configured to detect whether an instruction optimizing function is turned on; the starting of the instruction optimization function refers to allowing updating processing of equipment control instructions when the target scene is triggered; and if the instruction optimization function is detected to be started, acquiring a scene steady-state set of the target scene.
According to one aspect of the present application, an instruction optimization apparatus, the apparatus includes: the data display module is used for displaying instruction optimization data of the target scene, wherein the instruction optimization data is used for indicating the updated equipment control instruction; the updated device control instruction is determined by the gateway or the cloud according to the first device parameters of the target scene and the second device parameters in the scene steady-state set; the first device parameter is generated by the intelligent device executing an initial action configured in the target scene and is used for indicating the device state of the intelligent device; the scene steady-state set includes second device parameters over a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period; the message sending module is used for responding to the confirmation operation triggered by the instruction optimization data and sending a confirmation message to the gateway or the cloud end so that the gateway or the cloud end updates the equipment control instruction when the target scene is triggered.
In an exemplary embodiment, the instruction optimization device is further configured to request the gateway or cloud to mark that the instruction optimization function is turned on in response to a turn-on operation triggered for the instruction optimization function.
According to one aspect of the application, an electronic device comprises at least one processor and at least one memory, wherein the memory has computer readable instructions stored thereon; the computer readable instructions are executed by one or more of the processors to cause an electronic device to implement the instruction optimization method as described above.
According to one aspect of the application, a storage medium has stored thereon computer readable instructions that are executed by one or more processors to implement an instruction optimization method as described above.
According to one aspect of the application, a computer program product includes computer readable instructions stored in a storage medium, one or more processors of an electronic device reading the computer readable instructions from the storage medium, loading and executing the computer readable instructions, causing the electronic device to implement an instruction optimization method as described above.
The technical solution provided by the application has the following beneficial effects:
In the above technical solution, on the one hand, the scene steady-state set includes the second device parameters generated over a plurality of scene observation periods, and the target scene is updated based on this set, so that the target scene better matches the actual situation and device control based on the target scene better reflects the user's actual intention; on the other hand, when the target scene deviates from the actual situation, the device control instruction issued when the target scene is triggered is updated automatically based on the first device parameter and the second device parameters in the scene steady-state set, which reduces the steps a user must take to modify the target scene configuration, keeps instruction optimization simple rather than cumbersome, improves the user experience, and thus effectively solves the problem in the related art that the optimization process for device control instructions is excessively cumbersome.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic illustration of an implementation environment in accordance with the present application;
FIG. 2 is a flow chart illustrating a method of instruction optimization according to an exemplary embodiment;
FIG. 3 is a flowchart of step 350 in the embodiment corresponding to FIG. 2;
FIG. 4 is a flowchart illustrating another instruction optimization method, according to an example embodiment;
FIG. 5 is a schematic diagram of an implementation of an instruction optimization method in an application scenario;
FIG. 6 is a block diagram illustrating an instruction optimization device, according to an example embodiment;
FIG. 7 is a block diagram illustrating another instruction optimization device, according to an example embodiment;
FIG. 8 is a hardware block diagram of an electronic device shown in accordance with an exemplary embodiment;
fig. 9 is a hardware configuration diagram of a terminal shown according to an exemplary embodiment;
fig. 10 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
As previously described, as the number of scenes, or the number of smart devices that need to configure initial actions in a scene, gradually increases, modification of the scene configuration becomes increasingly difficult.
Taking the smart home as an example, a smart home contains a large number of scene-based device linkage control services. For instance, a target scene may be configured as "if someone is present and the temperature is higher than 28°C, turn on the air conditioner and cool to 26°C", where the trigger condition is "someone is present and the temperature is higher than 28°C". When the trigger condition is met, the target scene is triggered automatically and the device control instruction "turn on the cooling mode of the air conditioner and set it to 26°C" is sent to the air conditioner, so that the air conditioner is turned on and cools the room to 26°C.
The device linkage control service greatly simplifies the steps a user takes to control devices, but each user adapts differently to different environmental conditions and has different device usage habits: some users may only feel hot at 29°C, and some users are used to setting the air conditioner to 22°C. The device linkage control therefore needs to be adjusted adaptively to meet different user demands. Furthermore, if linkage control for other intelligent devices is also configured in the target scene, the user has to find the linkage control related to the air conditioner among the many linkage controls configured in the target scene, and then either modify the trigger condition to "someone is present and the temperature is higher than 29°C" or modify the device control instruction to "turn on the cooling mode of the air conditioner and set it to 22°C".
However, because a smart home contains many scenes and a scene may involve linkage control among a plurality of intelligent devices, a user who wishes to adjust a target scene must not only search the many scenes for the target scene and adjust its device control instructions as desired, but also take the linkage relationships between the intelligent devices into account when doing so.
As a result, optimizing a device control instruction involves cumbersome steps and is difficult, so most users lack a strong incentive to modify the device linkage control once it has been configured; linkage control that cannot adjust itself adaptively then fails to meet individual needs and cannot keep up with changes in user demand.
As can be seen from the above, the related art still suffers from the defect that the optimization process for device control instructions is excessively cumbersome.
Therefore, the instruction optimization method provided by the application can effectively reduce the steps involved in instruction optimization and simplify the optimization operation. It is correspondingly applicable to an instruction optimization apparatus, which can be deployed in an electronic device. The electronic device may be a computer device with a von Neumann architecture, such as a desktop computer, a notebook computer or a server, or an electronic device with a central-control function, such as a gateway.
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment involved in the instruction optimization method. The implementation environment includes at least a user terminal 110, an intelligent device 130, a server side 170, and network devices, which in FIG. 1 include a gateway 150 and a router 190; no limitation is imposed in this regard.
The user terminal 110, which may also be regarded as the user side or simply the terminal, may be configured with (that is, have installed) a client associated with the intelligent device 130. The user terminal 110 may be an electronic device such as a smart phone, a tablet computer, a notebook computer, a desktop computer or an intelligent control panel, or another device with display and control functions, which is not limited herein.
The client being associated with the intelligent device 130 essentially means that the user registers an account in the client and configures the intelligent device 130 in the client, for example by adding a device identifier for the intelligent device 130, so that when the client runs on the user terminal 110 it can provide the user with functions such as device display and device control for the intelligent device 130. The client may take the form of an application program or a web page, and correspondingly the interface through which the client displays devices may be a program window or a web page, which is not limited herein.
The intelligent device 130 is deployed under the gateway 150, communicates with the gateway 150 through its own communication module, and is thereby controlled by the gateway 150. It should be understood that the intelligent device 130 stands for any one of a plurality of intelligent devices; the embodiments of the application are merely illustrated with the intelligent device 130 and place no limit on the number or type of intelligent devices deployed under the gateway 150. In one application scenario, the intelligent device 130 accesses the gateway 150 via a local area network and is thereby deployed under the gateway 150: the gateway 150 first establishes a local area network, and the intelligent device 130 joins that local area network by connecting to the gateway 150. Such local area networks include, but are not limited to, ZIGBEE or Bluetooth. The intelligent device 130 may be, provided it is equipped with a communication module, an intelligent printer, an intelligent fax machine, an intelligent camera, an intelligent air conditioner, an intelligent door lock, an intelligent lamp, a human body sensor, a door and window sensor, a temperature and humidity sensor, a water immersion sensor, a natural gas alarm, a smoke alarm, a wall switch, a wall socket, a wireless switch, a wireless wall-mounted switch, a cube controller, a curtain motor, a millimeter wave radar, and so on.
Interaction between the user terminal 110 and the intelligent device 130 may be accomplished through a local area network or through a wide area network. In one application scenario, the user terminal 110 establishes a communication connection with the gateway 150 through the router 190 in a wired or wireless manner, including but not limited to WIFI, so that the user terminal 110 and the gateway 150 are deployed in the same local area network, and the user terminal 110 can then interact with the intelligent device 130 over a local area network path. In another application scenario, the user terminal 110 establishes a wired or wireless communication connection with the gateway 150 through the server side 170, for example over 2G, 3G, 4G, 5G or WIFI, so that the user terminal 110 and the gateway 150 are deployed in the same wide area network, and the user terminal 110 can then interact with the intelligent device 130 over a wide area network path.
The server 170 may be considered as a cloud, a cloud platform, a server, etc., where the server 170 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center formed by a plurality of servers, so as to better provide background services to a large number of user terminals 110. For example, the background service includes an instruction optimization service.
In one application scenario, the server side 170 obtains the first device parameter and the scene steady-state set of the target scene, and updates the device control instruction issued when the target scene is triggered according to the first device parameter and the second device parameters in the scene steady-state set.
In another application scenario, the user terminal 110 displays the instruction optimization data of the target scenario to display the updated device control instruction to the user, and if the user confirms that the instruction optimization is performed, the user terminal 110 sends a confirmation message to the server 170, so that the server 170 performs update processing on the device control instruction when the target scenario is triggered.
Referring to fig. 2, an embodiment of the present application provides a method for optimizing instructions, which is applicable to an electronic device, and the electronic device may be the gateway 150 or the server 170 in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step of the method is described as an electronic device, but this configuration is not particularly limited.
As shown in fig. 2, the method may include the steps of:
In step 310, a first device parameter of a target scene is obtained.
First, the target scene is a device control mode in which a plurality of intelligent devices are configured in linkage. The target scene may include one or more device control instructions, and after the target scene is triggered, each device control instruction is sent to the corresponding intelligent device so that the intelligent device performs the initial action configured in the target scene. For example, a configured target scene may be a home-returning scene of "turn on the living room lamp, the vestibule lamp and the living room air conditioner": when the home-returning scene is triggered, the living room lamp and the vestibule lamp are turned on, and the living room air conditioner is turned on at the same time.
Second, the first device parameter is generated by the intelligent device executing the initial action configured in the target scene; it can also be understood as indicating the device state after the intelligent device has executed the initial action. For example, in the target scene "home-returning scene", the initial action configured for the living room air conditioner is "turn on the cooling mode and cool to 23°C", and correspondingly the device control instruction is "turn on the cooling mode and cool to 23°C". When the home-returning scene is triggered, this device control instruction is sent to the living room air conditioner so that it performs the action "turn on the cooling mode and cool to 23°C"; the first device parameter can therefore indicate that the device state of the living room air conditioner after executing the initial action is "cooling mode, temperature 23°C". Further, the target scene may include a plurality of device control instructions; when the target scene is triggered, each device control instruction is sent to the corresponding intelligent device to have an initial action performed, and the plurality of intelligent devices executing their initial actions generate a corresponding plurality of first device parameters. Taking the home-returning scene as an example again, after the home-returning scene is triggered, three first device parameters can be generated, indicating "living room lamp on", "vestibule lamp on" and "living room air conditioner on" respectively.
Of course, the number of first device parameters when different target scenarios are triggered may be different, and may be determined by the number of device control instructions configured in the target scenarios, which is not specifically limited herein.
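By way of illustration only, the following Python sketch shows one possible way to represent a target scene, its configured device control instructions and the first device parameters produced when the scene is triggered; the class names, device identifiers and action strings are illustrative assumptions rather than part of the described solution.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceControlInstruction:
    """One instruction configured in a scene: which device performs which initial action."""
    device_id: str
    initial_action: str          # e.g. "turn_on", "cooling_mode_23C"

@dataclass
class TargetScene:
    name: str
    trigger_condition: str
    instructions: list = field(default_factory=list)

def trigger_scene(scene: TargetScene) -> dict:
    """Simulate triggering a scene: each instruction is 'sent' to its device and the
    resulting device states are collected as the first device parameters."""
    first_device_parameters = {}
    for ins in scene.instructions:
        # In a real system the instruction would be sent to the device; here the
        # resulting device state is simply assumed to equal the configured action.
        first_device_parameters[ins.device_id] = ins.initial_action
    return first_device_parameters

# Example: the "home-returning" scene described above.
home_scene = TargetScene(
    name="home-returning",
    trigger_condition="door unlocked by owner",
    instructions=[
        DeviceControlInstruction("living_room_lamp", "turn_on"),
        DeviceControlInstruction("vestibule_lamp", "turn_on"),
        DeviceControlInstruction("living_room_ac", "cooling_mode_23C"),
    ],
)
print(trigger_scene(home_scene))
# {'living_room_lamp': 'turn_on', 'vestibule_lamp': 'turn_on', 'living_room_ac': 'cooling_mode_23C'}
```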
In step 330, a scene steady-state set of the target scene is obtained.
The scene steady-state set comprises second device parameters generated in a plurality of scene observation periods, where a second device parameter is generated by the intelligent device executing an additional action within one scene observation period. It should be noted that an additional action is an action performed by the intelligent device during the scene observation period that differs from the initial action configured for that intelligent device in the target scene.
When the target scene is triggered, the initial action configured for the intelligent device in the target scene may not meet the user's expectations. For example, if the ambient temperature has dropped but the home-returning scene is configured with "turn on the cooling mode of the living room air conditioner", the configured initial action "turn on the cooling mode" no longer matches the actual situation. In that case the user may quickly turn off the living room air conditioner manually after the home-returning scene is triggered. This manual turn-off is regarded as an additional action executed by the living room air conditioner within a scene observation period, and the generated second device parameter may be "living room air conditioner off". It can be appreciated that a second device parameter is not generated every time the target scene is triggered; if the intelligent device performs no additional action during the scene observation period, no corresponding second device parameter exists for that triggering of the target scene.
The scene observation period refers to a period of time set for observing whether the intelligent device in the target scene performs an additional action after the target scene is triggered. For example, the scene observation period may be within 1 minute after the target scene is triggered. The setting of the scene observation period can be used for determining whether the user performs additional control on the intelligent device after the target scene is triggered, so as to determine whether the user is satisfied with the device linkage control based on the target scene. For example, if the user has performed additional manual control of the smart device during multiple scene observations, it indicates that the device control based on the target scene does not meet the user's expectations.
It should be noted that the length of the scene observation period can affect the accuracy of instruction optimization, because the second device parameter is generated by the intelligent device executing an additional action within one scene observation period. If the scene observation period is too short, it may end before the intelligent device has performed the additional action, so the user's attitude toward the target scene configuration cannot be learned and instruction optimization cannot be carried out smoothly and accurately. Conversely, if the scene observation period is too long, the second device parameters in the scene steady-state set may contain a great deal of untrustworthy information. For example, suppose the target scene is triggered in the evening and time passes into the morning; the user then turns off a lamp in the target scene because of the sunlight, which is a normal device operation. If the scene observation period were extended indefinitely, turning off the lamp would also be treated as an additional action performed within a scene observation period and would generate a second device parameter, and instruction optimization based on that second device parameter would misread the user's intention and produce an erroneous result.
In either of the above cases the instruction optimization result may be inaccurate. For this reason, in one possible implementation, adjusting the scene observation period may include the following steps: determining, within a set time period, the feedback amount for the scene observation period; and adjusting the scene observation period based on the feedback amount. The feedback amount indicates the number of times the intelligent device was detected performing additional actions within the scene observation period. It should be appreciated that when the feedback amount indicates that the intelligent device performs additional actions abnormally rarely within the scene observation period, and the cause is determined to be that the scene observation period is too short, the scene observation period can be lengthened appropriately; when the feedback amount indicates that the intelligent device performs additional actions abnormally often within the scene observation period, and the cause is determined to be that the scene observation period is too long, the scene observation period can be shortened appropriately, so as to better learn the user's real intention and perform instruction optimization accurately.
In addition, scene observation periods of different lengths can be set for different users to suit their device usage habits. For example, user A quickly performs additional manual control of the intelligent device whenever the target scene does not meet expectations, so the scene observation period can be set shorter; conversely, user B only performs additional manual control after some time has passed, so the scene observation period can be set longer. Setting scene observation periods that better match each user's device usage habits therefore helps to learn the real intentions of different users and makes the instruction optimization function more intelligent.
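By way of illustration only, the following Python sketch shows one possible way to adjust the scene observation period based on the feedback amount; the threshold values and the adjustment factor are illustrative assumptions, since the application does not prescribe specific numbers.

```python
def adjust_observation_period(period_s: float,
                              feedback_count: int,
                              trigger_count: int,
                              low: float = 0.05,
                              high: float = 0.8,
                              step: float = 1.5) -> float:
    """Lengthen the period when additional actions are detected abnormally rarely,
    shorten it when they are detected abnormally often; thresholds are illustrative."""
    if trigger_count == 0:
        return period_s
    rate = feedback_count / trigger_count   # share of observation periods with additional actions
    if rate < low:
        return period_s * step              # too short: users may not have reacted yet
    if rate > high:
        return max(period_s / step, 10.0)   # too long: unrelated operations are being captured
    return period_s

print(adjust_observation_period(60.0, feedback_count=1, trigger_count=40))   # 90.0 (lengthened)
print(adjust_observation_period(60.0, feedback_count=35, trigger_count=40))  # 40.0 (shortened)
```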
Of course, the electronic device may constantly detect whether the device control instruction issued when the target scene is triggered needs to be updated, or it may update that device control instruction only when instructed to do so. In one possible implementation, this is controlled by whether the instruction optimization function is turned on, where turning on the instruction optimization function means that updating the device control instruction issued when the target scene is triggered is allowed. Specifically, it is detected whether the instruction optimization function is turned on. If the instruction optimization function is detected to be on, updating the device control instruction issued when the target scene is triggered is allowed, the scene steady-state set of the target scene is obtained, and whether the corresponding device control instruction needs to be updated is detected based on that set. Otherwise, if the instruction optimization function is detected to be off, updating the device control instruction issued when the target scene is triggered is not allowed; the device control instruction is sent directly to the corresponding intelligent device without obtaining the scene steady-state set of the target scene, and the intelligent device executes the corresponding initial action.
In this way, whether to turn on the instruction optimization function can be set flexibly according to the actual needs of the application scenario; for example, the function can be turned off in a resource-constrained application scenario, in which case the scene steady-state set of the target scene is not obtained and the instruction optimization computation is avoided, thereby reducing resource consumption.
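By way of illustration only, the following Python sketch shows one possible way to gate the acquisition of the scene steady-state set on the instruction optimization function being turned on; the function names and callback parameters are illustrative assumptions.

```python
def maybe_optimize(scene_id: str, optimization_enabled: dict, load_steady_state_set, optimize):
    """Only fetch the scene steady-state set and run optimization when the
    instruction-optimization function is marked as turned on for this scene."""
    if not optimization_enabled.get(scene_id, False):
        return None                      # function off: skip the analysis entirely
    steady_state_set = load_steady_state_set(scene_id)
    return optimize(scene_id, steady_state_set)

# Minimal usage example with stand-in callbacks.
result = maybe_optimize(
    "home-returning",
    optimization_enabled={"home-returning": True},
    load_steady_state_set=lambda sid: ["turn_off_living_room_tv"],
    optimize=lambda sid, sss: f"optimize {sid} using {len(sss)} observed parameters",
)
print(result)  # optimize home-returning using 1 observed parameters
```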
Further, the intelligent device performing an additional action during one scene observation period may suggest that the initial action executed after this particular triggering of the target scene did not meet the user's expectations, but it does not by itself indicate that the user is dissatisfied with the configuration of the target scene. For example, when the weather suddenly turns cold, the target scene still includes a device control instruction to turn on the cooling mode of the air conditioner; after the target scene is triggered, the user manually raises the air conditioner temperature during the scene observation period, but since the cold spell is only a temporary temperature change, the user's operation is directed only at this particular triggering of the target scene. That is, only the number and/or frequency of additional actions performed by the intelligent device over multiple scene observation periods can indicate whether the user is satisfied with the configuration of the target scene, and only on that basis can a decision be made on whether to perform instruction optimization.
Based on this, in one possible implementation manner, the determining whether to update the device control instruction when the target scene is triggered according to the satisfaction degree of the target scene may specifically include the following steps: determining satisfaction of the target scene based on the frequency of the intelligent device executing the additional actions in the multiple scene observation periods; and if the satisfaction degree of the target scene indicates that the equipment control instruction needs to be updated when the target scene is triggered, acquiring a scene steady-state set of the target scene.
Here, satisfaction indicates how often the intelligent device performs additional actions over multiple scene observation periods, which may be expressed as a count and/or a frequency. For example, if the intelligent device performs an additional action in 2 out of 10 scene observation periods, the satisfaction is 80%; if it performs an additional action in 8 out of 10 scene observation periods, the satisfaction is 20%.
It should be appreciated that the higher the frequency of additional actions performed by the smart device over multiple scene observation periods, the lower the satisfaction of the target scene, which means that it is more necessary to update the device control instructions when the target scene is triggered, for example, when the satisfaction of the target scene is 50%, the device control instructions when the target scene is triggered may or may not be updated; when the satisfaction degree of the target scene is 20%, it is necessary to update the device control instruction when the target scene is triggered, otherwise, the initial action configured for the intelligent device in the target scene may not meet the user expectation.
Based on this, a threshold may be set for the satisfaction, and when the satisfaction is lower than the threshold, it is determined that the device control instruction when the target scene needs to be updated is triggered, specifically, the threshold may be set manually by the user, or may be set after being calculated by the server based on big data analysis, which is not limited herein specifically.
In summary, using the satisfaction of the target scene to decide whether to update the device control instruction issued when the target scene is triggered has two benefits: on the one hand, the device control instruction can be updated in time when it no longer meets requirements, so that the target scene stays practical and matches actual needs; on the other hand, it avoids obtaining and analyzing the scene steady-state set every time the intelligent device performs an additional action, which effectively reduces the amount of computation and saves resources.
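By way of illustration only, the following Python sketch computes satisfaction from the frequency of additional actions over multiple scene observation periods and compares it against a threshold, following the 2-out-of-10 / 80% example above; the threshold value is an illustrative assumption.

```python
def satisfaction(periods_with_additional_actions: int, total_observation_periods: int) -> float:
    """Satisfaction as the share of observation periods in which the user did NOT
    intervene, following the 2-out-of-10 -> 80% example above."""
    if total_observation_periods == 0:
        return 1.0
    return 1.0 - periods_with_additional_actions / total_observation_periods

SATISFACTION_THRESHOLD = 0.5   # illustrative; the application leaves the threshold open

def needs_instruction_update(interventions: int, periods: int) -> bool:
    return satisfaction(interventions, periods) < SATISFACTION_THRESHOLD

print(satisfaction(2, 10))              # 0.8
print(needs_instruction_update(8, 10))  # True -> fetch the scene steady-state set
```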
In step 350, the device control instruction issued when the target scene is triggered is updated according to the first device parameter and the second device parameters in the scene steady-state set.
The device control instruction is used for indicating the intelligent device to execute corresponding initial actions.
It should be noted that, since the second device parameters are obtained by the intelligent device executing the additional actions in the one-time scene observation period, each second device parameter in the scene steady-state set can reflect the device usage habit of the user, and the first device parameters are generated by the intelligent device executing the initial actions configured in the target scene, that is, the first device parameters are related to the device control command when the target scene is triggered, so that the device control command when the target scene is triggered can be updated based on the first device parameters and each second device parameter in the scene steady-state set.
For example, the target scene a includes a plurality of device control instructions such as turning on a cooling mode of a living room air conditioner, turning on a living room lamp, turning on a living room television, etc., if the living room television is turned off multiple times in a scene observation period after the target scene a is triggered multiple times, a second device parameter in which the living room television is turned off will be recorded in the scene observation period, and when the instructions are optimized, the electronic device can update the device control instruction corresponding to the target scene a based on the second device parameter, that is, can update the device control instruction of "turning on the living room television" in the target scene a by using a new device control instruction of "turning off the living room television".
Of course, the target scene is configured based on linkage of multiple intelligent devices, if the target scene is triggered, the user manually controls the multiple intelligent devices configured in the target scene in the scene observation period to perform additional actions, and accordingly, multiple second device parameters are obtained, and each second device parameter corresponds to the additional actions performed by different intelligent devices in the scene observation period. It can be understood that, based on the second device parameters corresponding to each intelligent device in the multiple scene observation period, the electronic device can directly update the corresponding device control instruction when the target scene is triggered, so as to obtain the updated target scene; or the updating content of the target scene is obtained based on the second equipment parameter, then the updating content of the target scene is pushed to the user for confirmation, and after the confirmation instruction of the user is obtained, the updating processing is carried out on the corresponding equipment control instruction when the target scene is triggered.
Taking the target scene a as an example, if the living room television is turned off for multiple times and the living room air purifier is turned on for multiple times in the scene observation period after the target scene a is triggered for multiple times, the electronic device may update the device control instruction for turning on the living room television in the target scene a by using the new device control instruction for turning off the living room television, and may additionally add the device control instruction for turning on the living room air purifier for the target scene a.
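By way of illustration only, the following Python sketch shows one simple way to update the configured device control instructions from repeatedly observed additional actions, as in the living-room-television and air-purifier example above; the minimum-count rule and all identifiers are illustrative assumptions.

```python
from collections import Counter

def update_instructions(configured: dict, observed_additional_actions: list, min_count: int = 3) -> dict:
    """configured maps device_id -> initial action; observed_additional_actions is a list of
    (device_id, action) pairs collected over many scene observation periods. Any action a
    device performed at least `min_count` times overrides or extends the configuration."""
    updated = dict(configured)
    counts = Counter(observed_additional_actions)
    for (device_id, action), n in counts.items():
        if n >= min_count:
            updated[device_id] = action   # replace an existing instruction or add a new one
    return updated

configured = {"living_room_ac": "cooling_mode", "living_room_lamp": "turn_on", "living_room_tv": "turn_on"}
observed = [("living_room_tv", "turn_off")] * 4 + [("air_purifier", "turn_on")] * 3
print(update_instructions(configured, observed))
# {'living_room_ac': 'cooling_mode', 'living_room_lamp': 'turn_on',
#  'living_room_tv': 'turn_off', 'air_purifier': 'turn_on'}
```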
After the instruction optimization is completed on the target scene, the updated equipment control instruction can be sent to the corresponding intelligent equipment under the condition that the target scene is triggered, so that the intelligent equipment responds to the updated equipment control instruction to execute the initial action.
Through this process, on the one hand, the scene steady-state set includes the second device parameters generated over a plurality of scene observation periods, and the target scene is updated based on this set, so that the target scene better matches the actual situation and device control based on the target scene better reflects the user's actual intention; on the other hand, when the target scene deviates from the actual situation, the device control instruction issued when the target scene is triggered is updated automatically based on the first device parameter and the second device parameters in the scene steady-state set, which reduces the steps a user must take to modify the target scene configuration, keeps instruction optimization simple rather than cumbersome, and thus improves the user experience.
Referring to fig. 3, in an exemplary embodiment, step 350 may include the steps of:
In step 351, characterization processing is performed on the environment parameters at the time the target scene is triggered, to obtain the environmental state corresponding to the target scene.
Wherein the environment parameter is used to indicate an environmental condition of the actual environment when the target scene is triggered. The environmental parameters may be obtained by an intelligent sensor configured in the internet of things, or may be obtained by an intelligent device configured in the target scene, which is not limited herein. For example, a temperature sensor in the internet of things detects that the temperature value of the actual environment is 26 ℃, and a humidity sensor detects that the relative humidity value of the actual environment is 75%.
The characterization processing refers to determining the actual environmental characteristics of the target scene according to different environmental parameters, namely converting continuous environmental parameters into discrete environmental states, so that the environmental parameters can have obvious characteristic expression. For example, for a time in an environmental parameter, successive time points are converted into discrete time periods: morning (6 to 11), noon (11 to 13), afternoon (13 to 19), evening (19 to 1), and early morning (1 to 6); for another example, for temperature values in an environmental parameter, each continuous temperature value is converted into a discrete temperature representation: low temperature (below 18 ℃), comfort (18 ℃ -26 ℃) and high temperature (above 26 ℃).
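By way of illustration only, the following Python sketch discretizes the time-of-day and temperature parameters into the time periods and temperature bands listed above; the bin boundaries follow the example values and are otherwise illustrative.

```python
def characterize(hour: int, temperature_c: float) -> tuple:
    """Map continuous environment parameters onto the discrete states used above."""
    if 6 <= hour < 11:
        period = "morning"
    elif 11 <= hour < 13:
        period = "noon"
    elif 13 <= hour < 19:
        period = "afternoon"
    elif hour >= 19 or hour < 1:
        period = "evening"
    else:
        period = "early_morning"

    if temperature_c < 18:
        band = "low"
    elif temperature_c <= 26:
        band = "comfortable"
    else:
        band = "high"
    return period, band

print(characterize(hour=20, temperature_c=27.5))   # ('evening', 'high')
```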
In step 353, the second device parameters in the scene steady-state set are clustered according to the obtained environmental state to obtain the third device parameter in the corresponding environmental state.
Wherein the third device parameter relates to an action that the smart device is expected to perform in the corresponding environmental state.
It can be understood that if the electronic device clusters the second device parameters in the steady state set of the scene for each environmental parameter, because the environmental parameters are difficult to be completely consistent each time the target scene is triggered, the number of the environmental parameters is large, and the number of the second device parameters under one environmental parameter is limited, it is difficult to determine the corresponding third device parameter for one of the environmental parameters; in addition, because the number of the second equipment parameters corresponding to one environment parameter is small, the equipment usage rule of the user is difficult to discover according to a small number of the second equipment parameters, and the obtained third equipment parameters cannot reflect the real intention of the user, so that the instruction optimization result is inaccurate.
Based on the method, the environment state of the target scene is obtained by carrying out characteristic processing by utilizing the environment parameters, and the environment state corresponds to a plurality of environment parameters, so that the number of the second equipment parameters in the environment state is large, and further, the second equipment parameters in the environment state are clustered, so that the equipment usage rule of the user in the environment state can be fully explored, the obtained third equipment parameters can restore the real intention of the user, the practicability of the target scene is improved, and the user experience is improved.
The second device parameters in the scene steady state set may be clustered by using a clustering method such as a hierarchy method, a density algorithm, a graph theory clustering method, a grid algorithm, and the like, which is not particularly limited herein.
For example, the second device parameters in the steady state set of scenes in the a-environment state include a, a, a, a, a, a, B, B, c, c, c, d and the second device parameters in the steady state set of scenes in the B-environment state include a, a, a, B, B, B, B, B, c, d, d, wherein different letters refer to the smart device performing different additional actions; clustering the second equipment parameters in the environment A, wherein the obtained clustering result is a, a is the third equipment parameter in the environment A, the second equipment parameters in the environment B are clustered, and the obtained clustering result is B, and similarly, B is the third equipment parameter in the environment B.
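By way of illustration only, the following Python sketch reproduces the clustering example above using a simple most-frequent-value rule per environmental state; this frequency counting merely stands in for the hierarchical, density-based, graph-theoretic or grid clustering methods mentioned, and the data are the example distributions.

```python
from collections import Counter

def cluster_by_environment(steady_state_set: dict) -> dict:
    """steady_state_set maps an environment state to the list of second device parameters
    recorded in it; the most frequent value is taken as the third device parameter."""
    third_parameters = {}
    for env_state, second_params in steady_state_set.items():
        most_common, _count = Counter(second_params).most_common(1)[0]
        third_parameters[env_state] = most_common
    return third_parameters

steady_state_set = {
    "A": list("aaaaaabbcccd"),   # the example distribution for environment state A
    "B": list("aaabbbbbcdd"),    # the example distribution for environment state B
}
print(cluster_by_environment(steady_state_set))   # {'A': 'a', 'B': 'b'}
```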
Further, the target scene may be triggered in different environmental states, so there may be multiple environmental states corresponding to the target scene, and it may be understood that the additional actions performed by the intelligent device configured in the target scene in the different environmental states may reflect the device usage habits of the user in the different environmental states.
Then, in order to understand the device usage habit of the user in different environmental states, so that the device control instruction when the target scene is triggered better conforms to the real intention of the user, in one possible implementation manner, the method for clustering the second device parameters in the scene steady state set may further include the following steps: under the condition that a plurality of environment states corresponding to the target scene exist, determining a scene steady-state set under various environment states; and clustering the second equipment parameters in the determined scene steady state set for each environmental state to obtain third equipment parameters in the corresponding environmental state.
The third device parameter in each environment state can be regarded as an action expected to be executed by the user on each intelligent device configured in the target scene when the target scene is triggered in the environment state. Of course, if the third device parameters in the environmental states obtained by clustering are the same, it may indicate that there is no difference in actions expected to be executed by the user on each intelligent device in the target scene between the environmental states, that is, the device usage habits of the user in different environmental states are the same, and conversely, if the third device parameters in the environmental states are different, it may indicate that there is a difference in actions expected to be executed by the user on each intelligent device in the target scene between the environmental states, that is, the device usage habits of the user in different environmental states are different.
Based on this, in one possible implementation, it is detected whether the third device parameters are the same in the various environmental states; if not, creating or pushing a new scene based on third equipment parameters in various environment states.
It should be noted that, the electronic device may directly create a new scene based on the third device parameters in various environmental states, or may push the new scene to the user, and the user decides whether to create the new scene, which is not limited herein.
In one possible implementation, the creation of the new scene is determined by the user and may comprise the steps of: receiving a feedback message created for the new scene; if the feedback message indicates that the creation of the new scene is allowed, creating the new scene based on the third device parameters, respectively, and storing the created new scene to update the device control instructions when the new scene is triggered based on the third device parameters of the new scene.
It can be understood that the created new scenes cover the actions the user expects each intelligent device to perform under the various environmental states; in short, the created new scenes better satisfy actual needs and the user's real intention with respect to different environmental states. One or more new scenes may be created, depending on the third device parameters.
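As a rough illustration of the detection and push-or-create flow described above, the sketch below compares third device parameters across environmental states and, when they differ, either creates new scenes directly or pushes a suggestion and waits for the user's feedback message. All names (NewScene, user_allows_creation, the auto_create flag) are hypothetical; the disclosure leaves the concrete messaging mechanism open.

```python
from dataclasses import dataclass

@dataclass
class NewScene:
    env_state: str
    third_param: str  # expected action for this environmental state

def maybe_create_new_scenes(third_params, auto_create, user_allows_creation):
    # third_params: env_state -> third device parameter (from clustering).
    # If all environmental states share one parameter, habits do not differ
    # and no new scene is needed.
    if len(set(third_params.values())) <= 1:
        return []
    candidates = [NewScene(env, p) for env, p in third_params.items()]
    if auto_create:
        return candidates                      # create directly
    if user_allows_creation(candidates):       # push, then read feedback message
        return candidates
    return []                                  # user declined creation

# Usage: one new scene per environmental state when habits differ.
scenes = maybe_create_new_scenes({"A": "a", "B": "b"}, auto_create=False,
                                 user_allows_creation=lambda c: True)
```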
Step 355, updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter under the corresponding environmental state.
First, the third device parameter under a given environmental state can be regarded as the action the corresponding intelligent device should execute when the target scene is triggered in that state; on this basis, the corresponding device control instruction of the target scene can be updated using the third device parameter.
In one possible implementation, updating the device control instruction when the target scene is triggered may include the following steps: comparing the third device parameter with the first device parameter under the corresponding environmental state to obtain a comparison result; and updating the device control instruction when the target scene is triggered according to the comparison result. For example, the first device parameter indicates that the living room air conditioner is on in cooling mode at 24 °C, the living room light is on, and the living room television is on, while the third device parameter indicates that the living room air conditioner is on in cooling mode at 22 °C, the living room light is on, and the living room television is on. Comparing the two, the comparison result indicates that they differ only in the action executed by the living room air conditioner, so the device control instruction when the target scene is triggered is updated according to this comparison result.
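A minimal sketch of this difference-based update follows, assuming device parameters are represented as per-device action dictionaries (the representation is an assumption; the patent does not prescribe one).

```python
def diff_parameters(first_params, third_params):
    # Returns the devices whose expected action (third parameter) differs
    # from the initially configured action (first parameter).
    return {device: action
            for device, action in third_params.items()
            if first_params.get(device) != action}

def update_control_instructions(instructions, first_params, third_params):
    # Only the instructions for devices that actually differ are rewritten.
    for device, new_action in diff_parameters(first_params, third_params).items():
        instructions[device] = new_action
    return instructions

first = {"living_room_ac": "cool_24C", "living_room_light": "on", "living_room_tv": "on"}
third = {"living_room_ac": "cool_22C", "living_room_light": "on", "living_room_tv": "on"}
instructions = update_control_instructions(dict(first), first, third)
# instructions["living_room_ac"] is now "cool_22C"; the light and TV are untouched.
```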
It should further be noted that the third device parameter is obtained by clustering the second device parameters, so a sparsely occurring clustering result may not be reliable. For example, suppose the target scene is triggered 20 times and the second device parameters recorded in the scene steady-state set are a, b, b, b, c, d, where different letters denote different additional actions of the intelligent device. Clustering these second device parameters yields the third device parameter b; although b is the clustering result, it occurs only 3 times, so the clustering result is not significant. If the device control instruction when the target scene is triggered were updated according to this third device parameter b, the target scene might no longer conform to the user's actual intention.
Based on this, in one possible implementation, before updating the device control instruction when the target scene is triggered according to the third device parameter, the method may further include the following steps: determining, for the third device parameter under the corresponding environmental state, the confidence corresponding to that third device parameter; and, if the confidence satisfies a set condition, comparing the third device parameter with the first device parameter under the corresponding environmental state and updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter under the corresponding environmental state.
The confidence is determined based on the number of times the third device parameter appears in the target scene within a preset time period, or based on the ratio between the number of times the third device parameter appears in the target scene and the number of times the target scene is triggered within the preset time period. It should be understood that when the confidence of the third device parameter satisfies the set condition, the third device parameter can be considered to conform to the user's actual intention.
For example, the set condition is: within half a month, the target scene is triggered more than 10 times and the confidence, i.e. the ratio of the number of times the third device parameter appears in the target scene to the number of times the target scene is triggered, is greater than 30%. If the confidence of the third device parameter satisfies this set condition, the third device parameter can be compared with the first device parameter under the corresponding environmental state so as to update the device control instruction when the target scene is triggered according to the third device parameter.
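Under the thresholds in this example (more than 10 triggers in the period, occurrence ratio above 30%), the confidence gate could be sketched as below; the numeric thresholds are taken from the example and would be configurable in practice.

```python
def confidence_satisfied(times_param_observed, times_scene_triggered,
                         min_triggers=10, min_ratio=0.30):
    # Gate the instruction update: require enough triggers in the preset
    # period and a sufficiently dominant third device parameter.
    if times_scene_triggered <= min_triggers:
        return False
    confidence = times_param_observed / times_scene_triggered
    return confidence > min_ratio

# The low-significance example above (b seen 3 times out of 20 triggers)
# is rejected, so the original instruction is kept.
print(confidence_satisfied(3, 20))   # False
print(confidence_satisfied(9, 20))   # True (45% > 30%)
```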
With this embodiment, on the one hand, the third device parameters obtained by clustering the second device parameters in the scene steady-state set reveal the user's device usage habits, so that the updated target scene better conforms to the user's real intention; on the other hand, pushing or creating new scenes for the third device parameters under the various environmental states actively provides the user with instruction optimization suggestions for those states, which enhances the practicality of scene-based device control and greatly improves user experience.
Referring to fig. 4, an embodiment of the present application provides an instruction optimization method applicable to an electronic device, which may be the user terminal 110 in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the execution subject of each step is described as an electronic device, but this does not constitute a specific limitation.
As shown in fig. 4, the method may include the steps of:
step 410, instruction optimization data of a target scene is displayed.
The target scene is a device control mode configured based on linkage of multiple intelligent devices and may comprise one or more device control instructions; the instruction optimization data is used to indicate the updated device control instruction.
Step 430, in response to the confirmation operation triggered by the instruction optimization data, sending a confirmation message to the gateway or the cloud end, so that the gateway or the cloud end updates the device control instruction when the target scene is triggered according to the first device parameter of the target scene and the second device parameter in the scene steady state set.
The first device parameter is generated by the intelligent device executing an initial action configured in a target scene and is used for indicating the device state of the intelligent device; the scene steady-state set comprises second device parameters in a plurality of scene observation periods; the second device parameter is generated by the smart device performing additional actions during a scene observation period. The updated device control instruction is determined by the gateway or the cloud according to the first device parameter of the target scene and the second device parameter in the scene steady-state set.
Regarding the confirmation message, it is used to confirm that the device control instruction should be updated when the target scene is triggered; the confirmation operation may be triggered on the user terminal in various ways, causing the user terminal to send the confirmation message to the gateway or the cloud, which is not limited herein.
Further, the user may choose whether to update the device control instruction when the target scene is triggered. In one possible implementation, in response to a turn-on operation triggered for the instruction optimization function, the gateway or the cloud is requested to mark the instruction optimization function as turned on. Turning on the instruction optimization function means that update processing of the device control instruction when the target scene is triggered is allowed.
Through the above process, the instruction optimization data is displayed to provide the user with an instruction optimization suggestion. If the user confirms the update, the gateway or the cloud updates the device control instruction when the target scene is triggered; the user only needs to trigger the confirmation operation, and the remaining instruction optimization steps are executed automatically by the gateway or the cloud, which reduces the complexity of instruction optimization and improves user experience.
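The terminal-side flow (steps 410 and 430) could be sketched as below; the message payload and endpoint name are assumptions made for illustration, since the disclosure does not specify a transport or API.

```python
import json
import urllib.request

def show_optimization_data(optimization_data):
    # Step 410: present the updated device control instructions to the user.
    for device, action in optimization_data.items():
        print(f"Suggested instruction: {device} -> {action}")

def confirm_optimization(scene_id, gateway_url):
    # Step 430: on a confirmation operation, notify the gateway (or cloud)
    # so that it updates the device control instruction for this scene.
    payload = json.dumps({"scene_id": scene_id, "confirm": True}).encode()
    req = urllib.request.Request(gateway_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # hypothetical endpoint
        return resp.status == 200
```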
Fig. 5 is a schematic diagram of a specific implementation of an instruction optimization method in an application scenario.
Through step 801, a target scene is set, and a device control instruction is configured for the target scene.
The instruction optimization function is turned on, via step 803.
The target scene is triggered, and a first device parameter of the target scene is acquired, via step 805.
The second device parameters of the target scene are acquired, and the satisfaction of the target scene is determined, via step 807.
If the satisfaction indicates that the device control command needs to be updated when the target scene is triggered, then a scene steady-state set of the target scene is obtained through step 809.
Through step 811, the environmental parameters are subjected to characterization processing, so as to obtain the environmental state of the target scene.
Through step 813, third device parameters in various environmental states are obtained based on the environmental states and the second device parameters in the scene steady state set.
Through step 815, it is detected whether the third device parameters in the various environmental states are the same, and the confidence corresponding to the third device parameters is determined.
Through step 817, if the third device parameters in the various environmental states are the same and the confidence level meets the set condition, the device control instruction when the target scene is triggered is updated based on the third device parameters.
Through step 819, if the third device parameters in the various environmental states are different and the confidence level meets the set condition, a new scene is pushed based on the third device parameters in the various environmental states.
A feedback message is received, via step 821.
If the feedback message indicates that creation of a new scene is allowed, creating a new scene based on the third device parameters, respectively, and storing the created new scene, through step 823.
In this application scene: first, the user only needs to turn on the instruction optimization function, after which instruction optimization is performed automatically on the target scene, reducing the number of instruction optimization steps; second, whether the scene steady-state set is acquired is determined according to the satisfaction of the target scene, so instruction optimization is computed only when the target scene has drifted from the actual situation, which saves resources; third, the device control instruction is updated only when the confidence of the third device parameter satisfies the set condition, avoiding erroneous instruction optimization and making the optimization result more accurate; fourth, third device parameters under various environmental states are obtained from the environmental parameters and the scene steady-state set, making instruction optimization more intelligent, providing the user with new-scene creation suggestions, increasing the practicality of scene-based device control, and improving user experience.
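The satisfaction gate mentioned in this application scene (steps 807 and 809) could be approximated as follows; how satisfaction is derived from the frequency of additional actions is an assumption made for illustration, as the specification only states that it is based on that frequency.

```python
def scene_satisfaction(additional_action_counts, trigger_counts):
    # additional_action_counts / trigger_counts: one entry per scene
    # observation period. Frequent extra corrections mean low satisfaction.
    total_triggers = sum(trigger_counts)
    if total_triggers == 0:
        return 1.0
    correction_rate = sum(additional_action_counts) / total_triggers
    return max(0.0, 1.0 - correction_rate)

def needs_instruction_update(additional_action_counts, trigger_counts,
                             threshold=0.7):
    # Only when satisfaction drops below the threshold is the scene
    # steady-state set acquired and instruction optimization computed,
    # which saves resources when the scene still matches reality.
    return scene_satisfaction(additional_action_counts, trigger_counts) < threshold
```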
The following is an embodiment of the apparatus of the present application, which may be used to perform the instruction optimization method according to the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to a method embodiment of the instruction optimization method related to the present application.
Referring to fig. 6, in an embodiment of the present application, an instruction optimizing apparatus 900 is provided, including but not limited to: a first parameter acquisition module 910, a second parameter acquisition module 930, and an instruction update module 950.
The first parameter obtaining module 910 is configured to obtain a first device parameter of the target scene, where the first device parameter is generated by the smart device executing an initial action configured in the target scene.
A second parameter obtaining module 930, configured to obtain a scene steady-state set of the target scene, where the scene steady-state set includes second device parameters generated during multiple scene observation periods; the second device parameter is generated by the smart device performing additional actions during a scene observation period.
The instruction update module 950 is configured to update the device control instruction when the target scene is triggered according to the first device parameter and the second device parameter in the scene steady state set; the device control instructions are used for instructing the intelligent device to execute corresponding initial actions.
In an exemplary embodiment, the instruction update module 950 includes: a characterization unit, configured to characterize the environmental parameter when the target scene is triggered, to obtain the environmental state corresponding to the target scene; a clustering unit, configured to cluster the second device parameters in the scene steady-state set according to the obtained environmental state to obtain the third device parameter under the corresponding environmental state, the third device parameter being related to the action the intelligent device is expected to perform in that state; and an updating unit, configured to update the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter under the corresponding environmental state.
In an exemplary embodiment, the instruction update module 950 is further configured to compare differences between the third device parameter and the first device parameter in the corresponding environmental states, respectively, to obtain a comparison result; and updating the equipment control instruction when the target scene is triggered according to the comparison result.
In an exemplary embodiment, the instruction update module 950 is further configured to determine, for the third device parameter in the corresponding environmental state, a confidence level corresponding to the third device parameter; the confidence is determined based on the number of times the third device parameter occurs within a preset time period or based on a ratio between the number of times the third device parameter occurs within a preset time period and the number of times the target scene is triggered; and if the confidence coefficient meets a set condition, updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state.
In an exemplary embodiment, the instruction update module 950 is further configured to determine, in a case where there are multiple environmental states corresponding to the target scene, a scene steady-state set in the various environmental states; and clustering the second equipment parameters in the determined scene steady state set for each environmental state to obtain third equipment parameters in the corresponding environmental state.
In an exemplary embodiment, the instruction update module 950 is further configured to detect whether the third device parameters in the various environmental states are the same; if not, creating or pushing a new scene based on third equipment parameters in various environment states.
In an exemplary embodiment, the instruction update module 950 is further configured to receive a feedback message created for the new scene; if the feedback message indicates that the creation of the new scene is allowed, creating the new scene based on the third device parameters, respectively, and storing the created new scene to update the device control instructions when the new scene is triggered based on the third device parameters of the new scene.
In an exemplary embodiment, the instruction optimizing apparatus 900 is further configured to determine satisfaction of the target scene based on a frequency of the intelligent device performing additional actions during multiple scene observation periods; and if the satisfaction degree of the target scene indicates that the equipment control instruction needs to be updated when the target scene is triggered, acquiring a scene steady-state set of the target scene.
In an exemplary embodiment, the instruction optimizing apparatus 900 is further configured to determine, within a set time period, a feedback amount for the scene observation period, the feedback amount being used to indicate the detected number of times that the intelligent device executes additional actions within the scene observation period; and to adjust the scene observation period based on the feedback amount.
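The apparatus only states that the scene observation period is adjusted based on the feedback amount; one plausible reading is sketched below, where the lengthening/shortening rule and thresholds are purely assumptions.

```python
def adjust_observation_period(current_period_s, feedback_amount,
                              low=2, high=10):
    # feedback_amount: how many additional actions were detected within
    # the set time period. Few corrections -> the period can be lengthened;
    # many corrections -> shorten it so the scene is re-evaluated sooner.
    if feedback_amount < low:
        return int(current_period_s * 1.5)
    if feedback_amount > high:
        return max(60, int(current_period_s * 0.5))
    return current_period_s
```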
In an exemplary embodiment, the instruction optimizing apparatus 900 is further configured to send an updated device control instruction to a corresponding smart device if the target scene is triggered, so that the smart device performs an initial action in response to the updated device control instruction.
In an exemplary embodiment, the instruction optimizing apparatus 900 is further configured to detect whether the instruction optimizing function is turned on; the starting of the instruction optimization function means that the updating processing of the equipment control instruction when the target scene is triggered is allowed; and if the instruction optimization function is detected to be started, acquiring a scene steady-state set of the target scene.
Referring to fig. 7, another instruction optimizing apparatus 1000 is provided in an embodiment of the present application, including but not limited to: a data display module 1010 and a messaging module 1030.
The data display module 1010 is configured to display instruction optimization data of a target scene, where the instruction optimization data is used to indicate an updated device control instruction; the updated device control instruction is determined by the gateway or the cloud according to the first device parameter of the target scene and the second device parameter in the scene steady-state set; the first device parameter is generated by the intelligent device executing an initial action configured in the target scene and is used for indicating the device state of the intelligent device; the scene steady-state set comprises second device parameters in a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period;
The message sending module 1030 is configured to send a confirmation message to the gateway or the cloud in response to a confirmation operation triggered by the instruction optimization data, so that the gateway or the cloud updates the device control instruction when the target scene is triggered.
In an exemplary embodiment, the instruction optimization apparatus 1000 is further configured to request that the gateway or the cloud end mark that the instruction optimization function is turned on in response to a turn-on operation triggered for the instruction optimization function.
It should be noted that the division into functional modules in the instruction optimizing apparatus provided in the foregoing embodiments is only illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the instruction optimizing apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the instruction optimizing apparatus and the instruction optimizing method provided in the foregoing embodiments belong to the same concept, and the specific manner in which each module performs the operation has been described in detail in the method embodiment, which is not described herein again.
Fig. 8 shows a schematic structural diagram of an electronic device according to an exemplary embodiment. The electronic device is suitable for the server side 170 or the gateway 150 in the implementation environment shown in fig. 1.
It should be noted that the electronic device is only an example adapted to the present application, and should not be construed as providing any limitation on the scope of use of the present application. Nor should the electronic device be construed as necessarily relying on or necessarily having one or more of the components of the exemplary electronic device 2000 illustrated in fig. 8.
The hardware structure of the electronic device 2000 may vary widely depending on configuration or performance. As shown in fig. 8, the electronic device 2000 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, the power supply 210 is configured to provide an operating voltage for each hardware device on the electronic device 2000.
The interface 230 includes at least one wired or wireless network interface 231 for interacting with external devices, for example, for the interactions between the user terminal 110 and the electronic device 2000 in the implementation environment shown in fig. 1.
Of course, in other examples of the adaptation of the present application, the interface 230 may further include at least one serial-parallel conversion interface 233, at least one input-output interface 235, at least one USB interface 237, and the like, as shown in fig. 8, which is not particularly limited herein.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, where the resources stored include an operating system 251, application programs 253, and data 255, and the storage mode may be transient storage or permanent storage.
The operating system 251 is used for managing and controlling the hardware devices and the applications 253 on the electronic device 2000, so that the central processing unit 270 can operate on and process the mass data 255 in the memory 250; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The application 253 is a computer program that, based on the operating system 251, performs at least one specific task through computer-readable instructions; it may include at least one module (not shown in fig. 8), each of which may contain a respective set of computer-readable instructions for the electronic device 2000. For example, the instruction optimization apparatus may be regarded as an application 253 deployed on the electronic device 2000.
The data 255 may be a photograph, a picture, etc. stored on a disk, or may be a first device parameter, a scene steady-state set, etc. stored in the memory 250.
The central processing unit 270 may include one or more processors and is configured to communicate with the memory 250 via at least one communication bus to read the computer-readable instructions stored in the memory 250, thereby operating on and processing the mass data 255 in the memory 250. For example, the instruction optimization method is accomplished by the central processing unit 270 reading a series of computer-readable instructions stored in the memory 250.
Furthermore, the present application can be realized by hardware circuitry or by a combination of hardware circuitry and software, and thus, the implementation of the present application is not limited to any specific hardware circuitry, software, or combination of the two.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating a structure of a terminal according to an exemplary embodiment. The terminal is suitable for use in the user terminal 110 in the implementation environment shown in fig. 1.
It should be noted that the terminal is only an example adapted to the present application and should not be construed as providing any limitation on the scope of use of the present application. Nor should the terminal be construed as necessarily relying on or necessarily having one or more of the components of the exemplary terminal 1100 shown in fig. 9.
As shown in fig. 9, the terminal 1100 includes a memory 101, a memory controller 103, one or more (only one is shown in fig. 9) processors 105, a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with each other via one or more communication buses/signal lines 121.
The memory 101 may be configured to store computer readable instructions, such as computer readable instructions corresponding to the instruction optimization method and apparatus according to the exemplary embodiment of the present application, and the processor 105 executes the computer readable instructions stored in the memory 101, thereby performing various functions and data processing, that is, completing the instruction optimization method.
The memory 101, as a carrier of resource storage, may be random access memory, e.g., high-speed random access memory, or non-volatile memory, such as one or more magnetic storage devices, flash memory, or other solid-state memory. The storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, etc. for coupling external various input/output devices to the memory 101 and the processor 105 to enable communication with the external various input/output devices.
The radio frequency module 109 is configured to receive and transmit electromagnetic waves, and to implement mutual conversion between the electromagnetic waves and the electrical signals, so as to communicate with other devices through a communication network. The communication network may include a cellular telephone network, a wireless local area network, or a metropolitan area network, and may employ various communication standards, protocols, and techniques.
The positioning module 111 is configured to obtain a current geographic location of the terminal 1100. Examples of the positioning module 111 include, but are not limited to, global satellite positioning system (GPS), wireless local area network or mobile communication network based positioning technology.
The camera module 113 is attached to a camera for taking pictures or videos. The photographed pictures or videos may be stored in the memory 101, and may also be transmitted to an upper computer through the rf module 109.
The audio module 115 provides an audio interface to the user, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces. The interaction of the audio data with other devices is performed through the audio interface. The audio data may be stored in the memory 101 or may be transmitted via the radio frequency module 109.
The touch screen 117 provides an input-output interface between the terminal 1100 and the user. Specifically, the user may perform an input operation, such as a gesture operation of clicking, touching, sliding, etc., through the touch screen 117 to make the terminal 1100 respond to the input operation. The terminal 1100 displays and outputs the output content formed by any one or combination of the text, the picture or the video to the user through the touch screen 117.
The key module 119 includes at least one key to provide an interface for a user to input to the terminal 1100, and the user can cause the terminal 1100 to perform different functions by pressing different keys. For example, the sound adjustment key may allow the user to adjust the volume of sound played by the terminal 1100.
It is to be understood that the configuration shown in fig. 9 is merely illustrative, and that terminal 1100 may also include more or fewer components than shown in fig. 9, or have different components than shown in fig. 9. The components shown in fig. 9 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 10, in an embodiment of the present application, an electronic device 4000 is provided; the electronic device 4000 may include a desktop computer, a notebook computer, a server, etc.
In fig. 10, the electronic device 4000 includes at least one processor 4001 and at least one memory 4003.
Data interaction between the processor 4001 and the memory 4003 may be achieved via at least one communication bus 4002, which provides a path for transferring data between them. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and can be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean there is only one bus or one type of bus.
Optionally, the electronic device 4000 may further comprise a transceiver 4004, the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data, etc. It should be noted that, in practical applications, the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that implements computing functionality, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 4003 may be, but is not limited to, a ROM (Read-Only Memory) or other static storage device that can store static information and instructions, a RAM (Random Access Memory) or other dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store the desired computer-readable instructions in the form of instructions or data structures and that can be accessed by the electronic device 4000.
The memory 4003 has computer readable instructions stored thereon, and the processor 4001 can read the computer readable instructions stored in the memory 4003 through the communication bus 4002.
The computer readable instructions are executed by the one or more processors 4001 to implement the instruction optimization method in the above embodiments.
Furthermore, in an embodiment of the present application, a storage medium having stored thereon computer readable instructions that are executed by one or more processors to implement the instruction optimization method as described above is provided.
In an embodiment of the present application, a computer program product is provided, where the computer program product includes computer readable instructions, where the computer readable instructions are stored in a storage medium, and where one or more processors of an electronic device read the computer readable instructions from the storage medium, load and execute the computer readable instructions, so that the electronic device implements the instruction optimization method as described above.
Compared with the related art: first, the scene steady-state set includes the second device parameters generated over multiple scene observation periods, and the target scene is updated based on this set, so the target scene better fits the actual situation and device control based on it better conforms to the user's real intention; second, when the target scene drifts from the actual situation, the device control instruction when the target scene is triggered is automatically updated based on the first device parameter and the second device parameters in the scene steady-state set, which reduces the steps a user must take to modify the target scene configuration, keeps instruction optimization simple, and further improves user experience; third, new scenes are pushed or created for the third device parameters under various environmental states, so instruction optimization suggestions for those states can be actively provided to the user, greatly enhancing the practicality of scene-based device control.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
The foregoing is only a partial embodiment of the present application, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (17)

1. A method of instruction optimization, the method comprising:
acquiring first equipment parameters of a target scene, wherein the first equipment parameters are generated by the intelligent equipment executing initial actions configured in the target scene;
acquiring a scene steady-state set of the target scene, wherein the scene steady-state set comprises second equipment parameters generated in a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period;
updating the device control instruction when the target scene is triggered according to the first device parameter and the second device parameter in the scene steady-state set; the device control instruction is used for instructing the intelligent device to execute a corresponding initial action.
2. The method of claim 1, wherein the updating device control instructions when the target scene is triggered based on the first device parameter and the second device parameter in the steady state set of scenes comprises:
performing characterization processing on the environment parameters when the target scene is triggered to obtain an environment state corresponding to the target scene;
clustering the second equipment parameters in the scene steady state set according to the obtained environmental state to obtain a third equipment parameter in the corresponding environmental state; the third device parameter is related to an action that the smart device is expected to perform in a corresponding environmental state;
and updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state.
3. The method of claim 2, wherein updating the device control instructions when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state comprises:
respectively comparing the differences between the third equipment parameters and the first equipment parameters in the corresponding environment states to obtain comparison results;
and updating the equipment control instruction when the target scene is triggered according to the comparison result.
4. The method of claim 2, wherein before the updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state, the method further comprises:
determining the confidence corresponding to the third equipment parameter aiming at the third equipment parameter in the corresponding environment state; the confidence is determined based on the number of times the third device parameter occurs within a preset time period or based on a ratio between the number of times the third device parameter occurs within a preset time period and the number of times the target scene is triggered;
and if the confidence coefficient meets a set condition, updating the device control instruction when the target scene is triggered according to the third device parameter and the first device parameter in the corresponding environment state.
5. The method of claim 2, wherein the clustering the second device parameters in the steady state set of scenes according to the obtained environmental states to obtain third device parameters in the corresponding environmental states comprises:
under the condition that a plurality of environment states corresponding to the target scene exist, determining a scene steady-state set under various environment states;
and clustering the second equipment parameters in the determined scene steady-state set for each environmental state to obtain the third equipment parameters in the corresponding environmental state.
6. The method of claim 5, wherein the clustering the second device parameters in the determined steady state set of scenes separately for each environmental state, after obtaining the third device parameters in the corresponding environmental state, further comprises:
detecting whether the third device parameters are the same in various environmental states;
If not, creating or pushing a new scene based on the third equipment parameters in various environment states.
7. The method of claim 6, wherein after the pushing of new scene creation based on the third device parameters in various environmental states, the method further comprises:
receiving a feedback message created for the new scene;
if the feedback message indicates that the creation of a new scene is allowed, creating a new scene based on the third device parameters respectively, and storing the created new scene to update the device control instruction when the new scene is triggered based on the third device parameters of the new scene.
8. The method of any of claims 1 to 7, wherein prior to the acquiring the scene steady-state set of the target scene, the method further comprises:
determining satisfaction of the target scene based on how often the intelligent device performs additional actions over multiple scene observation periods;
and if the satisfaction degree of the target scene indicates that the equipment control instruction needs to be updated when the target scene is triggered, acquiring a scene steady-state set of the target scene.
9. The method of any one of claims 1 to 7, further comprising:
determining a feedback amount for the scene observation period within a set time period, the feedback amount being used to indicate the detected number of times that the intelligent device executes additional actions within the scene observation period;
and adjusting the scene observation period based on the feedback quantity.
10. The method of any of claims 1 to 7, wherein after the updating of the device control instructions when the target scene is triggered based on the first device parameter and the second device parameter in the steady state set of scenes, the method further comprises:
and if the target scene is triggered, sending the updated equipment control instruction to the corresponding intelligent equipment, so that the intelligent equipment responds to the updated equipment control instruction to execute the initial action.
11. The method of any of claims 1 to 7, wherein prior to the acquiring the scene steady-state set of the target scene, the method further comprises:
detecting whether an instruction optimization function is started; the starting of the instruction optimization function refers to allowing updating processing of equipment control instructions when the target scene is triggered;
and if the instruction optimization function is detected to be started, acquiring a scene steady-state set of the target scene.
12. A method of instruction optimization, the method comprising:
displaying instruction optimization data of a target scene, wherein the instruction optimization data is used for indicating an equipment control instruction after updating processing; the updated device control instruction is determined by the gateway or the cloud according to the first device parameters of the target scene and the second device parameters in the scene steady-state set; the first device parameter is generated by the intelligent device executing an initial action configured in the target scene and is used for indicating the device state of the intelligent device; the scene steady-state set includes second device parameters over a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period;
and in response to a confirmation operation triggered for the instruction optimization data, sending a confirmation message to a gateway or a cloud, so that the gateway or the cloud updates the device control instruction when the target scene is triggered.
13. The method of claim 12, wherein the method further comprises:
and responding to a starting operation triggered by aiming at the instruction optimization function, and requesting the gateway or the cloud to mark that the instruction optimization function is started.
14. An instruction optimization apparatus, the apparatus comprising:
a first parameter acquisition module, configured to acquire a first device parameter of a target scene, wherein the first device parameter is generated by an intelligent device executing an initial action configured in the target scene;
the second parameter acquisition module is used for acquiring a scene steady-state set of the target scene, wherein the scene steady-state set comprises second equipment parameters generated in a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period;
the instruction updating module is used for updating the equipment control instruction when the target scene is triggered according to the first equipment parameter and the second equipment parameter in the scene steady-state set; the device control instruction is used for instructing the intelligent device to execute a corresponding initial action.
15. An instruction optimization apparatus, the apparatus comprising:
the data display module is used for displaying instruction optimization data of the target scene, wherein the instruction optimization data is used for indicating the updated equipment control instruction; the updated device control instruction is determined by the gateway or the cloud according to the first device parameters of the target scene and the second device parameters in the scene steady-state set; the first device parameter is generated by the intelligent device executing an initial action configured in the target scene and is used for indicating the device state of the intelligent device; the scene steady-state set includes second device parameters over a plurality of scene observation periods; the second device parameter is generated by the intelligent device executing additional actions in a scene observation period;
The message sending module is used for responding to the confirmation operation triggered by the instruction optimization data and sending a confirmation message to the gateway or the cloud end so that the gateway or the cloud end updates the equipment control instruction when the target scene is triggered.
16. An electronic device, comprising: at least one processor, and at least one memory, wherein,
the memory has computer readable instructions stored thereon;
the computer readable instructions are executed by one or more of the processors to cause an electronic device to implement the instruction optimization method of any one of claims 1-13.
17. A storage medium having stored thereon computer readable instructions that are executed by one or more processors to implement the instruction optimization method of any one of claims 1-13.