CN117193031A - Smart home control method based on user image and related equipment - Google Patents

Smart home control method based on user image and related equipment

Info

Publication number
CN117193031A
CN117193031A (application CN202311191836.XA)
Authority
CN
China
Prior art keywords
instruction
user
control
voice input
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311191836.XA
Other languages
Chinese (zh)
Inventor
张铭浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Konka Electronic Technology Co Ltd
Original Assignee
Shenzhen Konka Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Konka Electronic Technology Co Ltd filed Critical Shenzhen Konka Electronic Technology Co Ltd
Priority to CN202311191836.XA
Publication of CN117193031A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 — Systems controlled by a computer
    • G05B15/02 — Systems controlled by a computer electric
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/418 — Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B2219/00 — Program-control systems
    • G05B2219/20 — Pc systems
    • G05B2219/26 — Pc applications
    • G05B2219/2642 — Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The application discloses a smart home control method based on user image, together with related equipment. The method comprises the following steps: acquiring the portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction; obtaining the name of the device to be controlled and an operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table according to the device name; comparing the portrait information and the operation instruction to be executed with the attribute relation table: if they conform to the corresponding device control attribute in the attribute relation table, the device to be controlled executes the first control instruction; otherwise, the corresponding device control attribute is obtained from the attribute relation table according to the portrait information, and the user is prompted to input a second voice input instruction that conforms to the device control attribute. By combining user portrait data, the method realizes smart home control tailored to different users and improves the personalization and intelligence of the home.

Description

Smart home control method based on user image and related equipment
Technical Field
The application relates to the technical field of voice control, and in particular to a smart home control method based on user image and related equipment.
Background
In the smart home field, various manufacturers realize voice control of home appliances such as air conditioners, humidifiers and fans through intelligent terminal devices such as smart speakers and control panels. In the voice control field, the mainstream industry approach is to achieve semantic understanding of the user through deep learning, machine learning and similar techniques, converting the user's spoken request into a structured, executable JSON format that is transmitted to the cloud to control the device. The quality of spoken-language understanding varies across the industry, but the overall effect is roughly the same.
However, current voice control lacks any linkage with the user's portrait information, so a user may sometimes issue instructions that harm their own comfort or health. For example, a 70-year-old user may ask for the air conditioner to be set to 18 degrees, a temperature that is clearly unsuitable for the elderly. The field therefore urgently needs a device control standard based on user portrait information (age, gender, weight and the like) to realize smart home control.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
The main purpose of the application is to provide a smart home control method, system, terminal and computer-readable storage medium based on user image, aiming to solve the prior-art problem that voice control lacks linkage with the user's portrait information, so that users sometimes issue instructions that harm their own comfort or health.
To this end, the application provides a smart home control method based on user image, comprising the following steps:
acquiring portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction;
obtaining a name of equipment to be controlled and a first operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table of a user and equipment according to the name of the equipment to be controlled;
comparing the portrait information and the first operation instruction to be executed with the attribute relation table: if they conform to the corresponding device control attribute in the attribute relation table, executing the first control instruction on the device to be controlled; otherwise, obtaining the corresponding device control attribute from the attribute relation table according to the portrait information and prompting the user to input a second voice input instruction that conforms to the device control attribute.
Optionally, in the smart home control method based on user image, the portrait information includes age information, weight information and gender information;
and when the age information, the weight information and the gender information are all present at the same time, the portrait information with the highest priority and the first operation instruction to be executed are compared with the attribute relation table.
Optionally, in the smart home control method based on user image, the attribute relation table is used for storing the device control attributes corresponding to users with different portrait information.
Optionally, in the smart home control method based on user image, the algorithm processing includes: preprocessing, semantic parsing and post-processing.
Optionally, in the smart home control method based on user image, the performing algorithm processing on the first voice input instruction to obtain a first control instruction specifically includes:
performing general preprocessing, entity normalization, nickname matching, pinyin error correction and NLP word segmentation on the first voice input instruction to obtain a preprocessed voice input instruction;
replacing APP names and scene names appearing in the preprocessed voice input instructions with placeholders, filtering commands which do not accord with user intention in the replaced voice input instructions, and performing white list matching and regular matching on the filtered voice input instructions to construct semantic analysis results;
and carrying out intention rewriting, special intention processing, equipment control return, scene control return and field correction on the semantic analysis result to obtain a first control instruction.
Optionally, in the smart home control method based on user image, after the prompting the user to input the second voice input instruction conforming to the device control attribute again, the method further includes:
judging whether the second voice input instruction is identical to the first voice input instruction or not, and executing the first control instruction if the second voice input instruction is identical to the first voice input instruction;
and if the second voice input instruction is different from the first voice input instruction, performing algorithm processing on the second voice input instruction to obtain a second control instruction, obtaining a second operation instruction to be executed according to the second control instruction, and comparing the portrait information and the second operation instruction to be executed with the attribute relation table again.
Optionally, in the smart home control method based on user image, after executing the first control instruction if the second voice input instruction is the same as the first voice input instruction, the method further includes:
and acquiring the first control instruction, and updating the equipment control attribute corresponding to the portrait information in the attribute relation table according to the first control instruction and the portrait information of the user.
In addition, in order to achieve the above object, the present application further provides an intelligent home control system based on user image, wherein the intelligent home control system based on user image includes:
the control instruction acquisition module is used for acquiring portrait information of a user and a first voice input instruction of the user, and processing the first voice input instruction to obtain a first control instruction;
the attribute relation table acquisition module is used for acquiring a to-be-controlled equipment name and a first to-be-executed operation instruction according to the first control instruction, and acquiring an attribute relation table of a user and equipment according to the to-be-controlled equipment name;
and the control instruction execution module is used for comparing the portrait information and the first operation instruction to be executed with the attribute relation table: if they conform to the corresponding device control attribute in the attribute relation table, the first control instruction is executed on the device to be controlled; otherwise, the corresponding device control attribute is obtained from the attribute relation table according to the portrait information, and the user is prompted to input a second voice input instruction that conforms to the device control attribute.
In addition, to achieve the above object, the present application also provides a terminal, wherein the terminal includes: the intelligent home control system comprises a memory, a processor and an intelligent home control program based on user images, wherein the intelligent home control program based on the user images is stored in the memory and can run on the processor, and the intelligent home control program based on the user images realizes the steps of the intelligent home control method based on the user images when being executed by the processor.
In addition, in order to achieve the above object, the present application further provides a computer readable storage medium, wherein the computer readable storage medium stores a smart home control program based on a user image, and the smart home control program based on the user image realizes the steps of the smart home control method based on the user image as described above when being executed by a processor.
In the application, the portrait information of a user and a first voice input instruction of the user are obtained, and the first voice input instruction undergoes algorithm processing to obtain a first control instruction; the name of the device to be controlled and a first operation instruction to be executed are obtained according to the first control instruction, and an attribute relation table is obtained according to the device name; the portrait information and the first operation instruction to be executed are compared with the attribute relation table: if they conform to the corresponding device control attribute, the device to be controlled executes the first control instruction; otherwise, the corresponding device control attribute is obtained from the attribute relation table according to the portrait information, and the user is prompted to input a second voice input instruction that conforms to the device control attribute. The application improves the user's control experience and enhances the home's intelligence; reasonable control settings help save energy and reduce emissions, and cultivate good appliance usage habits.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the user image-based smart home control method of the present application;
FIG. 2 is an overall architecture diagram of a preferred embodiment of a user image based smart home control method of the present application;
FIG. 3 is a schematic diagram of an algorithm process performed in the user image-based smart home control method of the present application;
FIG. 4 is a flow chart of a preferred embodiment of the algorithm processing in the user image based smart home control method of the present application;
FIG. 5 is a schematic diagram of a semantic parsing part when performing algorithm processing in the user image-based smart home control method of the present application;
FIG. 6 is a schematic diagram of a preferred embodiment of the user image-based smart home control system of the present application;
FIG. 7 is a diagram of the operating environment of a preferred embodiment of the terminal of the present application.
Detailed Description
The application provides an intelligent home control method based on user images and related equipment, and aims to make the purposes, technical schemes and effects of the intelligent home control method clearer and more definite. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In addition, where the embodiments of the present application use descriptions such as "first" and "second", these are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with one another, provided the combination can be realized by those skilled in the art; when combined solutions are contradictory or cannot be realized, the combination should be considered not to exist and to fall outside the scope of protection claimed in the present application.
The intelligent home control method based on user image according to the preferred embodiment of the present application, as shown in fig. 1 and 2, comprises the following steps:
step S100, obtaining portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction.
When a user within the service range of a smart home wants to control it, the user needs to log in to a personal account and fill in their own portrait information when registering the account. The portrait information includes age information, gender information and weight information; at least one of the three must be provided when the user logs in.
It should be understood that the portrait information in the application includes but is not limited to age, gender and weight, and may also cover other physical health indicators such as blood pressure and heart rate; since the portrait information relates to user privacy, only the listed items are used here for illustration.
As shown in fig. 3, after the user logs in successfully and fills in the portrait information, the user sends a first voice input instruction to a central control system (used in multimedia classrooms, multifunctional conference halls, command and control centers, smart homes and the like). The user can operate devices such as projectors, display stands, disc players and video recorders through button panels, computer displays, touch screens, wireless remote controls and the central control software. For example, the user speaks the voice instruction "turn on the air conditioner and set the temperature to 18 degrees"; this is fed to the scheduling module as the first voice input instruction, and, with the support of the configuration module, database and server interface, the core algorithm processes it through the scheduling module to obtain the first control instruction.
Further, the algorithm processing includes preprocessing, semantic parsing and post-processing; as shown in fig. 4, performing algorithm processing on the first voice input instruction to obtain a first control instruction specifically includes:
step S101, performing general preprocessing, entity normalization, nickname matching, pinyin error correction and NLP word segmentation on the first voice input instruction to obtain a preprocessed voice input instruction.
Step S102, as shown in FIG. 5, replacing APP names and scene names appearing in the preprocessed voice input instructions with placeholders, filtering commands which do not accord with user intention in the replaced voice input instructions, and performing white list matching and regular matching on the filtered voice input instructions to construct a semantic analysis result.
An APP name is used to directly put a certain APP into a preset state. Scene names include built-in scene names and user-defined scene names; a scene directly puts several devices into preset states. For example, a user may define a "reading mode" in which the desk lamp is turned on, the bedroom light is turned off, the air conditioner is set to 27 degrees and the washing machine is started; all of these instructions are executed automatically when the user issues the command "enter reading mode".
To make unified recognition by the semantic algorithm easier, APP names and scene names are temporarily replaced with placeholders (symbols that occupy a fixed position into which content is later inserted, widely used in document editing) in the preprocessing stage, and the corresponding names are filled back into word slots in the post-processing stage.
Word slots are generally used to hold query variables, one word slot per variable. Because the final result returned by the algorithm side of the application is in JSON format, each parameter in the JSON corresponds to a word slot. For example, when the user issues the voice command "help me set a certain air conditioner to 18 degrees", the returned word slots are: { intent: set temperature, nickname: a certain air conditioner, device: air conditioner, parameter: 18 degrees Celsius }.
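The word-slot result described above can be sketched in Python; the field names follow the example in the text, though the exact JSON keys used by the patent's algorithm are not specified:

```python
import json

# Hypothetical word-slot result for the command
# "help me set a certain air conditioner to 18 degrees".
slots = {
    "intent": "set_temperature",
    "nickname": "a certain air conditioner",
    "device": "air_conditioner",
    "parameter": "18 degrees Celsius",
}

# The algorithm side returns its final result in JSON format,
# one JSON parameter per word slot.
payload = json.dumps(slots, ensure_ascii=False)
print(payload)
```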
Further, after the APP names and scene names are temporarily replaced with placeholders, multi-command filtering is performed on the replaced voice input instruction to remove commands that do not match the user's intent. The multi-command filtering process works as follows: if multiple commands are recognized, those that do not match the user's intent are filtered out. For example, the user command "turn on the television, turn off the air conditioner" may be recognized as three commands: "turn on the television", "turn on the air conditioner" and "turn off the air conditioner". Since the user intends "turn on the television" and "turn off the air conditioner", the command "turn on the air conditioner" must be filtered out.
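The multi-command filtering step can be illustrated with a small sketch; representing commands as (device, action) pairs and assuming the user's intended command set is known are simplifications for illustration, not details from the patent:

```python
def filter_commands(recognized, intended):
    """Drop recognized commands that do not match the user's intent."""
    intended_set = set(intended)
    return [cmd for cmd in recognized if cmd in intended_set]

# "turn on the television, turn off the air conditioner" may be
# over-segmented into three commands:
recognized = [("television", "on"), ("air_conditioner", "on"),
              ("air_conditioner", "off")]
intended = [("television", "on"), ("air_conditioner", "off")]

# The spurious ("air_conditioner", "on") command is dropped.
print(filter_commands(recognized, intended))
```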
White-list matching is then performed on the filtered voice input instruction: commands that users invoke extremely frequently are placed on a white list, and white-listed commands skip voice matching and parsing and are filled directly into word slots for control. Finally, regular-expression matching is performed on the white-list-matched voice instruction to obtain the semantic parsing result.
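A hedged sketch of the white-list and regular-matching stage; the white-list entries and the regular expression below are invented for illustration and are not taken from the patent:

```python
import re

# Hypothetical white list: extremely frequent commands skip matching and
# parsing and are filled straight into word slots.
WHITELIST = {
    "turn on the light": {"intent": "power_on", "device": "light"},
}

# Illustrative regular expression for temperature-setting commands.
TEMP_RE = re.compile(r"set (?P<device>.+?) to (?P<value>\d+) degrees")

def parse(text):
    if text in WHITELIST:          # white-list hit: fill slots directly
        return dict(WHITELIST[text])
    m = TEMP_RE.search(text)       # otherwise fall back to regular matching
    if m:
        return {"intent": "set_temperature", **m.groupdict()}
    return None

print(parse("turn on the light"))
print(parse("set the air conditioner to 18 degrees"))
```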
And step S103, carrying out intention rewriting, special intention processing, equipment control return, scene control return and field correction on the semantic analysis result to obtain a first control instruction.
Post-processing mainly implements intent rewriting (for example, word slots containing "degrees" are changed to "degrees Celsius"), special-intent handling (turning all lights on or off is a scene rather than a device; fully opening or closing the curtain is a travel control rather than a device command), device control return (adding the matched device ID to the device control list), scene control return (adding the matched scene ID to the scene control list) and field correction (custom patches for a specific project or version), yielding the first control instruction.
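The post-processing rules above (intent rewrite, special-intent handling) can be sketched as simple transformations over the slot dictionary; the field names and the `all_lights` value are assumptions for illustration:

```python
def post_process(slots):
    """Illustrative post-processing over the word-slot dictionary."""
    out = dict(slots)
    # Intent rewrite: normalize a bare "degrees" unit to "degrees Celsius".
    if out.get("unit") == "degrees":
        out["unit"] = "degrees Celsius"
    # Special-intent handling: "all lights on/off" is a scene, not a device.
    if out.get("device") == "all_lights":
        out["scene"] = out.pop("device")
    return out

print(post_process({"intent": "set_temperature", "unit": "degrees", "value": 18}))
print(post_process({"intent": "power_off", "device": "all_lights"}))
```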
That is, after the algorithm processing, the first voice input instruction is converted into a first control instruction that can direct the smart home device to execute the corresponding operation.
Step S200, obtaining the name of the device to be controlled and a first operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table of the user and the device according to the device name.
The first control instruction issued by the user to the device needs to be rewritten to some extent according to the user's portrait data, so as to provide a better control experience. Therefore, before the user logs in to use the smart home, an attribute relation table between users and the device is preset for each smart home device; the table covers the possible cases of user portrait information and the device attributes corresponding to each case.
For example, if the voice command sent by the user is to set the air conditioner to 18 degrees, the name of the equipment to be controlled is "air conditioner" in the obtained first control command, and at this time, a preset attribute relation table between the user and the air conditioner is obtained.
Step S300, comparing the portrait information and the first operation instruction to be executed with the attribute relation table: if they conform to the corresponding device control attribute in the attribute relation table, executing the first control instruction on the device to be controlled; otherwise, obtaining the corresponding device control attribute from the attribute relation table according to the portrait information and prompting the user to input a second voice input instruction that conforms to the device control attribute.
As shown in table 1 below, the attribute relationship table will be further illustrated by taking an air conditioner as an example.
Table 1 relationship between user information and air conditioner attributes
Specifically, as can be seen from Table 1 above, the user information includes age, gender and weight, the device attributes include temperature, wind speed and mode, and device attributes are given for each kind of user information. For example, the air conditioner temperature for users between 18 and 40 years old may be 20 to 26 degrees, while for all other ages it should be above 26 degrees.
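One possible encoding of an attribute relation table like Table 1, keyed by age bracket. The 20-26 degree range for ages 18-40 follows the text; the 30-degree upper bound for the other brackets is an assumption, since the text only says "above 26 degrees":

```python
# Hypothetical encoding of Table 1 for the air conditioner: allowed
# temperature ranges keyed by age bracket.
AC_TABLE = [
    {"age_min": 18, "age_max": 40,  "temp_min": 20, "temp_max": 26},
    {"age_min": 0,  "age_max": 17,  "temp_min": 26, "temp_max": 30},
    {"age_min": 41, "age_max": 120, "temp_min": 26, "temp_max": 30},
]

def allowed_range(age):
    """Look up the allowed temperature range for a user's age."""
    for row in AC_TABLE:
        if row["age_min"] <= age <= row["age_max"]:
            return row["temp_min"], row["temp_max"]
    return None

print(allowed_range(26))  # (20, 26)
print(allowed_range(70))  # (26, 30)
```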
Meanwhile, in the application, the age information, the weight information and the gender information have priorities; when all three are present at the same time, the portrait information with the highest priority and the first operation instruction to be executed are compared with the attribute relation table.
It should be emphasized that the age brackets, gender categories and weight brackets in the application are for reference only; the user portrait dimensions may be divided according to the specific application scenario, and the user portrait data and their priorities may be set according to actual needs, which is not limited here.
For example, for an air conditioner we set the user information priority to age > gender > weight. The user speaks a sentence, the algorithm processes it, and the formatted control command is then reprocessed. Suppose user A says "set the air conditioner to 25 degrees"; the command after algorithm processing is { device: air conditioner, action: set temperature, value: 25 degrees }. Based on the previously acquired portrait information of user A, { gender: female, age: 26, weight: 80 jin (40 kg) }, the control command must be checked. Because age has the highest priority, the weight and gender information need not be considered: the air conditioner temperature setting corresponding to age 26 (18-40 years old, 20-26 degrees) is looked up directly in the attribute relation table. The temperature desired by user A, 25 degrees, falls within the reasonable range, so the first control instruction is executed on the device to be controlled (the air conditioner).
Further, if user A instead says "set the air conditioner to 18 degrees", the command after algorithm processing is { device: air conditioner, action: set temperature, value: 18 degrees }. Based on the same portrait information { gender: female, age: 26, weight: 80 jin (40 kg) } and the same age-first priority, the allowed air conditioner temperature for age 26 (18-40 years old, 20-26 degrees) is looked up in the attribute relation table. The desired temperature of 18 degrees is outside the reasonable range, so the system must reply: "Sorry, we suggest you set the temperature to 20-26 degrees".
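The compare-and-prompt flow for the two examples above can be sketched as follows; the reply wording mirrors the suggestion in the text:

```python
def check_instruction(requested_temp, allowed):
    """Execute if the requested temperature is inside the allowed range,
    otherwise return the suggestion prompt described in the text."""
    lo, hi = allowed
    if lo <= requested_temp <= hi:
        return "execute"
    return f"Sorry, we suggest you set the temperature to {lo}-{hi} degrees"

# User A is 26 years old, so the allowed range is (20, 26):
print(check_instruction(25, (20, 26)))  # 25 degrees is executed
print(check_instruction(18, (20, 26)))  # 18 degrees triggers the prompt
```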
It can be understood that if the user's portrait information does not include age information but only weight and gender information, the judgment follows the priority order and the gender information is used to decide whether the instruction meets the device control attribute; the specific process is similar to the above and is not repeated here.
Further, after prompting the user to input the second voice input instruction conforming to the control attribute of the device, the method further includes:
judging whether the second voice input instruction is identical to the first voice input instruction or not, and executing the first control instruction if the second voice input instruction is identical to the first voice input instruction;
and if the second voice input instruction is different from the first voice input instruction, performing algorithm processing on the second voice input instruction to obtain a second control instruction, obtaining a second operation instruction to be executed according to the second control instruction, and comparing the portrait information and the second operation instruction to be executed with the attribute relation table again.
It will be appreciated that if the user insists on their original intention, the second voice input instruction will be identical to the first. Here "identical" does not require the same wording: as long as the meaning after semantic parsing is the same, the second voice input instruction is considered identical to the first.
If the second voice input instruction differs from the first, the same algorithm processing is applied to it to obtain a second control instruction, a second operation instruction to be executed is derived from that, and the portrait information and the second operation instruction to be executed are again compared with the attribute relation table: if they conform to the corresponding device control attribute, the second control instruction is executed on the device to be controlled; if not, the process is repeated and the prompt is issued again. This improves the personalization and intelligence of the home and gives the user guidance suited to their own health condition.
Further, if the second voice input instruction is the same as the first voice input instruction, after the first control instruction is executed the method further includes:
acquiring the first control instruction, and updating the equipment control attribute corresponding to the portrait information in the attribute relation table according to the first control instruction and the user's portrait information.
For example, suppose the user's first voice input instruction is "set the air conditioner to 18 degrees". If, after the prompt, the user makes no change and issues the same instruction again, the user is considered to have insisted on it twice and the instruction must be executed: the air conditioner temperature is adjusted according to the first voice input instruction, and the first control instruction of the user is collected.
Meanwhile, the equipment control attribute corresponding to the portrait information in the attribute relation table is updated according to the first control instruction and the user's portrait information. For example, if a 26-year-old user twice insists on setting the air conditioner to 18 degrees, the suitable temperature for the [18-40 years] age band in the attribute relation table is adjusted to [18-26 degrees], so the user is not reminded again the next time he or she asks for 18 degrees. This respects the user's personalized settings while keeping smart home control reasonable.
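A minimal sketch of this table update, assuming the attribute relation table maps an age band to a suitable temperature range; the concrete schema is an assumption for illustration, not taken from the patent.

```python
def update_attribute_table(table, age, insisted_temp):
    """Widen the suitable-temperature range of the band containing `age`
    so the twice-insisted setting is no longer flagged next time.

    table maps (min_age, max_age) -> (min_temp, max_temp).
    """
    for (age_lo, age_hi), (t_lo, t_hi) in table.items():
        if age_lo <= age <= age_hi:
            # Extend the range just enough to cover the insisted value.
            table[(age_lo, age_hi)] = (min(t_lo, insisted_temp),
                                       max(t_hi, insisted_temp))
    return table
```

Matching the worked example in the text, a [18-40 years] band with a suitable range of [20-26 degrees] becomes [18-26 degrees] after a 26-year-old insists on 18 degrees twice; bands not containing the user's age are left untouched.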
The application improves the user's control experience and makes the home more intelligent; reasonable control settings also help save energy, reduce emissions, and cultivate good appliance usage habits.
Further, as shown in fig. 6, the present application also provides a smart home control system based on the user image, built on the smart home control method based on the user image described above, where the system includes:
the control instruction acquisition module 51 is configured to acquire portrait information of a user and a first voice input instruction of the user, and process the first voice input instruction to obtain a first control instruction;
the attribute relation table obtaining module 52 is configured to obtain a name of a device to be controlled and a first operation instruction to be executed according to the first control instruction, and obtain an attribute relation table of a user and the device according to the name of the device to be controlled;
and a control instruction execution module 53, configured to compare the portrait information and the first operation instruction to be executed with the attribute relation table; if they accord with the corresponding equipment control attribute in the attribute relation table, the module executes the first control instruction on the equipment to be controlled; otherwise, it obtains the corresponding equipment control attribute from the attribute relation table according to the portrait information and prompts the user to input a second voice input instruction that conforms to the equipment control attribute.
Further, as shown in fig. 7, based on the above smart home control method and system based on the user image, the present application correspondingly provides a terminal, which includes a processor 10, a memory 20 and a display 30. Fig. 7 shows only some components of the terminal; it should be understood that not all illustrated components are required, and more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. In other embodiments the memory 20 may be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal. Further, the memory 20 may include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software installed on the terminal and various data, such as the program code installed on the terminal, and may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a smart home control program 40 based on a user image, and the program 40 can be executed by the processor 10 to implement the smart home control method based on the user image in the present application.
The processor 10 may in some embodiments be a central processing unit (Central Processing Unit, CPU), a microprocessor, or another data processing chip for executing program code stored in the memory 20 or processing data, for example executing the smart home control method based on the user image.
The display 30 may in some embodiments be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display 30 is used for displaying information on the terminal and for displaying a visual user interface. The components 10-30 of the terminal communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the smart home control program 40 based on the user portraits in the memory 20:
acquiring portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction;
obtaining a name of equipment to be controlled and a first operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table of a user and equipment according to the name of the equipment to be controlled;
comparing the portrait information and the first operation instruction to be executed with the attribute relation table; executing the first control instruction on the equipment to be controlled if they accord with the corresponding equipment control attribute in the attribute relation table; otherwise, obtaining the corresponding equipment control attribute from the attribute relation table according to the portrait information and prompting the user to input a second voice input instruction that conforms to the equipment control attribute.
Wherein the portrait information includes age information, weight information, and sex information;
and when the age information, the weight information, and the sex information are all present, the portrait information with the highest priority, together with the first operation instruction to be executed, is compared with the attribute relation table.
The attribute relation table is used for storing equipment control attributes corresponding to users of different portrait information.
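The priority rule above can be sketched as follows. Note the ordering age > weight > sex is an assumption for illustration: the text says the highest-priority portrait attribute is compared, but does not fix a concrete order.

```python
# Assumed priority order; the patent does not specify one.
PRIORITY = ("age", "weight", "sex")

def pick_portrait_key(portrait):
    """Return the highest-priority attribute present in the portrait dict,
    or None if no portrait attribute is available."""
    for key in PRIORITY:
        if portrait.get(key) is not None:
            return key
    return None
```

When all three attributes coexist, only the winning attribute is used for the comparison against the attribute relation table; the others are ignored for that lookup.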
Wherein the algorithmic processing includes: preprocessing, semantic parsing and post-processing.
The first voice input instruction is subjected to algorithm processing to obtain a first control instruction, which specifically includes:
performing general preprocessing, entity normalization, nickname matching, pinyin error correction and NLP word segmentation on the first voice input instruction to obtain a preprocessed voice input instruction;
replacing APP names and scene names appearing in the preprocessed voice input instructions with placeholders, filtering commands which do not accord with user intention in the replaced voice input instructions, and performing white list matching and regular matching on the filtered voice input instructions to construct semantic analysis results;
and carrying out intention rewriting, special intention processing, equipment control return, scene control return and field correction on the semantic analysis result to obtain a first control instruction.
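The three stages above can be illustrated with a toy pipeline. The real stages (nickname matching, pinyin error correction, NLP word segmentation, whitelist matching, intent rewriting) are only stubbed here, and the regex pattern and field names are assumptions, not the patent's implementation.

```python
import re

def parse_voice_command(text):
    """Toy preprocessing -> semantic parsing -> post-processing pipeline."""
    # Stage 1: preprocessing — normalize case and whitespace.
    text = " ".join(text.lower().split())
    # Stage 2: semantic parsing — regex-match a device/value pattern.
    m = re.match(r"set (?P<device>\w+) to (?P<value>\d+)", text)
    if m is None:
        return None  # command filtered out as not matching user intent
    result = {"device": m.group("device"), "value": int(m.group("value"))}
    # Stage 3: post-processing — field correction into a control instruction.
    result["action"] = ("set_temperature"
                        if result["device"] in ("ac", "aircon") else "set")
    return result
```

For example, "Set  AC to 18" normalizes to "set ac to 18", parses into a device/value pair, and is post-processed into a temperature-setting control instruction, while non-matching input is rejected.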
After the prompting user inputs the second voice input instruction conforming to the control attribute of the equipment, the method further comprises:
judging whether the second voice input instruction is identical to the first voice input instruction or not, and executing the first control instruction if the second voice input instruction is identical to the first voice input instruction;
and if the second voice input instruction differs from the first voice input instruction, performing algorithm processing on the second voice input instruction to obtain a second control instruction, obtaining a second operation instruction to be executed according to the second control instruction, and comparing the portrait information and the second operation instruction to be executed with the attribute relation table again.
Wherein if the second voice input instruction is the same as the first voice input instruction, after executing the first control instruction, the method further includes:
and acquiring the first control instruction, and updating the equipment control attribute corresponding to the portrait information in the attribute relation table according to the first control instruction and the portrait information of the user.
The application also provides a computer readable storage medium, wherein the computer readable storage medium stores a smart home control program based on the user image, and the smart home control program based on the user image realizes the steps of the smart home control method based on the user image when being executed by a processor.
In summary, the application discloses a smart home control method based on user images and related equipment. The method includes: acquiring portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction; obtaining a name of equipment to be controlled and a first operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table according to the name of the equipment to be controlled; and comparing the portrait information and the first operation instruction to be executed with the attribute relation table, executing the first control instruction on the equipment to be controlled if they accord with the corresponding equipment control attribute in the attribute relation table, and otherwise obtaining the corresponding equipment control attribute from the attribute relation table according to the portrait information and prompting the user to input a second voice input instruction that conforms to the equipment control attribute. The application improves the user's control experience and makes the home more intelligent; reasonable control settings also help save energy, reduce emissions, and cultivate good appliance usage habits.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal comprising the element.
Of course, those skilled in the art will appreciate that all or part of the above-described embodiment methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the method embodiments above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is to be understood that the application is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (10)

1. A smart home control method based on a user image, characterized by comprising the following steps:
acquiring portrait information of a user and a first voice input instruction of the user, and performing algorithm processing on the first voice input instruction to obtain a first control instruction;
obtaining a name of equipment to be controlled and a first operation instruction to be executed according to the first control instruction, and obtaining an attribute relation table of a user and equipment according to the name of the equipment to be controlled;
comparing the portrait information and the first operation instruction to be executed with the attribute relation table, executing the first control instruction on the equipment to be controlled if the portrait information and the first operation instruction to be executed accord with the corresponding equipment control attribute in the attribute relation table, otherwise, acquiring the corresponding equipment control attribute in the attribute relation table according to the portrait information, and prompting a user to input a second voice input instruction conforming to the equipment control attribute again.
2. The smart home control method based on user portraits of claim 1, wherein the portraits information includes age information, weight information, and sex information;
and when the age information, the weight information and the sex information exist at the same time, comparing the portrait information with the highest priority and the first operation instruction to be executed with the attribute relation table.
3. The smart home control method based on user portraits of claim 1, wherein the attribute relation table is used for storing device control attributes corresponding to users of different portraits information.
4. The smart home control method based on user portraits of claim 1, wherein the algorithmic processing comprises: preprocessing, semantic parsing and post-processing.
5. The smart home control method based on user image according to claim 4, wherein the performing the algorithm processing on the first voice input instruction to obtain a first control instruction specifically includes:
performing general preprocessing, entity normalization, nickname matching, pinyin error correction and NLP word segmentation on the first voice input instruction to obtain a preprocessed voice input instruction;
replacing APP names and scene names appearing in the preprocessed voice input instructions with placeholders, filtering commands which do not accord with user intention in the replaced voice input instructions, and performing white list matching and regular matching on the filtered voice input instructions to construct semantic analysis results;
and carrying out intention rewriting, special intention processing, equipment control return, scene control return and field correction on the semantic analysis result to obtain a first control instruction.
6. The smart home control method based on user image according to claim 1, wherein after prompting the user to input the second voice input command conforming to the device control attribute again, further comprising:
judging whether the second voice input instruction is identical to the first voice input instruction or not, and executing the first control instruction if the second voice input instruction is identical to the first voice input instruction;
and if the second voice input instruction is different from the first voice input instruction, performing algorithm processing on the second voice input instruction to obtain a second control instruction, obtaining a second operation instruction to be executed according to the second control instruction, and comparing the portrait information and the second operation instruction to be executed with the attribute relation table again.
7. The smart home control method according to claim 6, wherein if the second voice input command is the same as the first voice input command, after executing the first control command, further comprising:
and acquiring the first control instruction, and updating the equipment control attribute corresponding to the portrait information in the attribute relation table according to the first control instruction and the portrait information of the user.
8. An intelligent home control system based on user images, which is characterized by comprising:
the control instruction acquisition module is used for acquiring portrait information of a user and a first voice input instruction of the user, and processing the first voice input instruction to obtain a first control instruction;
the attribute relation table acquisition module is used for acquiring a to-be-controlled equipment name and a first to-be-executed operation instruction according to the first control instruction, and acquiring an attribute relation table of a user and equipment according to the to-be-controlled equipment name;
and the control instruction execution module is used for comparing the portrait information with the first operation instruction to be executed with the attribute relation table, executing the first control instruction on the equipment to be controlled if the portrait information and the first operation instruction to be executed accord with the corresponding equipment control attribute in the attribute relation table, otherwise, acquiring the corresponding equipment control attribute in the attribute relation table according to the portrait information, and prompting a user to input a second voice input instruction conforming to the equipment control attribute again.
9. A terminal, the terminal comprising: the smart home control system comprises a memory, a processor and a smart home control program based on user images, wherein the smart home control program based on user images is stored in the memory and can run on the processor, and the smart home control program based on user images realizes the steps of the smart home control method based on user images according to any one of claims 1 to 7 when being executed by the processor.
10. A computer readable storage medium, wherein the computer readable storage medium stores a user image based smart home control program, which when executed by a processor, implements the steps of the user image based smart home control method as claimed in any one of claims 1 to 7.
CN202311191836.XA 2023-09-14 2023-09-14 Smart home control method based on user image and related equipment Pending CN117193031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311191836.XA CN117193031A (en) 2023-09-14 2023-09-14 Smart home control method based on user image and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311191836.XA CN117193031A (en) 2023-09-14 2023-09-14 Smart home control method based on user image and related equipment

Publications (1)

Publication Number Publication Date
CN117193031A true CN117193031A (en) 2023-12-08

Family

ID=89001346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311191836.XA Pending CN117193031A (en) 2023-09-14 2023-09-14 Smart home control method based on user image and related equipment

Country Status (1)

Country Link
CN (1) CN117193031A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117518857A (en) * 2023-12-31 2024-02-06 深圳酷宅科技有限公司 Personalized intelligent home control strategy generation method and system applying NLP
CN117518857B (en) * 2023-12-31 2024-04-09 深圳酷宅科技有限公司 Personalized intelligent home control strategy generation method and system applying NLP


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination