CN110125938B - Robot control method and device and robot - Google Patents


Info

Publication number
CN110125938B
CN110125938B (application CN201910437548.5A)
Authority
CN
China
Prior art keywords
robot
interaction
data
controlling
weighting coefficient
Prior art date
Legal status
Active
Application number
CN201910437548.5A
Other languages
Chinese (zh)
Other versions
CN110125938A (en)
Inventor
叶群松
龚桂强
张瑞忠
Current Assignee
Beijing Meiji Technology Co ltd
Original Assignee
Beijing Geyuan Zhibo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Geyuan Zhibo Technology Co ltd
Publication of CN110125938A
Application granted
Publication of CN110125938B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the present application provide a robot control method, a robot control device, and a robot. The control method comprises: acquiring at least one of environmental data of the robot and interaction data between the robot and a user; and controlling the growth of the robot based on at least one of the environmental data and the interaction data. According to the embodiments of the application, at least one of the environment in which the robot is located and the interaction between the robot and the user can be mapped onto the growth of the robot, so that the user can intuitively feel the robot growing under the user's companionship and care. This strengthens the bond between the user and the robot and allows the robot to play a better companion role.

Description

Robot control method and device and robot
Technical Field
The present invention relates to the field of robots, and in particular, to a robot control method, a robot control device, and a robot.
Background
Robots can be divided into two categories according to their application environment: industrial robots and service robots. Service robots can be further subdivided into entertainment and leisure robots, companion robots, household service robots, special-purpose robots, and the like. Existing companion robots can only interact with a user through simple language or behavior, such as singing, dancing, storytelling, encyclopedia question answering, and weather queries, and cannot satisfy users' desire for personalized companionship.
Disclosure of Invention
The embodiments of the invention provide a robot control method, a robot control device, and a robot, which can satisfy a user's demand for personalized companionship from the robot.
In a first aspect, a control method for a robot is provided, including: acquiring at least one of environmental data of the robot and interactive data between the robot and a user; controlling robot growth based on at least one of the environmental data and the interaction data.
In some embodiments, controlling robot growth comprises: controlling a change in the shape and/or skill upgrade of the robot.
In some embodiments, the change in the shape of the robot includes a change in at least one of a height, a shape, a weight, and a volume of the robot.
In some embodiments, the skill upgrade of the robot includes an upgrade of the robot's interactive content and/or actions.
In some embodiments, the environmental data includes at least one of: temperature, humidity, air quality and noise; the interaction data includes at least one of: interactive duration, interactive content distribution and interactive time distribution.
In some embodiments, the change in the shape of the robot comprises a change in height of the robot, and controlling robot growth based on the environmental data and/or the interaction data comprises: determining a weighting coefficient corresponding to the environment data and a weighting coefficient corresponding to the interactive data; acquiring a first growth rate corresponding to the environmental data and a second growth rate of the robot corresponding to the interactive data; and controlling the height of the robot to change according to the weighting coefficient corresponding to the environment data, the first growth rate, the weighting coefficient corresponding to the interactive data and the second growth rate.
In some embodiments, controlling the height change of the robot according to the weighting factor corresponding to the environmental data, the first growth rate, the weighting factor corresponding to the interactive data, and the second growth rate includes:
calculating the height by which the robot can grow according to the following expression, and controlling the robot to grow by that height:
L=G1*(T+H+N)+G2*(I+C+S)
wherein L represents the height that the robot can grow, G1 represents the first growth rate, G2 represents the second growth rate, T represents the weighting coefficient corresponding to the temperature, H represents the weighting coefficient corresponding to the humidity, N represents the weighting coefficient corresponding to the noise, I represents the weighting coefficient corresponding to the interaction duration, C represents the weighting coefficient corresponding to the interaction content distribution, and S represents the weighting coefficient corresponding to the interaction time distribution.
In some embodiments, determining the weighting factor corresponding to the environmental data and the weighting factor corresponding to the interaction data comprises:
acquiring total weights corresponding to temperature, humidity and noise respectively;
determining the weight of each corresponding current period according to the respective ranges of the temperature, the humidity and the noise in the current period;
determining weighting coefficients corresponding to the temperature, the humidity and the noise in the current period respectively, wherein the weighting coefficients corresponding to the temperature, the humidity and the noise are products of the total weight corresponding to each and the weight of the current period corresponding to each;
acquiring total weights corresponding to interaction duration, interaction content distribution and interaction time distribution respectively;
determining the weight of each corresponding current period according to the range of each interaction duration, interaction content distribution and interaction time distribution in the current period;
determining weighting coefficients corresponding to the interaction duration, the interaction content distribution and the interaction time distribution in the current period, wherein the weighting coefficients corresponding to the interaction duration, the interaction content distribution and the interaction time distribution are products of total weights corresponding to the interaction duration, the interaction content distribution and the interaction time distribution and weights corresponding to the current period.
In some embodiments, prior to controlling robot growth, the method further comprises: determining that the robot meets at least one of the following conditions: the robot is in an idle state; no one is present around the robot; the current time falls within a preset time period set for robot growth.
In some embodiments, the method further comprises: judging whether the robot grows to a final state; and if the robot grows to the final state, stopping controlling the robot to grow.
In some embodiments, the method further comprises: adjusting parameters related to the environmental data and/or the interaction data according to the instructions, the parameters including at least one of a temperature preference, a humidity preference, an activity preference, a content preference, a time preference, a weighting factor corresponding to the environmental data, and a weighting factor corresponding to the interaction data.
In some embodiments, controlling the growth of the robot comprises: and generating one or more control signals, wherein each control signal is used for controlling at least one part of the robot to grow.
In some embodiments, the method further comprises: outputting state information of the robot, wherein the state information comprises at least one of the following: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot, the distribution of the interactive contents of the robot, and the distribution of the interactive time of the robot.
In some embodiments, the method further comprises: acquiring user information;
wherein said controlling the robot to grow in accordance with at least one of the environmental data and the interaction data comprises:
and controlling the robot to grow according to at least one of the environment data and the interaction data and the user information.
In a second aspect, a control device for a robot is provided, which is used for realizing the control method for the robot of the first aspect and each embodiment. Accordingly, the apparatus includes modules or units for performing the respective procedures above. For example, the apparatus includes: an acquisition unit for acquiring at least one of environmental data of the robot and interactive data between the robot and a user; and the processing unit is used for controlling the robot to grow according to at least one of the environmental data and the interactive data.
In some embodiments, the apparatus further comprises: an output unit for outputting status information of the robot, the status information including at least one of: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot in a specified period, the interaction distribution of the robot in the specified period, and the interaction time distribution of the robot in the specified period.
In some embodiments, the apparatus further comprises: the input unit is used for receiving an instruction input by a user when the robot is in a parent mode or a privilege mode; the processing unit is further configured to adjust a parameter associated with the environmental data and/or the interaction data according to the instruction, the parameter including at least one of a temperature preference, a humidity preference, an activity preference, a content preference, a time preference, a weighting factor corresponding to the environmental data, and a weighting factor corresponding to the interaction data.
In a third aspect, a robot is provided, comprising: a body; and a processor installed inside the body for executing the control method according to the first aspect and the embodiments.
In some embodiments, the robot further comprises: a sensor module for collecting environmental data of the robot, the sensor module including at least one of the following sensors: humidity sensor, ambient noise sensor, air quality sensor.
In some embodiments, the robot further comprises: an output device for outputting status information of the robot, the status information including at least one of: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot in a specified period, the interaction distribution of the robot in the specified period, and the interaction time distribution of the robot in the specified period.
In some embodiments, the robot is a biped robot, and the body includes height-adjustable biped joint mechanisms.
In a fourth aspect, there is provided a computer-readable storage medium for storing computer-readable instructions which, when executed by a computer, cause the computer to perform the above control method of a robot.
In the embodiments of the present application, the robot is controlled to grow according to the environmental data and/or the interaction data, so that at least one of the environment in which the robot is located and the interaction between the robot and the user is mapped onto the robot's growth. The user can thus intuitively feel the robot growing under the user's companionship and care, which strengthens the bond between the user and the robot and allows the robot to play a better companion role.
Meanwhile, because different robots are placed in different environments and the quality of interaction between each user and each robot differs, even robots of the same model will be in different growth states after being used by different users for a period of time. The embodiments of the present invention can therefore bring a differentiated experience to each user, which helps the user develop a sense of exclusive ownership of the robot.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of a control method of a robot of an embodiment of the present application;
fig. 2 is a schematic structural diagram of a control device of a robot according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a control device of a robot according to another embodiment of the present application;
FIG. 4 is a schematic structural diagram of a robot according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the invention.
The embodiment of the invention provides a robot control method that can control the growth of the robot according to the robot's environment and the interaction between the robot and a user, so that the user can feel the robot growing under the user's care. This enhances the emotional bond between the user and the robot and increases user engagement.
Fig. 1 is a schematic flowchart of a robot control method 100 according to an embodiment of the present invention, where the method 100 may be executed by a control device of the robot, which may be a module (e.g., a processor) built in the robot or a device or a server communicatively connected to the robot. As shown in fig. 1, the method 100 includes 110 and 120.
110. At least one of environmental data of the robot and interaction data between the robot and the user is acquired.
In some embodiments, the environmental data of the robot includes at least one of temperature, humidity, air quality index, and noise. For example, the air quality index includes, but is not limited to, concentrations of PM2.5, formaldehyde, pollen, carbon dioxide, carbon monoxide, PM10, and the like. For example, the environmental data may be collected by a sensor, which may be built in the robot or an external sensor placed in the environment where the robot is located. For example, the environmental data of the robot may be a weighted value of the data collected in the current period, a value range of the data collected in the current period, or an average value of the data collected in the current period. The duration of the current period may be set according to specific situations, for example, the duration of the current period may be one day, one week, or any other duration.
In some embodiments, the interaction data between the robot and the user includes at least one of interaction duration, interaction content distribution, and interaction time distribution, which may be obtained by statistically analyzing the interactions between the robot and the user. The interaction duration, interaction content distribution, and interaction time distribution may reflect the quality of the interaction between the robot and the user. For example, the interaction data between the robot and the user may be the interaction data of the robot and the user in the current period.
Illustratively, the interaction duration may be the average interaction duration over the current period (e.g., one day), used to measure the interaction activity between the robot's owner and the robot. Taking the average daily interaction duration as an example, it can be obtained by dividing the accumulated total interaction time (t1) by the number of days since the robot was first powered on.
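The daily-average computation described above can be sketched as follows (the function and variable names are illustrative assumptions; the patent does not specify an implementation):

```python
from datetime import date

def average_daily_interaction_hours(total_hours: float,
                                    first_power_on: date,
                                    today: date) -> float:
    """Average daily interaction duration: accumulated total interaction
    time t1 divided by the number of days since first power-on."""
    days = max((today - first_power_on).days, 1)  # guard against day-one division by zero
    return total_hours / days

# e.g. 45 accumulated hours over 30 days of ownership -> 1.5 hours per day
avg = average_daily_interaction_hours(45.0, date(2024, 1, 1), date(2024, 1, 31))
```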
Illustratively, the interactive content distribution is used to calculate the ratio of leisure-style interaction to learning-style interaction. For example, the interactive content distribution may be calculated by dividing the accumulated duration of leisure interactions by the accumulated duration of learning interactions.
Illustratively, the interaction time distribution is used to count the time periods during which interactions between the user and the robot occur. For example, the interaction time distribution is counted across four aspects, daytime, nighttime, holidays, and workdays, to determine whether the interaction between the robot and the user occurs in suitable time periods.
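A minimal sketch of the two statistics above, the leisure-to-learning content ratio and the four-way interaction time buckets (the day/night boundary and the event representation are assumptions, not specified in the text):

```python
from collections import Counter

def interaction_content_ratio(leisure_hours: float, learning_hours: float) -> float:
    """Ratio of accumulated leisure-interaction time to learning-interaction time."""
    return leisure_hours / learning_hours if learning_hours else float("inf")

def interaction_time_buckets(events):
    """Count interactions across the four aspects mentioned in the text:
    day/night crossed with workday/holiday. Each event is a
    (hour_of_day, is_holiday) tuple."""
    counts = Counter()
    for hour, is_holiday in events:
        period = "day" if 6 <= hour < 20 else "night"  # assumed daytime window
        counts[(period, "holiday" if is_holiday else "workday")] += 1
    return counts

buckets = interaction_time_buckets([(9, False), (22, False), (10, True)])
```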
120. Controlling robot growth based on at least one of the environmental data and the interaction data.
Controlling the growth of the robot according to the environmental data and/or the interaction data maps the robot's environment and/or its interaction with the user onto the robot's growth, so that the user can intuitively feel the robot growing under the user's companionship and care. This strengthens the bond between the user and the robot and allows the robot to play a better companion role.
Meanwhile, because different robots are placed in different environments and the quality of interaction between each user and each robot differs, even robots of the same model will be in different growth states after being used by different users for a period of time. The embodiments of the present invention can therefore bring a differentiated experience to each user, which helps the user develop a sense of exclusive ownership of the robot.
In some embodiments, the method 100 may further include obtaining user information, where the user information includes the user's age and/or height and may be input by the user via an input device (e.g., a keyboard, a display screen, a microphone, etc.). Accordingly, in 120, robot growth is controlled based on the user information together with at least one of the environmental data and the interaction data. Mapping the user information onto the robot's growth, in addition to the environmental data and/or interaction data, can further give the user an exclusive, differentiated experience. Thus, even for robots of the same model placed in identical environments and receiving equivalent interaction, robots belonging to different users will grow differently. For example, two children in the same family may each own their own robot; if the two children differ in age or height, the growth states of the two robots may differ even when the children's interactions with the robots are equivalent. For instance, the robot of the older and/or taller child may grow faster than the robot of the other child.
In some embodiments, controlling robot growth comprises controlling a change in the robot's shape and/or an upgrade of its skills. For example, when at least one of the robot's environmental data, the interaction data between the robot and the user, and the user information satisfies a preset growth condition, the shape change and/or skill upgrade of the robot is triggered. The preset growth condition may be set in advance, for example according to the user's preference, and stored in the storage medium. A plurality of growth conditions may be preset, and at regular intervals (e.g., one day, one week, one month) it may be determined whether a growth condition is currently satisfied; if so, the robot is controlled to grow once. For example, when the user's age and/or height increases by a preset value, the robot's shape is changed and/or its skills are upgraded correspondingly; or, when the environmental data and/or interaction data in the current period meet the environmental and/or interaction requirements for robot growth, the robot's shape is changed and/or its skills are upgraded correspondingly; or, the shape change and/or skill upgrade is triggered only when the user's age and/or height has increased by the preset value and the environmental data and/or interaction data in the current period meet the requirements for robot growth.
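One possible reading of the growth-condition check above, sketched in Python (the thresholds, parameter names, and combination logic are illustrative assumptions):

```python
def growth_condition_met(user_age_delta_years: float,
                         user_height_delta_cm: float,
                         env_ok: bool,
                         interaction_ok: bool,
                         age_step: float = 1.0,
                         height_step_cm: float = 5.0) -> bool:
    """Return True when a preset growth condition is satisfied: either the
    user has grown by a preset amount (age and/or height), or the current
    period's environmental and interaction data meet the growth requirements."""
    user_grew = (user_age_delta_years >= age_step
                 or user_height_delta_cm >= height_step_cm)
    return user_grew or (env_ok and interaction_ok)
```

A scheduler could call this once per interval (day, week, month) and trigger one growth step whenever it returns True.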
By controlling the appearance change and/or skill upgrading of the robot, a user can intuitively feel the growth of the robot.
Illustratively, a skill upgrade of the robot includes an upgrade of the robot's interactive content and/or actions. That is, when the preset growth condition is satisfied, the interactive content and/or actions of the robot may also change. For example, when the user issues a "tell a story" instruction, the robot can select a suitable story to play according to its current growth state. As another example, a plurality of actions may be stored in the robot in advance, and the robot unlocks more of these actions as it grows.
In some embodiments, controlling the change in the shape of the robot at 120 includes: one or more control signals are generated, wherein each control signal is used to control a change in the shape of at least one location on the robot. For example, the robot may be configured to change the shape of one or more designated portions, and may send control signals to the designated portions, respectively, to control corresponding mechanisms of the designated portions to perform corresponding actions to implement the change in shape.
Illustratively, the change in the shape of the robot includes a change in at least one of height, form, weight, and volume of the robot.
In some embodiments, the robot has at least one length-adjustable joint mechanism, and the height of the robot can be controlled by controlling the length of the joint mechanism. For example, for a biped robot, the leg joints can be provided with four telescopic joint mechanisms, installed on the left thigh, left calf, right thigh, and right calf respectively, and four control signals can be generated to drive the four telescopic joint mechanisms and thereby control the height of the robot.
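The four-signal height control described above might look like the following sketch (the even split across joints is an assumption; the constraint it encodes is that both legs must extend by the same total amount so the robot stays level):

```python
JOINTS = ("left_thigh", "left_calf", "right_thigh", "right_calf")

def height_control_signals(total_increase_cm: float) -> dict:
    """Map a total height increase onto the four telescopic joint mechanisms.
    Thigh and calf extensions on one leg add up, so each joint extends by
    half of the total increase and both legs grow by the same amount."""
    per_joint_cm = total_increase_cm / 2
    return {joint: per_joint_cm for joint in JOINTS}

signals = height_control_signals(1.0)  # each joint extends by 0.5 cm
```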
In some embodiments, the robot has at least two forms: in a normal state the robot assumes one form, and when a preset growth condition is satisfied, the robot may deform to assume another form. For example, the robot is initially a biped robot; when the preset growth condition is satisfied, wheels extend from the robot's feet so that it can roll, transforming it into a combined wheel-leg form. As another example, the robot may be a Transformers-style robot that transforms into an automobile once the preset growth condition is satisfied.
In some embodiments, a water-absorbent material is included in the robot body; the material absorbs moisture, causing the weight of the robot to increase. Illustratively, the water-absorbent material can be distributed proportionally across the parts of the robot body, or concentrated in some parts of the robot. A sealing cover that isolates the material from air may be provided on the surface of the water-absorbent material; when the robot satisfies the preset growth condition, the sealing cover is controlled to open, exposing the water-absorbent material to air and increasing the weight of the robot. For example, the water-absorbent material can be sealed at different positions of the robot, and the sealing covers at different positions can be opened at multiple times during the robot's growth.
In some embodiments, the volume of the robot may vary. For example, a volume-adjustable mechanical structure can be arranged in the trunk and/or limbs of the robot body, and motors in the mechanical structure control the thickness of the trunk and/or limbs, thereby changing the volume of the robot. As another example, at least part of the surface of the robot may be made of an inflatable material, in which case the change in the robot's volume is controlled by inflating it.
The above embodiments describe the shape changes of the robot during growth using height, form, weight, and volume only as examples; the embodiments of the present application are not limited thereto, and other shape changes may also occur as the robot grows.
In some embodiments, controlling the change in the shape of the robot includes controlling the change in the height of the robot. Accordingly, controlling the height change of the robot in dependence of the environmental data and/or the interaction data comprises:
determining a weighting coefficient corresponding to the environment data and a weighting coefficient corresponding to the interactive data;
acquiring a first growth rate corresponding to the environmental data and a second growth rate of the robot corresponding to the interactive data;
and controlling the height of the robot to change according to the weighting coefficient corresponding to the environmental data, the first growth rate, the weighting coefficient corresponding to the interaction data, and the second growth rate.
The first growth rate and the second growth rate may be preset and stored in the storage medium. Alternatively, the first growth rate and the second growth rate may also be determined according to the following expressions (1) and (2):
G1=R1*(L/T) (1)
G2=R2*(L/T) (2)
where G1 represents the first growth rate, R1 represents the weight of the environmental data in the robot's height change, G2 represents the second growth rate, R2 represents the weight of the interaction data between the robot and the user in the robot's height change, L represents the maximum allowable growth height of the robot, and T represents the duration of the robot's growth. R1, R2, L, and T may be preset and stored in a storage medium. For example, R1 and R2 may each take the value 0.5, L may take the value 10 centimeters (cm), and T may take the value 10 days, in which case G1 is 0.5 cm per day and G2 is 0.5 cm per day.
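Expressions (1) and (2) with the example values from the text can be sketched as follows (the function name is mine, not the patent's):

```python
def growth_rates(r1: float, r2: float,
                 max_growth_cm: float, duration_days: int):
    """G1 = R1*(L/T), G2 = R2*(L/T): split the maximum allowable growth
    per unit time between environmental and interaction contributions."""
    per_day = max_growth_cm / duration_days
    return r1 * per_day, r2 * per_day

g1, g2 = growth_rates(0.5, 0.5, 10.0, 10)  # -> (0.5, 0.5), matching the text
```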
In some embodiments, controlling the height change of the robot according to the weighting factor corresponding to the environmental data, the first growth rate, the weighting factor corresponding to the interactive data, and the second growth rate includes:
calculating the height by which the robot grows according to the following expression (3), and controlling the robot to grow by that height:
L=G1*(T+H+N)+G2*(I+C+S) (3)
wherein L represents the height that the robot can grow, G1 represents the first growth rate, G2 represents the second growth rate, T represents the weighting coefficient corresponding to the temperature, H represents the weighting coefficient corresponding to the humidity, N represents the weighting coefficient corresponding to the noise, I represents the weighting coefficient corresponding to the interaction duration, C represents the weighting coefficient corresponding to the interaction content distribution, and S represents the weighting coefficient corresponding to the interaction time distribution.
In some embodiments, determining the weighting factor corresponding to the environmental data and the weighting factor corresponding to the interaction data comprises:
acquiring total weights t1, h1 and n1 corresponding to temperature, humidity and noise;
determining weights t2, h2 and n2 of the current period corresponding to the temperature, the humidity and the noise according to the respective ranges of the temperature, the humidity and the noise in the current period;
determining weighting coefficients T, H and N corresponding to temperature, humidity and noise in the current period, where T = t1*t2, H = h1*h2, and N = n1*n2;
acquiring total weights i1, c1 and s1 corresponding to the interaction duration, the interaction content distribution and the interaction time distribution;
determining weights i2, c2 and s2 of the current period corresponding to the interaction duration, the interaction content distribution and the interaction time distribution according to the range of the interaction duration, the range of the interaction content distribution and the range of the interaction time distribution in the current period;
and determining weighting coefficients I, C and S corresponding to the interaction duration, the interaction content distribution and the interaction time distribution in the current period, wherein I = i1*i2, C = c1*c2, and S = s1*s2.
Assuming that the current period is the current day, t1 is 40%, h1 is 30%, n1 is 30%, i1 is 40%, c1 is 30%, s1 is 30%, and t2, h2, n2, i2, c2 and s2 are all 100%, the robot grows in height by L = 1 cm on that day (this holds when G1 = G2 = 0.5 cm). If instead t2 is 80%, h2 is 70%, n2 is 60%, i2 is 100%, c2 is 50%, and s2 is 60%, the robot grows by L = 0.72 cm on that day.
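The worked example can be reproduced numerically; note that the growth rates G1 = G2 = 0.5 cm are inferred from the 1 cm result rather than stated in the text:

```python
G1 = G2 = 0.5  # cm; inferred from the 1 cm example, not stated explicitly

def coeff(total_weight, period_weight):
    # Weighting coefficient = total weight x weight of the current period
    return total_weight * period_weight

# Case 1: every period weight is 100%
env = coeff(0.40, 1.0) + coeff(0.30, 1.0) + coeff(0.30, 1.0)    # T + H + N
inter = coeff(0.40, 1.0) + coeff(0.30, 1.0) + coeff(0.30, 1.0)  # I + C + S
print(round(G1 * env + G2 * inter, 2))  # 1.0 cm

# Case 2: t2=80%, h2=70%, n2=60%, i2=100%, c2=50%, s2=60%
env2 = coeff(0.40, 0.8) + coeff(0.30, 0.7) + coeff(0.30, 0.6)
inter2 = coeff(0.40, 1.0) + coeff(0.30, 0.5) + coeff(0.30, 0.6)
print(round(G1 * env2 + G2 * inter2, 2))  # 0.72 cm
```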
It should be noted that expression (3) takes environmental data consisting of temperature, humidity and noise only as an example; the embodiments of the present application are not limited thereto. For example, expression (3) may be modified to L = G1*(T + H) + G2*(I + C + S), to L = G1*(T + H + N) + G2*(I + C), or to L = G1*(T + H + N + A) + G2*(I + C + S), where A represents the weighting coefficient corresponding to air quality, A = a1*a2, a1 is the total weight corresponding to air quality, and a2 is the weight of the current period corresponding to air quality.
Illustratively, the total weights corresponding to the various environment data and interaction data are normalized to values between 0 and 100%: the total weights of the various environment data sum to 1, and the total weights of the various interaction data sum to 1. For example, among the environment data, the total weight corresponding to temperature is 40%, the total weight corresponding to humidity is 30%, the total weight corresponding to noise is 30%, and the total weight corresponding to air quality is 0; among the interaction data, the total weight corresponding to the interaction duration is 40%, the total weight corresponding to the interaction content distribution is 30%, and the total weight corresponding to the interaction time distribution is 30%. The total weights corresponding to the various environment data of the robot and to the various interaction data between the robot and the user may be preset according to the user's preferences and stored in a storage medium.
Taking the product of each total weight and the corresponding weight of the current period as the weighting coefficient maps the user's preferences, together with the environment data and interaction data of the current period, onto the growth of the robot.
The determination method of the weight of the current period corresponding to the temperature, the humidity, and the noise is described below by way of example.
In some embodiments, if the temperature in the current period is less than a first temperature threshold, a first temperature weighting coefficient T1 applies; if the temperature is greater than the first temperature threshold and less than a second temperature threshold, a second temperature weighting coefficient T2 applies; if the temperature is greater than the second temperature threshold, a third temperature weighting coefficient T3 applies; wherein the first temperature threshold is less than the second temperature threshold, and T2 is greater than both T1 and T3. The two temperature thresholds may vary with the seasons; for example, they may be 2 degrees Celsius higher in summer than in winter. The magnitude relationship between the first and third temperature weighting coefficients may be set according to the user's preference, which is not limited in the embodiments of the present application. For example, if the user prefers warm weather to cold, T1 is less than T3; otherwise, T1 is greater than T3.
In some embodiments, if the humidity in the current period is less than a first humidity threshold, a first humidity weighting coefficient H1 applies; if the humidity is greater than the first humidity threshold and less than a second humidity threshold, a second humidity weighting coefficient H2 applies; if the humidity is greater than the second humidity threshold, a third humidity weighting coefficient H3 applies; wherein the first humidity threshold is less than the second humidity threshold, and H2 is greater than both H1 and H3. The magnitude relationship between H1 and H3 may be set according to the user's preference, which is not limited in the embodiments of the present application. For example, if the user prefers dry conditions to humid ones, H1 is greater than H3; otherwise, H1 is less than H3.
In some embodiments, if the noise in the current period is less than a first noise threshold, a first noise weighting coefficient N1 applies; if the noise is greater than the first noise threshold and less than a second noise threshold, a second noise weighting coefficient N2 applies; if the noise is greater than the second noise threshold, a third noise weighting coefficient N3 applies; wherein the first noise threshold is less than the second noise threshold, N1 is greater than N2, and N2 is greater than N3. The two noise thresholds may differ between day and night; for example, they may be smaller (e.g., 10 dB smaller) at night than during the day. The lengths of day and night may be distinguished according to daylight saving time and standard time, or according to the seasons.
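The threshold-based weight selection can be sketched for the noise case; the concrete dB thresholds and coefficient values below are illustrative assumptions (the text only states that N1 > N2 > N3 and that night thresholds are, e.g., 10 dB smaller):

```python
def noise_period_weight(noise_db, is_night, n1=1.0, n2=0.8, n3=0.5):
    # Pick the period weight for noise: quieter environments score higher
    # (N1 > N2 > N3). Thresholds of 45/60 dB by day and 35/50 dB by night
    # are illustrative; the patent only says night thresholds are smaller.
    low, high = (35, 50) if is_night else (45, 60)
    if noise_db < low:
        return n1
    if noise_db < high:
        return n2
    return n3

print(noise_period_weight(40, is_night=False))  # quiet by day -> 1.0
print(noise_period_weight(40, is_night=True))   # same level at night -> 0.8
```

The same bucketed lookup applies to temperature and humidity, with the middle bucket taking the largest coefficient instead.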
The following describes, by way of example, a method for determining weights of a current period corresponding to an interaction duration, an interaction content distribution, and an interaction time distribution.
In some embodiments, if the interaction duration in the current period is less than a first duration threshold, indicating that the interaction duration is short and the robot's activity is insufficient, a first duration weighting coefficient I1 applies; if the interaction duration is greater than the first duration threshold and less than a second duration threshold, indicating a balanced interaction duration and moderate activity, a second duration weighting coefficient I2 applies; if the interaction duration is greater than the second duration threshold, indicating a long interaction duration and high activity, a third duration weighting coefficient I3 applies; wherein the first duration threshold is less than the second duration threshold, and I2 and I3 are each greater than I1. The magnitude relationship between I2 and I3 is not limited in the embodiments of the present application. For example, if the robot is configured as a balanced robot, I2 is greater than I3; if, by default, higher robot activity is considered better, I3 is greater than I2.
In some embodiments, if the distribution of the interactive content between the robot and the user in the current period falls in a first range, a first content weighting coefficient C1 applies; if it falls in a second range, a second content weighting coefficient C2 applies; if it falls in a third range, a third content weighting coefficient C3 applies; wherein the first range represents less casual interaction than learning interaction, the second range represents a balance of casual and learning interaction, and the third range represents more casual interaction than learning interaction. The magnitude relationship among C1, C2 and C3 is not limited in the embodiments of the present application. For example, if the robot is configured as a learning robot, C3 is greater than C2 and C2 is greater than C1; if the robot is configured as a leisure robot, C1 is greater than C2 and C2 is greater than C3.
If the interaction time distribution between the robot and the user in the current period shows that more interactions occur in suitable time periods, a larger weighting coefficient applies. In some embodiments, if the interaction time is distributed in the daytime of a holiday, a first time weighting coefficient S1 applies; if in the evening of a holiday, a second time weighting coefficient S2 applies; if in the daytime of a working day, a third time weighting coefficient S3 applies; if in the evening of a working day, a fourth time weighting coefficient S4 applies. For example, S1 is greater than S2, S3 and S4. The lengths of day and night may be distinguished according to daylight saving time and standard time, or according to the seasons.
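A minimal sketch of the time-of-interaction lookup; only "S1 is the largest" comes from the text, the other coefficient values are assumptions:

```python
def time_period_weight(is_holiday, is_daytime, s=(1.0, 0.7, 0.4, 0.6)):
    # s = (S1, S2, S3, S4): holiday daytime, holiday evening,
    # weekday daytime, weekday evening. Only S1 being the largest
    # is stated in the text; the remaining values are illustrative.
    s1, s2, s3, s4 = s
    if is_holiday:
        return s1 if is_daytime else s2
    return s3 if is_daytime else s4

print(time_period_weight(is_holiday=True, is_daytime=True))   # S1 -> 1.0
print(time_period_weight(is_holiday=False, is_daytime=False)) # S4 -> 0.6
```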
In some embodiments, before controlling robot growth in 120, the method 100 may further comprise: determining that the robot satisfies at least one of the following conditions: the robot is in an idle state; no one is present around the robot; the current time falls within a preset time period set for the robot's growth. In this way, controlling the robot to grow neither interferes with the user's normal use nor disturbs the user.
For example, whether anyone is present around the robot may be determined by a sensor (e.g., a camera, an infrared sensor, a sound sensor, a thermal imager, etc.).
In some embodiments, the method 100 may further include: determining whether the robot has grown to a final state; and if so, stopping controlling the robot to grow. Optionally, if the robot has grown to the final state, a prompt message may be output via an output device (e.g., a speaker or a display screen) to inform the user.
In some embodiments, the robot includes multiple growth modes, in each of which the user has different control authority over the robot. For example, the growth modes of the robot include a user mode, a parent mode and a privilege mode, where the control authority of the user mode is smaller than that of the parent mode. In the user mode and the parent mode, the robot's growth is irreversible: for example, its height can increase or stay the same but cannot decrease. In the privilege mode, the height of the robot can be freely adjusted to a specified value, which facilitates debugging and maintenance of the robot.
In some embodiments, the environment parameters and/or interaction data may also be adjusted. For example, a user may enter commands via an input device (mouse, keyboard, keys, touch screen, microphone, etc.) to adjust the environment parameters and/or the interaction data.
Illustratively, the method 100 may further include: adjusting parameters related to the environmental data and/or the interaction data when the robot is in the parent mode or the privileged mode, the parameters including at least one of:
1) the temperature preference can be used for determining the magnitude relation between the first temperature weighting coefficient and the third temperature weighting coefficient corresponding to the temperature;
2) the humidity preference can be used for determining the magnitude relation between the first humidity weighting coefficient and the third humidity weighting coefficient corresponding to the humidity;
3) the activity preference can be used for determining the magnitude relation between a second duration weighting coefficient and a third duration weighting coefficient corresponding to the interaction duration;
4) the content preference can be used for determining the magnitude relation among a first content weighting coefficient, a second content weighting coefficient and a third content weighting coefficient corresponding to the interactive content distribution;
5) the time preference can be used for determining the magnitude relation among a first time weighting coefficient, a second time weighting coefficient, a third time weighting coefficient and a fourth time weighting coefficient corresponding to the interaction time distribution;
6) the weight that the environmental data of the robot occupies in the height change of the robot and the weight that the interaction data between the robot and the user occupies in the height change of the robot.
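The adjustable parameters above could be grouped into a single preferences object guarded by the growth mode; all names, default values and the mode-checking scheme below are assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class GrowthPreferences:
    # Hypothetical parameter names covering items 1)-6) above.
    temperature_pref: str = "warm"   # decides T1 vs T3
    humidity_pref: str = "dry"       # decides H1 vs H3
    activity_pref: str = "balanced"  # decides I2 vs I3
    content_pref: str = "learning"   # orders C1, C2, C3
    time_pref: str = "holiday_day"   # orders S1..S4
    env_weight: float = 0.5          # share of environmental data (G1 side)
    interaction_weight: float = 0.5  # share of interaction data (G2 side)

def update_preferences(prefs, mode, **changes):
    # Only the parent or privilege mode may adjust parameters.
    if mode not in ("parent", "privilege"):
        raise PermissionError("adjustment requires parent or privilege mode")
    for key, value in changes.items():
        setattr(prefs, key, value)
    return prefs
```

For example, `update_preferences(GrowthPreferences(), "parent", activity_pref="active")` succeeds, while the same call with `"user"` raises `PermissionError`.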
It will be appreciated that in some embodiments the robot may also have only one growth mode in which the user may input corresponding instructions via the input device to modify the relevant parameters described above.
In some embodiments, the method 100 may further include: outputting state information of the robot, the state information comprising at least one of: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot, the interaction content distribution of the robot, and the interaction time distribution of the robot in a specified period. The physiological age may be the time elapsed since the robot left the factory; the awakening age may be the time elapsed since the robot was first powered on; the active age may be the cumulative time the robot has been in a powered-on state since it was first started; each of these may take a year, month, week, day, or the like as its timing cycle. The height of the robot includes an original height, a current height and a grown height; the number of interactions may be the daily average number of interactions up to a specified date; the interaction content distribution may be the daily average interaction content distribution up to a specified date; and the interaction time distribution may be the daily average interaction time distribution up to a specified date. The state information may be output when an instruction from the user querying the robot's state is received, or when the robot is started for the first time, or interacts with the user for the first time, after its growth has completed.
Outputting the state information of the robot makes it convenient for the user to know the robot's state. The state information may be output through a screen or through a speaker.
Having described the control method of the robot according to the embodiments of the present application, a control apparatus of the robot according to the embodiments of the present application is described below with reference to fig. 2.
Fig. 2 is a schematic structural diagram of a control device 200 of a robot according to an embodiment of the present application. The apparatus 200 may be a module (e.g., a processor) built into the robot, or a device or server communicatively connected to the robot. The apparatus 200 is used for executing the control method of the robot according to the embodiments of the present application; to avoid repetition, the corresponding contents are omitted here, and reference may be made to the control method of the robot described above.
As shown in fig. 2, the apparatus 200 includes an acquisition unit 210 and a processing unit 220.
The acquiring unit 210 is configured to acquire at least one of environment data of the robot and interaction data between the robot and a user; the processing unit 220 is configured to control robot growth according to at least one of the environmental data and the interaction data.
Controlling the robot's growth according to the environment data and/or the interaction data maps the environment in which the robot is located and/or its interaction with the user onto the robot's growth, so that the user can intuitively feel that the robot grows under his or her care. This better strengthens the bond between the user and the robot and lets the robot play a better companion role.
Meanwhile, because the environments in which robots are located differ and the interaction quality between each user and his or her robot differs, even robots of the same model will be in different growth states after being used by different users for a period of time. Thus, for different users, the embodiments of the present application can bring a differentiated experience and better foster a sense of exclusivity toward the robot.
Optionally, as shown in fig. 3, the apparatus 200 may further include an output unit 230 for outputting status information of the robot, where the status information includes at least one of: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot, the interaction content distribution of the robot, and the interaction time distribution of the robot. For example, the status information may be output upon receiving an instruction querying the robot's status, or output automatically when the robot first interacts with the user after the controlled growth of the robot is completed.
Fig. 4 is a schematic structural diagram of a robot 300 according to an embodiment of the present application, and as shown in fig. 4, the robot 300 includes a body 310 and a processor 320 installed inside the body 310. The processor 320 is used for executing the control method of the robot according to the embodiment of the present application. Specifically, the processor 320 is used to control the shape change of the body 310, and/or the processor 320 is used to control the skill upgrade of the robot 300.
In some embodiments, the body 310 has at least one length-adjustable joint mechanism, and the processor 320 can control the height of the robot by controlling the length of the joint mechanism. For example, the robot 300 may be a biped robot, in which case the body 310 includes height-adjustable biped mechanism joints. For example, the two leg joints of the biped robot may be provided with 4 telescopic joint mechanisms, installed on the left thigh, left calf, right thigh and right calf respectively, and the processor 320 may generate 4 control signals to drive the 4 telescopic joint mechanisms and thereby control the height of the robot. It should be understood that the robot 300 may also take any other shape, such as a humanoid robot, an animal-shaped robot, or a cartoon robot.
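One way to turn a height change into the 4 joint control signals is to split it evenly across each leg's two telescopic joints; the even split and the joint names are assumptions, since the patent only says the processor generates 4 control signals:

```python
def joint_extensions(delta_height_cm,
                     joints=("left_thigh", "left_calf",
                             "right_thigh", "right_calf")):
    # Each leg must extend by the full height delta, so the thigh and
    # calf joints of one leg each take half of it. The even split is an
    # illustrative assumption.
    per_joint = delta_height_cm / 2
    return {joint: per_joint for joint in joints}

ext = joint_extensions(1.0)
print(ext["left_thigh"])                    # each joint extends 0.5 cm
print(ext["left_thigh"] + ext["left_calf"]) # one leg extends the full 1.0 cm
```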
In some embodiments, the body 310 is provided with a deformable member, and the processor 320 may control the deformable member to deform to cause the body 310 of the robot to assume another different configuration. In some embodiments, the body 310 is provided with a material for effecting weight change, such as a water absorbent material. In some embodiments, the body 310 is provided with a mechanical structure and/or special materials for realizing the volume change thereof. Specifically, reference may be made to the content described in the above method embodiment, and details are not described herein again.
In some embodiments, the body 310 may further include a driving mechanism and a moving mechanism (not shown); under the control of the processor 320, the driving mechanism drives the moving mechanism to move, realizing automatic movement of the robot. The moving mechanism may be universal wheels, rollers, tires, crawler tracks, or the like, installed at the bottom of the body 310.
The processor 320 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
In some embodiments, as shown in fig. 5, the robot 300 may further include a sensor module 330 for collecting environmental data of the robot.
The sensor module 330 includes at least one of the following sensors: humidity sensors, ambient noise sensors, and air quality sensors.
In some implementations, as shown in fig. 5, the robot 300 may further include: an output device 340 for outputting status information of the robot, the status information including at least one of: the physiological age of the robot, the awakening age of the robot, the active age of the robot, the height of the robot, the number of interactions of the robot, the interaction content distribution of the robot, and the interaction time distribution of the robot. Illustratively, the output device 340 may be a display screen or a speaker.
In some embodiments, the robot 300 may also include a communication module 350 for communicating with external devices. For example, the communication module may include a Universal Asynchronous Receiver/Transmitter (UART) or a Serial Peripheral Interface (SPI).
The embodiment of the present application also provides a computer-readable storage medium for storing computer-readable instructions, which, when executed by a computer, cause the computer to execute the control method of the robot of the embodiment of the present application. The storage medium may include a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any combination thereof.
The embodiments of the present application further provide a computer program, which may be stored on a cloud or a local storage medium. When executed by a computer or a processor, the computer program performs the steps of the control method of the robot according to the embodiments of the present application and implements the modules of the control device of the robot according to the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the embodiments disclosed in the present application, and these modifications or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method for controlling a robot, comprising:
acquiring at least one of environmental data of the robot and interactive data between the robot and a user;
controlling the robot to grow according to at least one of the environmental data and the interaction data;
wherein the controlling the robot to grow comprises:
controlling a shape change and/or a skill upgrade of the robot, wherein the shape change of the robot comprises at least one change of height, shape, weight and volume of the robot, and the skill upgrade of the robot comprises interaction content and/or action upgrade of the robot;
the change in the shape of the robot includes a change in height of the robot,
the controlling the robot to grow according to the environmental data and/or the interaction data comprises: determining a weighting coefficient corresponding to the environment data and a weighting coefficient corresponding to the interactive data; acquiring a first growth rate corresponding to the environment data and a second growth rate of the robot corresponding to the interaction data;
controlling the height of the robot to change according to the weighting coefficient corresponding to the environment data, the first growth rate, the weighting coefficient corresponding to the interactive data and the second growth rate;
the environmental data comprises at least one of temperature, humidity, air quality and noise, and the interaction data comprises at least one of interaction duration, interaction content distribution and interaction time distribution;
the controlling the height change of the robot according to the weighting coefficient corresponding to the environment data, the first growth rate, the weighting coefficient corresponding to the interactive data and the second growth rate comprises:
calculating the height which can be increased by the robot according to the following expression, and controlling the robot to increase the height:
L=G1*(T+H+N)+G2*(I+C+S)
wherein L represents the height that the robot can grow, G1 represents the first growth rate, G2 represents the second growth rate, T represents a weighting coefficient corresponding to temperature, H represents a weighting coefficient corresponding to humidity, N represents a weighting coefficient corresponding to noise, I represents a weighting coefficient corresponding to interaction duration, C represents a weighting coefficient corresponding to interaction content distribution, and S represents a weighting coefficient corresponding to interaction time distribution;
the determining the weighting coefficient corresponding to the environment data and the weighting coefficient corresponding to the interaction data includes:
acquiring total weights corresponding to the temperature, the humidity and the noise respectively;
determining the weight of each corresponding current period according to the respective ranges of the temperature, the humidity and the noise in the current period;
determining weighting coefficients corresponding to the temperature, the humidity and the noise in the current period respectively, wherein the weighting coefficients corresponding to the temperature, the humidity and the noise are products of total weights corresponding to the temperature, the humidity and the noise respectively and weights corresponding to the current period respectively;
acquiring total weights corresponding to the interaction duration, the interaction content distribution and the interaction time distribution respectively;
determining the weight of each corresponding current period according to the range of each of the interaction duration, the interaction content distribution and the interaction time distribution in the current period;
determining respective corresponding weighting coefficients of the interaction duration, the interaction content distribution and the interaction time distribution in a current period, wherein the respective corresponding weighting coefficients of the interaction duration, the interaction content distribution and the interaction time distribution are products of respective corresponding total weights and respective corresponding weights of the current period.
2. The method of controlling a robot according to claim 1, further comprising, before controlling the robot to grow:
judging that the robot meets at least one of the following conditions: the robot is in an idle state, the robot is in an unmanned state around the robot, and the robot is currently in a preset time period set for the growth of the robot.
3. The method of controlling a robot according to claim 2, further comprising:
judging whether the robot has grown to a final state;
and if the robot has grown to the final state, stopping controlling the robot to grow.
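The gating described in claims 2 and 3 can be combined into a single growth step: growth proceeds only while at least one trigger condition holds and halts once the final state is reached. The state fields, predicate names and stage numbers below are hypothetical, chosen only to illustrate the claimed control flow.

```python
from dataclasses import dataclass

@dataclass
class RobotState:
    # Hypothetical state fields for illustration only.
    idle: bool                  # robot is in an idle state
    person_nearby: bool         # someone is present around the robot
    in_growth_time_window: bool # within the preset growth time period
    growth_stage: int           # assumed discrete growth progress
    FINAL_STAGE: int = 10       # assumed final state

def may_grow(state: RobotState) -> bool:
    """Claim 2: grow only if at least one trigger condition is met."""
    return state.idle or (not state.person_nearby) or state.in_growth_time_window

def grow_step(state: RobotState) -> bool:
    """Claim 3: stop controlling growth once the final state is reached.
    Returns True if a growth step was performed."""
    if state.growth_stage >= state.FINAL_STAGE:
        return False  # final state reached: stop controlling growth
    if not may_grow(state):
        return False  # no trigger condition satisfied
    state.growth_stage += 1
    return True

s = RobotState(idle=True, person_nearby=False,
               in_growth_time_window=False, growth_stage=9)
assert grow_step(s) and s.growth_stage == 10
assert not grow_step(s)  # final state: growth halted
```

The check-before-grow ordering mirrors the claims: the final-state test of claim 3 takes precedence, and the conditions of claim 2 are disjunctive, so any one of them suffices.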
4. The method of controlling a robot according to claim 3, further comprising:
outputting status information of the robot, the status information including at least one of: a physiological age of the robot, an arousal age of the robot, an active age of the robot, a height of the robot, a number of interactions of the robot, a distribution of interactive contents of the robot, and a distribution of interactive time of the robot.
5. The method of controlling a robot according to claim 4, further comprising:
acquiring user information;
wherein said controlling the robot to grow according to at least one of the environment data and the interaction data comprises:
controlling the robot to grow according to the user information together with at least one of the environment data and the interaction data.
6. A control device for a robot, comprising:
an acquisition unit configured to acquire at least one of environmental data of the robot and interaction data between the robot and a user;
a processing unit for controlling the growth of the robot according to the control method of the robot of any one of claims 1-5 based on at least one of the environmental data and the interaction data.
7. A robot, comprising:
a body;
a processor mounted inside the body for performing the control method of any one of claims 1-5.
8. The robot as claimed in claim 7, wherein the robot is a bipedal robot and the body comprises height-adjustable bipedal mechanism joints.
CN201910437548.5A 2019-02-21 2019-05-23 Robot control method and device and robot Active CN110125938B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910135816 2019-02-21
CN2019101358168 2019-02-21

Publications (2)

Publication Number Publication Date
CN110125938A (en) 2019-08-16
CN110125938B (en) 2021-07-02

Family

ID=67572892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910437548.5A Active CN110125938B (en) 2019-02-21 2019-05-23 Robot control method and device and robot

Country Status (1)

Country Link
CN (1) CN110125938B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115990892B (en) * 2023-03-24 2023-06-20 中航西安飞机工业集团股份有限公司 Double-robot cooperative assembly system and method for large airfoil skeleton

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN1304345A (en) * 1999-05-10 2001-07-18 索尼公司 Robot device and method for controlling same
US6445978B1 (en) * 1999-05-10 2002-09-03 Sony Corporation Robot device and method for controlling the same
JP2002120179A (en) * 2000-10-11 2002-04-23 Sony Corp Robot device and control method for it
CN1942289A (en) * 2004-04-16 2007-04-04 松下电器产业株式会社 Robot, hint output device, robot control system, robot control method, robot control program, and integrated circuit
CN105563493A (en) * 2016-02-01 2016-05-11 昆山市工业技术研究院有限责任公司 Height and direction adaptive service robot and adaptive method
CN105825268A (en) * 2016-03-18 2016-08-03 北京光年无限科技有限公司 Method and system for data processing for robot action expression learning
CN106078756A (en) * 2016-06-24 2016-11-09 苏州美丽澄电子技术有限公司 Running gear adjustable private tutor robot

Also Published As

Publication number Publication date
CN110125938A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
WO2018108175A1 (en) Task plan adjustment method for wearable device, and device
CN107817891B (en) Screen control method, device, equipment and storage medium
US11579617B2 (en) Autonomously acting robot whose activity amount is controlled
US9807983B2 (en) Device control method for estimating a state of an animal and for determining a control detail for an electronic device
JP5265141B2 (en) Portable electronic device, program and information storage medium
JP6409207B2 (en) Autonomous robot to wear clothes
WO2018043235A1 (en) Autonomous behavior type robot recognizing direction of sound source
CN104033988B (en) Air conditioner control system and control method of air conditioner control system
US20140038489A1 (en) Interactive plush toy
KR20170000752A (en) Human-computer interactive method and apparatus based on artificial intelligence, and terminal device
US10076632B2 (en) Sensory feedback system with active learning
CN117666867A (en) Robot
CN110125938B (en) Robot control method and device and robot
CN106024016A (en) Children's guarding robot and method for identifying crying of children
CN105389461A (en) Interactive children self-management system and management method thereof
JP2008310680A (en) Control system, program, and information storage medium
CN110049404B (en) Intelligent device and volume control method thereof
WO2018108174A1 (en) Interface interactive assembly control method and apparatus, and wearable device
CN112026472B (en) Electronic equipment, intelligent cabin, environment adjusting method and device, and environment detecting method and device
WO2019151387A1 (en) Autonomous behavior robot that behaves on basis of experience
CN106773628A (en) A kind of protection based reminding method and device for intelligent and portable equipment
CN110842929A (en) Sleep-soothing robot with simulation mechanical arm
CN105737323B (en) Air conditioning control method
CN110382181A (en) It is suitble to the joint construction in the joint of robot
US20230201517A1 (en) Programmable interactive systems, methods and machine readable programs to affect behavioral patterns

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 306, Floor 3, No. 89, West Third Ring Road North, Haidian District, Beijing 100089

Patentee after: Beijing Meiji Technology Co.,Ltd.

Address before: 102206 1107, unit 1, building 1, yard 1, Longyu middle street, Huilongguan town, Changping District, Beijing

Patentee before: Beijing Geyuan Zhibo Technology Co.,Ltd.