WO2021102800A1 - Control method, device, system, and storage medium for a smart device - Google Patents

Control method, device, system, and storage medium for a smart device

Info

Publication number
WO2021102800A1
WO2021102800A1 (PCT/CN2019/121612, CN2019121612W)
Authority
WO
WIPO (PCT)
Prior art keywords
user operation
operation information
behavior
smart device
instruction
Prior art date
Application number
PCT/CN2019/121612
Other languages
English (en)
French (fr)
Inventor
杜凯
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980040074.9A (published as CN112313590A)
Priority to PCT/CN2019/121612 (published as WO2021102800A1)
Publication of WO2021102800A1

Links

Images

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/08: Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808: Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft

Definitions

  • The embodiments of the present application relate to the technical field of smart devices, and in particular, to a control method, device, system, and storage medium of a smart device.
  • Smart devices, such as robots, can perform certain behaviors and actions.
  • Smart algorithms can be stored in a smart device.
  • When the smart device detects a user operation or is automatically triggered, it can perform certain behaviors and actions.
  • However, the actions of the smart device are relatively rigid and not smooth, which may make the behavior and actions of the smart device inaccurate and cause the smart device to perform actions with low accuracy.
  • The embodiments of the present application provide a control method, device, system, and storage medium of a smart device, so as to make the behavior and actions of the smart device accurate, improve the accuracy with which the smart device completes actions, and make the actions of the smart device not rigid but full of vitality.
  • In a first aspect, an embodiment of the present application provides a method for controlling a smart device, including:
  • obtaining a user operation instruction, where the user operation instruction includes user operation information used to characterize a behavior of the smart device; and
  • storing the user operation information, so that when a preset trigger condition is met, the smart device is controlled to execute the behavior according to the user operation information.
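The claimed flow above (obtain an instruction carrying operation information, store it, then execute the stored behavior once a trigger condition is met) can be sketched in a few lines. This is a minimal illustration only; the class and method names (`SmartDeviceController`, `on_user_operation`, `on_trigger`, `EchoDevice`) are hypothetical and do not appear in the patent.

```python
class SmartDeviceController:
    """Minimal sketch of the store-then-replay control flow."""

    def __init__(self, device):
        self.device = device      # any object exposing execute(info)
        self.stored_info = []     # stored user operation information

    def on_user_operation(self, instruction):
        # A user operation instruction carries user operation information
        # that characterizes a behavior of the smart device; store it.
        self.stored_info.append(instruction["operation_info"])

    def on_trigger(self, trigger_met):
        # When the preset trigger condition is met, control the device
        # to execute the stored behavior(s).
        if trigger_met:
            for info in self.stored_info:
                self.device.execute(info)


class EchoDevice:
    """Stand-in smart device that records executed behaviors."""

    def __init__(self):
        self.executed = []

    def execute(self, info):
        self.executed.append(info)
```

A controller built this way simply accumulates operation information until a trigger fires, then replays it against the device.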
  • In a second aspect, an embodiment of the present application provides a method for controlling a smart device, which is applied to a control device of the smart device, and the method includes:
  • obtaining a user operation instruction, where the user operation instruction includes user operation information used to characterize a behavior of the smart device; and
  • sending the user operation information to the smart device, so that the smart device stores the user operation information and, when a preset trigger condition is met, executes the behavior according to the user operation information.
  • An embodiment of the present application provides a control device for a smart device, including: a processor and a memory;
  • the memory is used to store program code;
  • the processor is used to call the program code, and when the program code is executed, to perform the following operations:
  • obtaining a user operation instruction, where the user operation instruction includes user operation information used to characterize a behavior of the smart device; and
  • storing the user operation information, so that when a preset trigger condition is met, the smart device is controlled to execute the behavior according to the user operation information.
  • An embodiment of the present application provides a control system for a smart device, including: a control device and the smart device;
  • the control device is configured to obtain a user operation instruction, where the user operation instruction includes user operation information used to characterize a behavior of the smart device, and to send the user operation information to the smart device;
  • the smart device is configured to store the user operation information and, when it is determined that a preset trigger condition is satisfied, execute the behavior according to the user operation information.
  • An embodiment of the present application provides a readable storage medium with a computer program stored thereon; when the computer program is executed, it implements the control method of the smart device described in the embodiments of the present application in the first aspect or the second aspect.
  • An embodiment of the present application provides a program product; the program product includes a computer program, and the computer program is stored in a readable storage medium. At least one processor of a control device of a smart device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the control device of the smart device implements the control method of the smart device according to the embodiment of the present application in the first aspect, or implements the control method of the smart device described in the embodiment of the present application in the second aspect.
  • In the embodiments of the present application, a user operation instruction is obtained, where the user operation instruction includes user operation information used to characterize the behavior of the smart device; the user operation information is stored, so that when a preset trigger condition is met, the smart device is controlled to execute the behavior according to the user operation information.
  • That is, the user's user operation instruction for the smart device is obtained, and the user operation information in the user operation instruction is stored; when it is detected that the preset trigger condition is met, the stored user operation information is retrieved to control the smart device to execute the behavior characterized by the user operation information.
  • Because the user operation information comes from the user's actual operation, the behavior it represents can make the behavior and actions of the smart device more accurate, precise, natural, and lively; thus, when the smart device needs to perform a behavior, the stored user operation information can be retrieved, and the smart device can be controlled to perform the corresponding action based on the user operation information. At this time, the behavior and actions of the smart device will be more accurate, precise, natural, and lively.
  • FIG. 1 is a first schematic diagram of an application scenario provided by this application;
  • FIG. 2 is a second schematic diagram of an application scenario provided by this application;
  • FIG. 3 is a flowchart of a method for controlling a smart device according to an embodiment of this application;
  • FIG. 4 is a schematic diagram of inputting user operation information provided by this application;
  • FIG. 5 is a schematic diagram of the behavior of the unmanned vehicle provided by this application;
  • FIG. 6 is a first schematic diagram of the action of the control device for the unmanned vehicle provided by this application;
  • FIG. 7 is a second schematic diagram of the action of the control device for the unmanned vehicle provided by this application;
  • FIG. 8 is a flowchart of a method for controlling a smart device according to another embodiment of this application;
  • FIG. 9 is a flowchart of a method for controlling a smart device according to yet another embodiment of this application;
  • FIG. 10 is a schematic diagram of the editing process provided by this application;
  • FIG. 11 is a flowchart of a method for controlling a smart device according to still another embodiment of this application;
  • FIG. 12 is a first schematic diagram of an interactive interface provided by an embodiment of this application;
  • FIG. 13 is a second schematic diagram of an interactive interface provided by an embodiment of this application;
  • FIG. 14 is a flowchart of a method for controlling a smart device provided by another embodiment of this application;
  • FIG. 15 is a schematic structural diagram of a control device for a smart device provided by an embodiment of this application;
  • FIG. 16 is a schematic structural diagram of a control system for a smart device provided by another embodiment of this application.
  • When a component is referred to as being "fixed to" another component, it can be directly on the other component, or an intervening component may also exist. When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may exist at the same time.
  • Movable platforms include, but are not limited to, drones and intelligent robots.
  • Correspondence can refer to an association relationship or binding relationship, and the correspondence between A and B means that there is an association relationship or binding relationship between A and B.
  • FIG. 1 is a schematic diagram of the first application scenario provided by this application;
  • FIG. 2 is a schematic diagram of the second application scenario provided by this application.
  • The control method of a smart device can be applied to a smart device or to a control device of a smart device.
  • the smart device can be a movable platform; the movable platform includes, but is not limited to, unmanned aerial vehicles, smart robots, unmanned vehicles, etc.
  • Figure 1 is a drone;
  • Figure 2 is an intelligent robot.
  • The control method of the smart device can also be applied to any device or system capable of completing the control method of the smart device provided in this application.
  • Smart algorithms can be stored in smart devices.
  • When the smart device detects a user operation or is automatically triggered, it can perform certain behaviors and actions.
  • However, the actions of the smart device are relatively stiff and not smooth, which may make the behavior and actions of the smart device inaccurate and cause the smart device to perform actions with low accuracy.
  • the control method, device, system and storage medium of the smart device provided in this embodiment can solve the above-mentioned problems.
  • Fig. 3 is a flowchart of a method for controlling a smart device provided by an embodiment of the application. As shown in Fig. 3, the method of this embodiment may include:
  • the execution subject of this embodiment may be a smart device or a control device of the smart device.
  • The smart device may be a smart robot, an unmanned aerial vehicle, an unmanned vehicle, or another device capable of changing its posture and/or position.
  • the smart device is a drone, and the drone is a movable platform; or, the smart device is provided with a movable platform, that is, the smart device includes a movable platform.
  • In this embodiment, an unmanned vehicle is taken as an example of the smart device serving as the execution subject for description.
  • the unmanned vehicle may include a chassis and a pan/tilt mounted on the chassis.
  • the chassis may be driven by, for example, universal wheels.
  • the pan/tilt may be equipped with at least one of a shooting device, a broadcaster, and a camera. Among them, the pan/tilt may be a single-axis pan/tilt, two-axis pan/tilt, or three-axis pan/tilt.
  • When the user operates the smart device to perform related behaviors, the user can send user operation instructions to the smart device through a controller, or send user operation instructions directly to the smart device; furthermore, the user can control the smart device through the user operation instructions.
  • the above-mentioned controller may be a remote control device connected to a smart device, or a controller on the smart device.
  • the user operation instruction may include one or more user operation information.
  • One piece of user operation information is used to instruct the smart device to perform one or more actions, such as shooting, moving, accelerating, and decelerating.
  • the user can use one or more of the following methods to input user operation information into the smart device: touch mode, physical control mode, somatosensory trigger mode, and voice trigger mode. Therefore, the user operation information includes any one or more of the following: touch information, physical control operation information, somatosensory information, and voice information.
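The four input modes and the corresponding kinds of user operation information listed above could be modeled as a simple enumeration and lookup table. This is an illustrative sketch only; the names `InputMode`, `MODE_TO_INFO`, and `classify` are assumptions, not terms from the patent.

```python
from enum import Enum


class InputMode(Enum):
    """The four input modes described in the text."""
    TOUCH = "touch"
    PHYSICAL_CONTROL = "physical_control"
    SOMATOSENSORY = "somatosensory"
    VOICE = "voice"


# Each input mode yields the corresponding kind of user operation information.
MODE_TO_INFO = {
    InputMode.TOUCH: "touch information",
    InputMode.PHYSICAL_CONTROL: "physical control operation information",
    InputMode.SOMATOSENSORY: "somatosensory information",
    InputMode.VOICE: "voice information",
}


def classify(mode: InputMode) -> str:
    """Return the kind of user operation information a given mode produces."""
    return MODE_TO_INFO[mode]
```

Keeping the mapping explicit makes it easy to accept several modes in one instruction, since the stored record can simply tag each piece of information with its mode.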
  • A touch controller is provided on the smart device; for example, the touch controller is a smart display screen. The user can touch the touch controller and input touch information into it, where the touch information indicates the behavior that the smart device needs to perform; the touch controller sends the touch information to the smart device; further, the smart device performs the behavior indicated by the touch information according to the touch information.
  • FIG. 4 is a schematic diagram of the input of user operation information provided by this application.
  • Here, the execution subject is taken to be the control device for exemplary description. A remote control device is provided, and the interactive interface of the remote control device can display touch buttons; the remote control device can communicate with the smart device. As shown in FIG. 4, the remote control device displays two virtual joysticks; the user can touch a virtual joystick to input touch information into the remote control device.
  • the touch information indicates the behavior that the smart device needs to perform; the remote control device sends the touch information to the smart device; further, the smart device performs the behavior indicated by the touch information according to the touch information.
  • the control device may include, but is not limited to, mobile terminals such as mobile phones.
  • When a user operates a smart device, the smart device can obtain a user operation instruction, and the user operation instruction includes one or more pieces of user operation information. To facilitate cascading the user operation information and indicating behaviors, the user operation information may include behavior parameters, and the behavior parameters are used to indicate the behavior of the smart device.
  • one behavior parameter indicates one behavior, or multiple behavior parameters indicate one behavior.
  • the behavior parameters include any one or more of the following: movement parameters, pose parameters, shooting parameters, skill parameters, and audio parameters.
  • For example, the user operates the smart device to move and shoot, and the smart device is then required to complete both the movement behavior and the shooting behavior.
  • the movement parameters and the shooting parameters can be obtained.
  • movement parameters include but are not limited to movement distance, movement speed, and movement acceleration
  • shooting parameters include but are not limited to shooting angle and range distance.
  • In another example, a user operates a smart device to perform limb movements, and the smart device is required to complete the limb-movement behavior.
  • a robotic arm is installed on the smart device; at this time, the smart device can obtain the pose parameters of the robotic arm.
  • the pose parameters include, but are not limited to, the position of the robot arm of the smart device, and the moving speed of the robot arm of the smart device.
  • When a user operates the smart device to perform various skill operations, the smart device needs to complete the related skills; at this time, the smart device can obtain skill parameters, which can be parameters corresponding to a special function, or can enable or disable certain functions.
  • Skills include, but are not limited to, jumping, shooting, photographing, infrared emission, continuous rotation in circles, and so on.
  • Skill parameters include, but are not limited to, skill types and various parameters of skills.
  • For example, the skill parameters of the jumping skill include, but are not limited to, jumping speed and jumping distance.
  • The skill parameters of shooting skills include, but are not limited to, shooting range, shooting object, shooting time, and shooting angle.
  • The skill parameters of infrared skills include, but are not limited to, the range of infrared rays and the angle of infrared rays.
  • Voice parameters include, but are not limited to, volume, voice content, and role type.
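The five behavior-parameter families described above (movement, pose, shooting, skill, and audio) could be carried in one record with optional fields, one per family, so that any subset indicates a behavior. This is a sketch under assumptions; the dataclass and field names are invented for illustration and are not from the patent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class MovementParams:
    distance_m: Optional[float] = None       # movement distance
    speed_mps: Optional[float] = None        # movement speed
    acceleration_mps2: Optional[float] = None  # movement acceleration


@dataclass
class ShootingParams:
    angle_deg: Optional[float] = None        # shooting angle
    range_m: Optional[float] = None          # range distance


@dataclass
class UserOperationInfo:
    # Any subset of the five parameter families may be present; one or
    # several behavior parameters together indicate a behavior.
    movement: Optional[MovementParams] = None
    pose: Optional[dict] = None      # e.g. arm position, arm moving speed
    shooting: Optional[ShootingParams] = None
    skill: Optional[dict] = None     # e.g. {"type": "jump", "speed": 2.0}
    audio: Optional[dict] = None     # e.g. {"volume": 5, "content": "hi"}

    def behaviors(self):
        """Names of the behavior families this record indicates."""
        families = ("movement", "pose", "shooting", "skill", "audio")
        return [name for name in families if getattr(self, name) is not None]
```

A move-and-shoot instruction, for instance, would populate only the `movement` and `shooting` fields, and `behaviors()` tells the executor which behaviors to run.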
  • S102: Store the user operation information, so as to control the smart device to execute the behavior according to the user operation information when a preset trigger condition is met.
  • Since the smart device obtains the user operation information, the smart device can store the user operation information; that is, the smart device stores the behavior that the smart device performs under the user operation instruction.
  • the smart device can detect an external trigger condition.
  • the smart device can detect the user's active trigger. For example, the user issues a trigger instruction through voice, touch, pose, etc.
  • the smart device can detect the state information of the smart device itself, or detect the environmental information recognized by the smart device, to determine whether the state of the smart device meets the trigger condition.
  • the state of the smart device includes, but is not limited to: the voice state of the smart device, the position state of the smart device, the pose state of the smart device, and the skill state of the smart device. For example, detecting whether the speed of the smart device meets a preset speed, or detecting whether the pose of the smart device is a preset pose, or detecting whether the position of the smart device is within the preset position range.
  • When the preset trigger condition is met, the smart device can then be controlled to execute the behavior represented by the stored user operation information.
  • the preset trigger condition may be a preset user operation.
  • For example, the preset user operation may be a user's preset action, a user's preset pose, or a user's preset audio.
  • Alternatively, the preset trigger condition may be an external environmental condition; for example, the external environmental condition may be a preset environmental temperature, a preset environmental humidity, a preset environmental noise level, or a preset environmental identifier.
  • Alternatively, the preset trigger condition may be a preset state of the smart device.
  • For example, the preset state of the smart device may be a preset position of the smart device, a preset motion state of the smart device, or a preset pose state of the smart device.
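The three trigger families described above (a preset user operation, an external environmental condition, and a preset device state) can be checked by a single predicate. The thresholds, field names, and defaults below are invented for illustration; the patent does not specify them.

```python
def trigger_met(user_action=None, environment=None, device_state=None,
                preset_action="wave", preset_temp_c=25.0,
                preset_position=(0.0, 0.0), position_tolerance=0.5):
    """Return True if any of the three preset trigger families is satisfied."""
    # 1) A preset user operation (action, pose, or audio).
    if user_action is not None and user_action == preset_action:
        return True
    # 2) An external environmental condition, e.g. ambient temperature.
    if environment is not None and environment.get("temp_c") == preset_temp_c:
        return True
    # 3) A preset device state, e.g. position within a preset range.
    if device_state is not None:
        x, y = device_state.get("position", (None, None))
        px, py = preset_position
        if (x is not None
                and abs(x - px) <= position_tolerance
                and abs(y - py) <= position_tolerance):
            return True
    return False
```

In practice each family would likely have its own detector (voice recognition, environment sensors, odometry), with this predicate combining their results.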
  • In one example, when the user operates the smart device, if the smart device obtains a user operation instruction, it can perform the behavior instructed by the user, and the smart device can obtain and store the user operation information in the user operation instruction. Then, the user can perform certain actions; if the smart device determines that the user's action instructs the smart device to perform a certain behavior, the smart device retrieves the user operation information corresponding to the user's action and, according to that user operation information, completes the behavior it indicates.
  • In another example, the smart device controls itself to perform the behavior instructed by the user according to the user operation instruction, and the smart device can store the user operation information in the user operation instruction.
  • the smart device can detect the state of the smart device.
  • If the smart device determines that its state meets certain conditions, for example, that the position of the smart device is at a preset position or that the pose of the smart device is a preset pose, the smart device retrieves the user operation information corresponding to the state of the smart device; then, according to that user operation information, the smart device controls itself to perform the behavior it indicates.
  • When the execution subject is the control device, the control device sends the user operation information corresponding to the state of the smart device to the smart device, so that the smart device performs the behavior indicated by the user operation information; the smart device and/or the control device can store the user operation information.
  • the user can use the virtual joystick of the remote control device shown in FIG. 4 to input user operation instructions to the smart device to control the behavior of the unmanned vehicle (smart device).
  • The left virtual joystick can control the chassis of the unmanned vehicle, where the chassis provides the unmanned vehicle's forward, backward, left, and right movement; for example, the left virtual joystick controls the unmanned vehicle's moving direction and acceleration.
  • the right virtual joystick controls the gimbal of the unmanned vehicle.
  • For example, the right virtual joystick controls the pitch and yaw of the gimbal; since the gimbal carries a shooting device, when the right virtual joystick controls the gimbal, it can indirectly control the shooting angle of the shooting device on the gimbal.
  • An inertial measurement unit (IMU) can be provided in the remote control device. The IMU is a device that measures the three-axis attitude angle (or angular rate) and acceleration of an object; thus, the user can input user operation instructions through the IMU in the remote control device, that is, use the posture changes of the remote control device to input operation instructions and thereby control the smart device.
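Mapping the remote controller's IMU attitude onto gimbal pitch/yaw setpoints, as described above, might look like the following sketch. The function name, scaling, and clamp limits are assumptions for illustration, not values from the patent.

```python
def imu_to_gimbal_command(imu_pitch_deg, imu_yaw_deg,
                          pitch_limit=(-30.0, 30.0), yaw_limit=(-90.0, 90.0)):
    """Clamp the controller's attitude angles into gimbal pitch/yaw setpoints."""
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    return {
        "pitch": clamp(imu_pitch_deg, *pitch_limit),
        "yaw": clamp(imu_yaw_deg, *yaw_limit),
    }
```

Clamping keeps the command inside the gimbal's mechanical range even when the user tilts the controller past it; a real implementation would also filter the IMU signal before mapping it.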
  • At this time, the remote control device can control the behavior of the unmanned vehicle according to the user operation information in the user operation instruction; for example, the remote control device can use the corresponding pitch/yaw operation to control the movement of the unmanned vehicle's gimbal.
  • the remote control device can store user operation information in the user operation instruction.
  • FIG. 5 is a schematic diagram of the behavior of the unmanned vehicle provided by this application;
  • FIG. 6 is a first schematic diagram of the action of the unmanned vehicle control device provided by this application;
  • FIG. 7 is a second schematic diagram of the action of the unmanned vehicle control device provided by this application.
  • the user can send user operation instructions to the unmanned vehicle through the control device.
  • For example, the user operation instruction is used to instruct the chassis of the unmanned vehicle to move;
  • or, the user operation instruction is used to instruct the gimbal of the unmanned vehicle to pitch and/or rotate.
  • the chassis of the unmanned vehicle can be controlled to move left and right.
  • In another example, the user utters a voice command to the smart device, and the smart device then performs the corresponding action according to the received voice.
  • the smart device can issue a voice response based on the voice uttered by the user, or the smart device can move based on the voice uttered by the user, or the smart device can perform corresponding skills based on the voice uttered by the user.
  • The smart device can perform semantic analysis on the voice to obtain the user operation information in the voice, and the smart device can store the user operation information.
  • Afterwards, the smart device detects whether a trigger condition is received; for example, the user utters a voice again, so that the smart device can determine from the currently received voice that the trigger condition is reached. Then, the smart device retrieves the stored user operation information and executes the behavior corresponding to the user operation information.
  • The following step may also be included: when the user operation instruction is obtained, the smart device is controlled to execute the behavior according to the user operation information.
  • the smart device may further control the smart device to perform a behavior corresponding to the user operation information according to the user operation information in the user operation instruction.
  • In this way, the behavior represented by the user operation information in the user operation instruction is executed and presented by the smart device in real time, so that the user can confirm in real time whether the stored user operation information meets expectations, and can correct errors in, or further process, the user operation information according to the actually displayed result.
  • the user operation instruction includes one or more user operation information; one user operation information is used to instruct the smart device to perform one or more actions.
  • a user operates a smart device, and then the smart device obtains user operation information, and the user operation information includes shooting parameters; the smart device can control the smart device to shoot according to the shooting parameters in the user operation information.
  • In this embodiment, a user operation instruction is obtained, where the user operation instruction includes user operation information used to characterize the behavior of the smart device; the user operation information is stored, so as to control the smart device to execute the behavior according to the user operation information when a preset trigger condition is met.
  • That is, the user's user operation instructions for the smart device are obtained, and the user operation information in the user operation instructions is stored; when it is detected that the preset trigger condition is met, the stored user operation information is retrieved to control the smart device to execute the behavior represented by the user operation information.
  • The behavior represented by the user operation information can make the behavior and actions of the smart device more accurate, precise, natural, and lively; thus, when the smart device needs to perform a behavior, the stored user operation information can be retrieved, and the smart device can be controlled to perform the corresponding action based on the user operation information. At this time, the behavior and actions of the smart device will be more accurate, precise, natural, and lively.
  • FIG. 8 is a flowchart of a method for controlling a smart device according to another embodiment of the application. As shown in FIG. 8, the method in this embodiment may include:
  • each user operation instruction further includes an instruction identifier, and the instruction identifier is used to identify the user operation instruction.
  • In one case, the generation times of the multiple user operation instructions are continuous; in this case, when the preset trigger condition is satisfied, the execution order of the behaviors represented by each piece of user operation information is the same as the generation order of the multiple user operation instructions. In another case, the generation times of at least some of the user operation instructions are discontinuous.
  • the execution subject of this embodiment may be a smart device, or may be a control device of the smart device.
  • the smart device may be a movable platform.
  • the execution subject of this embodiment is the same as in the above-mentioned embodiment, and one case is taken as an example for description.
  • each user operation instruction may have its own instruction identifier, and the instruction identifier may identify the user operation instruction.
  • Each user operation instruction includes user operation information, and the user operation information characterizes the behavior of the smart device.
  • Each piece of user operation information may include behavior parameters; the behavior parameters include any one or more of the following: movement parameters, pose parameters, shooting parameters, skill parameters, and audio parameters.
  • the movement parameters include but are not limited to speed, acceleration, and displacement.
  • the pose parameters include, but are not limited to, pose position and pose state.
  • Shooting parameters include, but are not limited to, shooting range, shooting angle, and shooting speed.
  • Skill parameters include, but are not limited to, movement skills, jumping skills, shooting skills, infrared skills, and zoom skills.
  • Audio parameters include but are not limited to playback parameters, voice parameters, and volume parameters.
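The behavior parameters listed above might be grouped as in the following sketch; all class and field names are assumptions for illustration, not definitions from this application:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative grouping of the behavior parameters described in the text.

@dataclass
class MovementParams:
    speed: float = 0.0
    acceleration: float = 0.0
    displacement: float = 0.0

@dataclass
class ShootingParams:
    shooting_range: float = 0.0
    shooting_angle: float = 0.0
    shooting_speed: float = 0.0

@dataclass
class BehaviorParams:
    movement: Optional[MovementParams] = None
    pose: Optional[dict] = None       # pose position / pose state
    shooting: Optional[ShootingParams] = None
    skill: Optional[str] = None       # e.g. "jumping", "infrared", "zoom"
    audio: Optional[dict] = None      # playback / voice / volume parameters

# One piece of user operation information may carry any subset of these.
info = BehaviorParams(movement=MovementParams(speed=1.5), skill="zoom")
print(info.movement.speed)  # → 1.5
```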
  • the user can continuously operate the smart device so that it performs a series of behaviors; in this case, the user continuously issues user operation instructions to the smart device, so the acquired user operation instructions are continuous in time, that is, the generation times of the multiple acquired user operation instructions are continuous. Each time the user inputs a user operation instruction, the smart device executes a behavior according to the user operation information in that instruction. Therefore, in step S202, after the user operation information in the multiple acquired user operation instructions is stored, when the preset trigger condition is met the smart device needs to execute the behaviors represented by the user operation information in the order in which the user operation instructions were generated; that is, the execution sequence of the behaviors represented by each piece of user operation information is the same as the generation sequence of the multiple user operation instructions.
  • the user inputs user operation instruction A
  • user operation instruction A includes user operation information a and user operation information b
  • the generation time of user operation instruction A is time 1.
  • the smart device performs a behavior 1 according to each piece of user operation information in user operation instruction A.
  • the user inputs user operation instruction B.
  • User operation instruction B includes user operation information c and user operation information d.
  • the generation time of user operation instruction B is time 2; at this time, the smart device performs a behavior 2 according to each piece of user operation information in user operation instruction B.
  • the user inputs the user operation instruction C.
  • the user operation instruction C includes user operation information e
  • the generation time of the user operation instruction C is time 3.
  • the smart device performs a behavior 3 according to each piece of user operation information in user operation instruction C.
  • Time 1, time 2, and time 3 are sequential in time sequence, and time 1, time 2, and time 3 are continuous in time.
  • when the preset trigger condition is met, the stored user operation information a, user operation information b, user operation information c, user operation information d, and user operation information e can be retrieved; then, the smart device can be controlled to execute behavior 1, behavior 2, and behavior 3 in sequence.
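A minimal sketch of this continuous case, replaying the stored operation information in the order the instructions were generated (the data layout is an assumption for illustration):

```python
# Instructions A, B, C from the example above, with continuous generation times.
instructions = [
    {"id": "A", "time": 1, "info": ["a", "b"]},   # behavior 1
    {"id": "B", "time": 2, "info": ["c", "d"]},   # behavior 2
    {"id": "C", "time": 3, "info": ["e"]},        # behavior 3
]

# Store the operation information ordered by the instructions' generation time,
# so replay preserves the generation sequence.
stored = [piece
          for inst in sorted(instructions, key=lambda i: i["time"])
          for piece in inst["info"]]
print(stored)  # → ['a', 'b', 'c', 'd', 'e']
```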
  • alternatively, the user can operate the smart device discontinuously to make it perform a series of behaviors; in this case, the user issues user operation instructions to the smart device at intervals, that is, the operation of the smart device can be suspended between instructions; therefore, at least some of the user operation instructions acquired by the smart device are not continuous in time.
  • alternatively, the user inputs multiple user operation instructions into the smart device to make it perform a series of behaviors; the smart device then adjusts the order of the multiple user operation instructions, so that at least some of the user operation instructions are not continuous in time.
  • in step S202, after storing the user operation information in the multiple acquired user operation instructions, when the preset trigger condition is met the smart device does not need to execute the behaviors represented by the user operation information in the order in which the user operation instructions were generated; instead, the smart device can execute the behaviors represented by the user operation information in the order in which the user operation information is stored.
  • the user inputs user operation instruction A
  • user operation instruction A includes user operation information a and user operation information b
  • the generation time of user operation instruction A is time 1.
  • the smart device performs a behavior 1 according to each piece of user operation information in user operation instruction A.
  • the user inputs user operation instruction B.
  • User operation instruction B includes user operation information c and user operation information d.
  • the generation time of user operation instruction B is time 2; at this time, the smart device performs a behavior 2 according to each piece of user operation information in user operation instruction B.
  • the user inputs the user operation instruction C.
  • the user operation instruction C includes user operation information e
  • the generation time of the user operation instruction C is time 3.
  • the smart device performs a behavior 3 according to each piece of user operation information in user operation instruction C.
  • Time 1, time 2, and time 3 are sequential in time sequence, and time 1, time 2, and time 3 are continuous in time.
  • the order of user operation instruction A, user operation instruction B, and user operation instruction C can be adjusted, and the adjusted order is user operation instruction A, user operation instruction C, and user operation instruction B; thus, the multiple user operation instructions stored by the smart device are not continuous in generation time.
  • when the preset trigger condition is met, the smart device can retrieve the stored user operation information a, user operation information b, user operation information e, user operation information c, and user operation information d; then, it controls the smart device to perform behavior 1, behavior 3, and behavior 2 in sequence.
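The adjusted-order case above can be sketched as follows, assuming the stored order is A, C, B (the data layout is illustrative):

```python
# Operation information carried by instructions A, B, C from the example above.
instructions = {"A": ["a", "b"], "B": ["c", "d"], "C": ["e"]}

# The adjusted storage order: A, then C, then B.
adjusted_order = ["A", "C", "B"]

# Replay follows the stored order, not the generation order.
stored = [piece for inst_id in adjusted_order for piece in instructions[inst_id]]
print(stored)  # → ['a', 'b', 'e', 'c', 'd']
```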
  • S202: Store the user operation information corresponding to the multiple user operation instructions in association to obtain a user operation information set, so that when a preset trigger condition is met, the smart device is controlled to perform the behaviors represented by each piece of user operation information according to the user operation information set.
  • the smart device has acquired multiple user operation instructions, and each user operation instruction includes at least one piece of user operation information; these pieces of user operation information together represent a series of behaviors of the smart device, so the pieces of user operation information in the multiple user operation instructions are related to one another. The smart device therefore associates each piece of user operation information in the multiple acquired user operation instructions to obtain a user operation information set.
  • a correspondence between preset trigger conditions and user operation information sets can be configured, that is, multiple preset trigger conditions are configured, and each preset trigger condition corresponds to its own user operation information set.
  • when a preset trigger condition is met, the smart device retrieves the user operation information set corresponding to that trigger condition; then, according to the set corresponding to the preset trigger condition, the smart device is controlled to perform the behavior represented by each piece of user operation information in the set.
  • the smart device sequentially executes the behavior corresponding to each user operation instruction, where user operation instruction A includes user operation information a and user operation information b, user operation instruction B includes user operation information c and user operation information d, user operation instruction C includes user operation information e, and user operation instruction D includes user operation information f. Then, this input stops.
  • the smart device needs to associate and store user operation information a, user operation information b, user operation information c, user operation information d, user operation information e, and user operation information f in sequence to obtain a user operation information set Q.
  • when the preset trigger condition is met, the smart device retrieves the user operation information set Q corresponding to that trigger condition; then, according to each piece of user operation information in the set Q, the smart device is controlled to perform a series of behaviors.
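The correspondence between preset trigger conditions and user operation information sets might be modeled as a simple mapping; the condition names and set contents below are illustrative assumptions:

```python
# Each preset trigger condition maps to its own user operation information set.
trigger_to_set = {
    "device_started": ["a", "b", "c", "d", "e", "f"],  # set Q from the example above
    "low_battery":    ["return_home"],
}

def on_trigger(condition):
    """Retrieve the information set for a trigger condition and describe the replay."""
    info_set = trigger_to_set.get(condition, [])       # empty if no set is configured
    return [f"perform {info}" for info in info_set]

print(on_trigger("low_battery"))  # → ['perform return_home']
```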
  • the smart device can store the recorded user operation information all at once; or, during steps S201 and S202, the smart device can store each piece of user operation information in real time.
  • the method provided in this embodiment may further include: in the process of acquiring multiple user operation instructions, if a pause instruction is received, suspending the storage of user operation information.
  • during steps S201 and S202, the smart device can store the user operation information in a user operation instruction in real time as the user inputs the instruction; in this process, if the smart device receives a pause instruction, it can suspend the storage and processing of the user operation information in the user operation instructions.
  • the user may issue a pause instruction through voice, gesture, touch, and so on; or, the smart device detects its own state and generates a pause instruction when it determines that this state meets a certain condition; or, the smart device detects the state of the external environment and generates a pause instruction when it determines that this state meets a certain condition.
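The three sources of a pause instruction described above could be combined as in this sketch; the thresholds and field names are assumptions chosen only for illustration:

```python
# A pause instruction may come from the user directly, from the device's own
# state, or from the external environment's state.

def should_pause(user_paused, device_state, env_state):
    if user_paused:                               # voice / gesture / touch pause
        return True
    if device_state.get("battery", 100) < 10:     # example device-state condition
        return True
    if env_state.get("wind_speed", 0) > 15:       # example environment condition
        return True
    return False

print(should_pause(False, {"battery": 5}, {}))  # → True
```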
  • each user operation instruction includes user operation information used to characterize the behavior of the smart device; the user operation information corresponding to the multiple user operation instructions is stored in association to obtain a user operation information set.
  • the smart device is controlled to execute the behavior represented by each user operation information according to the user operation information set.
  • the user operation information in the multiple user operation instructions is stored in association to obtain the user operation information set; because each piece of user operation information in the set can indicate one of a series of behaviors, when the preset trigger condition is met, the stored user operation information set can be retrieved, and the smart device can be controlled to perform a series of behaviors according to each piece of user operation information in the set.
  • the behavior represented by the user operation information can make the behavior and actions of the smart device more accurate, precise, natural, and energetic; thus, when the smart device needs to perform a behavior, the stored user operation information can be retrieved, and the smart device can be controlled to perform the corresponding actions based on it. The behavior and actions of the smart device are then more accurate, precise, natural, and energetic.
  • FIG. 9 is a flowchart of a method for controlling a smart device provided by another embodiment of the application. As shown in FIG. 9, the method of this embodiment may include:
  • the execution subject of this embodiment may be a smart device, or may be a control device of the smart device.
  • the smart device may be a movable platform.
  • the execution subject of this embodiment is the same as in the above-mentioned embodiment, and one case is taken as an example for description.
  • an opening instruction is required to indicate that recording of the user operation information in the user operation instructions issued by the user can begin. Therefore, the smart device can detect in real time whether an opening instruction is received; after determining that an opening instruction has been received, the smart device determines that, once it obtains a user operation instruction, it can record the user operation information in that instruction.
  • recording does not necessarily mean storage: during the recording process, recording can be paused and the recorded information can be edited, and after recording is completed, whether to store can be determined according to the trigger condition.
  • the user issues an opening instruction through touch, joystick, gesture, voice, and so on.
  • the state of the smart device may be detected, and when it is determined that the state of the smart device satisfies a certain condition, an opening instruction is generated.
  • for example, when the smart device determines that its engine has started, it generates the opening instruction.
  • the smart device may detect the state of the external environment, and when it is determined that the state of the external environment satisfies certain conditions, generate a turn-on instruction.
  • for example, the smart device generates the turn-on instruction when it determines that the vibration amplitude of the external environment is greater than a preset amplitude.
  • this step may refer to step S101 shown in FIG. 3, or this step may refer to step S201 shown in FIG. 8, and will not be described again.
  • this step may refer to step S102 shown in FIG. 3, or this step may refer to step S202 shown in FIG. 8, and will not be described again.
  • the end instruction is generated according to the time indicated by the opening instruction and the preset duration.
  • in step S303, or in the process of storing user operation information in step S303, the smart device can detect in real time whether an end instruction is received; after determining that an end instruction has been received, the smart device determines that it no longer needs to record user operation information, and the recording and storage of user operation information can be ended.
  • the user can proactively issue an end instruction.
  • the user sends an end instruction through touch, joystick, gesture, voice, and so on.
  • the smart device may detect its own state and generate an end instruction when that state meets a certain condition; for example, when it determines that its engine is no longer running, it generates the end instruction.
  • the smart device can detect the state of the external environment, and generate an end instruction when it is determined that the state of the external environment meets a certain condition, for example, when it is determined that the noise of the external environment is less than a preset noise value, it is determined to generate an end instruction.
  • when the opening instruction is obtained, the smart device can obtain the recording start time indicated by the opening instruction, and the opening instruction may also indicate a preset duration; the smart device then superimposes the preset duration on the start time indicated by the opening instruction to determine a time point, which serves as the time at which the end instruction is generated; when this time point is reached, the smart device automatically generates the end instruction indicating that the recording of user operation information is to end.
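Deriving the end-instruction time from the opening instruction's start time plus the preset duration can be sketched as follows (the concrete times are illustrative):

```python
from datetime import datetime, timedelta

# Time indicated by the opening instruction, and the preset duration it carries.
start_time = datetime(2019, 11, 28, 10, 0, 0)
preset_duration = timedelta(seconds=90)

# Superimposing the two gives the time point at which the end instruction fires.
end_time = start_time + preset_duration
print(end_time.strftime("%H:%M:%S"))  # → 10:01:30
```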
  • the editing process includes any one or more of the following:
  • move processing, which is used to instruct adjustment of the execution order of the behaviors represented by user operation information; delete processing, which is used to instruct deletion of user operation information; add processing, which is used to instruct addition of at least one piece of user operation information to form a user operation information set; superimposition processing, which is used to instruct association of the execution order of the behaviors represented by at least two pieces of user operation information; zoom processing, which is used to instruct that the execution duration of the behavior represented by user operation information be lengthened or shortened; cropping processing, which is used to instruct that the behavior represented by user operation information be divided into multiple sub-behaviors when executed, or be executed incompletely.
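The six editing operations might be sketched as small functions over stored operation information; the representations chosen here (lists and dicts, and the function names) are assumptions for illustration:

```python
# Minimal sketches of the six editing operations described above.

def move(seq, i, j):          # move processing: adjust execution order
    seq = list(seq)
    seq.insert(j, seq.pop(i))
    return seq

def delete(seq, i):           # delete processing: remove one piece of information
    return seq[:i] + seq[i + 1:]

def add(seq, i, info):        # add processing: insert operation information
    return seq[:i] + [info] + seq[i:]

def superimpose(a, b):        # superimposition: associate two behaviors' execution
    return {"together": [a, b]}

def zoom(info, factor):       # zoom processing: lengthen or shorten the duration
    return {**info, "duration": info["duration"] * factor}

def crop(info, keep):         # cropping: keep only some of the sub-behaviors
    return [s for s in info["sub_behaviors"] if s in keep]

print(move(["a", "b", "c"], 2, 0))                    # → ['c', 'a', 'b']
print(zoom({"name": "pan", "duration": 2.0}, 1.5))    # → {'name': 'pan', 'duration': 3.0}
```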
  • after step S303 or S304, that is, after the smart device stores the user operation information, the smart device can edit the stored user operation information to obtain processed user operation information; or, during the storage of user operation information in step S303, the smart device may edit the stored user operation information to obtain processed user operation information; or, before storing the user operation information in step S303, the smart device may edit the recorded user operation information.
  • after the smart device obtains the processed user operation information, it can replace the previous user operation information with the processed user operation information; or, after obtaining the processed user operation information, it can retain both the previous user operation information and the processed user operation information.
  • the editing processing includes but is not limited to the following processing: moving processing, deleting processing, adding processing, superimposing processing, zooming processing, cropping processing.
  • the smart device receives a move processing instruction that indicates one or more pieces of user operation information, together with the original position and the target position of the user operation information to be moved; the smart device then moves the indicated user operation information to the target position according to the move processing instruction.
  • the smart device receives a delete processing instruction that indicates one or more pieces of user operation information to be deleted; the smart device then deletes the indicated user operation information according to the delete processing instruction.
  • the smart device receives an add processing instruction that indicates one or more pieces of other user operation information to be added; the smart device then inserts the indicated other user operation information before, after, or within the current user operation information according to the add processing instruction.
  • the smart device receives a superimposition processing instruction that indicates multiple pieces of user operation information to be superimposed; the smart device then superimposes the indicated pieces of user operation information according to the superimposition processing instruction.
  • for example, user operation information a is moving and user operation information b is shooting; a superimposition processing instruction is received indicating that user operation information a and user operation information b need to be superimposed; the smart device then superimposes them according to the superimposition processing instruction to obtain processed user operation information A.
  • the user operation information A indicates that the smart device needs to move and shoot; for example, it may indicate that the smart device needs to move and shoot at the same time, or that the smart device needs to shoot at a certain position during the movement, or that the smart device needs to make a move during a certain period of the shooting.
  • the smart device receives a zoom processing instruction that indicates one or more pieces of user operation information whose behavior execution duration needs to be scaled; the smart device then lengthens or shortens the duration of the indicated user operation information according to the zoom processing instruction.
  • for example, user operation information a is zooming; a zoom processing instruction is received indicating that the duration of user operation information a needs to be lengthened; the duration of user operation information a is then lengthened according to the zoom processing instruction.
  • the smart device receives a cropping processing instruction that indicates one or more pieces of user operation information and indicates that the behavior represented by the user operation information needs to be divided into multiple sub-behaviors when executed; the smart device then divides the behavior represented by the indicated user operation information into multiple sub-behaviors.
  • for example, the smart device receives a cropping processing instruction, and the user operation information indicated by the instruction includes zoom operation information, moving operation information, photographing operation information, and shooting operation information.
  • the behavior represented by the user operation information thus includes zooming, moving, photographing, and shooting, that is, the smart device can zoom, move, photograph, and shoot at the same time according to the user operation information; the user operation information can then be divided to obtain zoom operation information, moving operation information, photographing operation information, and shooting operation information, so that the behavior represented by the user operation information is divided into multiple sub-behaviors, namely zooming, moving, photographing, and shooting. These sub-behaviors can be executed individually or combined with other behaviors.
  • alternatively, the smart device receives a cropping processing instruction that indicates one or more pieces of user operation information and indicates that, when the behavior represented by the user operation information is executed, the executed behavior is incomplete; the smart device then crops the behavior represented by the indicated user operation information, removing part of the sub-behaviors.
  • for example, the smart device receives a cropping processing instruction, and the user operation information indicated by the instruction includes zoom operation information, moving operation information, photographing operation information, and shooting operation information.
  • the behavior represented by the user operation information thus includes zooming, moving, photographing, and shooting, that is, the smart device can zoom, move, photograph, and shoot at the same time according to the user operation information; the smart device can then divide the user operation information and remove the zoom operation information, so that the zooming sub-behavior in the behavior represented by the user operation information is removed.
  • the smart device detects the trigger condition, and when it determines that the preset trigger condition is met, it retrieves the processed user operation information corresponding to that trigger condition; the smart device is then controlled to perform the corresponding behavior according to the processed user operation information.
  • the above-mentioned editing processing instruction may be issued by the user through touch, voice, pose, or the like.
  • FIG. 10 is a schematic diagram of the editing process provided by this application.
  • an interactive interface can be displayed on the smart device or on the control device of the smart device, and the user can watch, on the interactive interface, the behavior represented by the acquired user operation information; the interactive interface provides a play button, a pause button, and an edit button. The user can touch the play button to watch the behavior of the smart device represented by the acquired user operation information, touch the pause button to pause this playback, and touch the edit button to edit the acquired user operation information.
  • on this basis, edit frames of user operation information sorted in chronological order can be provided on the interface, and the user can touch these edit frames to edit the user operation information.
  • for example, the user touches edit frame 1 and drags it before edit frame 2, where edit frame 1 corresponds to user operation information a and edit frame 2 corresponds to user operation information b; a move processing instruction issued by the user is thus received, indicating that user operation information a is to be moved before user operation information b; user operation information a is then moved before user operation information b according to the move processing instruction.
  • the interactive interface involved in the interactive actions in the embodiments of the present invention may be one interface or multiple interfaces; when there are multiple interactive interfaces, they can be switched between, and when there is one, part of the identifiers on the interactive interface can be hidden or revealed.
  • before controlling the smart device to perform the behavior in step S303 or step S305, the following process may also be performed:
  • the preset trigger condition is a specific trigger condition
  • after the smart device stores the user operation information, or obtains the processed user operation information, or obtains the user operation information set, it can detect whether the preset trigger condition occurs; after detecting that the preset trigger condition is met, that is, when it determines that the preset trigger condition is a specific trigger condition, the smart device determines whether the current state of the smart device meets a preset condition.
  • for the preset trigger condition, reference may be made to step S102 shown in FIG. 3.
  • the specific trigger condition may be a trigger condition that meets certain requirements.
  • the preset trigger condition may be a user operation corresponding to user operation information, and the specific trigger condition is specific user operation information.
  • when the smart device determines that its current state meets the preset condition, it determines that it can retrieve the user operation information corresponding to the preset trigger condition; the smart device then controls the smart device to perform the behavior represented by the user operation information according to that information.
  • when the smart device determines that its current state does not meet the preset condition, then in order to retrieve and execute the user operation information, the smart device needs to control the smart device to perform certain operations so that its current state meets the preset condition; the smart device then retrieves the user operation information corresponding to the preset trigger condition and, according to it, controls the smart device to perform the behavior represented by the user operation information.
  • the preset condition refers to a specific trigger condition.
  • the preset conditions include, but are not limited to, preset external environment conditions, smart device states, and specific user operations; for example, a user operation can be a user action, user pose, or user audio; an external environment condition can be environmental temperature, environmental humidity, or environmental noise; and a smart device state can be the location of the smart device, the motion state of the smart device, or the pose state of the smart device.
  • controlling the smart device to perform corresponding operations includes, but is not limited to, the following operations: move, rotate, zoom, shoot, and pose.
  • the smart device detects that the preset trigger condition is a specific trigger condition
  • for example, the specific trigger condition is that the smart device starts to run. The smart device then detects whether its pan/tilt is at a preset position; if it determines that the pan/tilt is at the preset position, the user operation information for shooting is retrieved, and the smart device controls the shooting device on the pan/tilt to shoot according to the user operation information.
  • if the smart device determines that its pan/tilt is not at the preset position, it controls the pan/tilt to move to the preset position; the smart device then retrieves the shooting user operation information and, according to it, controls the shooting device on the pan/tilt to shoot.
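The pan/tilt example above, checking and adjusting the device state before replaying the stored shooting behavior, might look like this sketch (all names and state values are illustrative assumptions):

```python
# Check the current state against the preset condition; adjust it first if
# needed, then replay the stored user operation information.

def run_on_trigger(gimbal_position, preset_position, stored_info):
    actions = []
    if gimbal_position != preset_position:
        # State does not meet the preset condition: move the gimbal first.
        actions.append(f"move gimbal to {preset_position}")
    # Then retrieve and execute the stored behaviors.
    actions.extend(f"perform {info}" for info in stored_info)
    return actions

print(run_on_trigger("stowed", "level", ["shoot"]))
# → ['move gimbal to level', 'perform shoot']
```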
  • in this embodiment, the acquired user operation information can also be edited, for example moved, deleted, added to, superimposed, zoomed, or cropped, to obtain user operation information in a variety of different forms, or a set of multiple pieces of user operation information; then, when a preset trigger condition is met, the smart device is controlled to perform corresponding behaviors according to the processed user operation information.
  • in this way, the diversity of user operation information can be increased; since different user operation information indicates different behaviors, the diversity of the smart device's behaviors can be increased as well.
  • FIG. 11 is a flowchart of a method for controlling a smart device according to another embodiment of this application. As shown in FIG. 11, the method of this embodiment may include:
  • this step may refer to step S101 shown in FIG. 3, or this step may refer to step S201 shown in FIG. 8, or this step may refer to step S302 shown in FIG. 9, and will not be repeated.
  • S402 Store user operation information, so as to control the execution behavior of the smart device according to the user operation information when a preset trigger condition is met.
  • this step may refer to step S102 shown in FIG. 3, or this step may refer to step S202 shown in FIG. 8, or this step may refer to step S303 shown in FIG. 9, and will not be repeated.
  • the method provided in this embodiment can also perform each step in FIG. 9, and details are not described herein again.
  • S403 Display a behavior play bar on the interactive interface; wherein, during the execution of the behavior, the behavior play bar dynamically changes.
  • the instruction for instructing to process the user operation information is generated according to the user's operation on the behavior play bar and/or the user's operation on the virtual buttons on the interactive interface.
  • the smart device may provide an interactive interface on the smart device or the control device of the smart device.
  • the interactive interface is used to receive instructions from the user, to show the user the behaviors of the smart device, and to edit the user operation information.
  • Figure 12 is a schematic diagram of an interactive interface provided by an embodiment of the application.
  • the behavior of the smart device can be displayed on the interactive interface. To facilitate editing of the user operation information, and to allow the user to selectively watch the behavior represented by the user operation information, a behavior play bar can be provided on the interactive interface; alternatively, the interface can jump to another interactive interface according to a user operation and display a behavior play bar there, or jump to another interactive interface automatically and display a behavior play bar there. Multiple edit frames (behavior frames) are displayed on the behavior play bar; each edit frame corresponds to a behavior, and each behavior corresponds to at least one piece of user operation information. When the smart device executes the behavior represented by the user operation information, the behavior can be displayed on the interactive interface, and the edit frames on the behavior play bar progress frame by frame; that is, as each edit frame is played, the edit frame is marked as played, for example as shown in FIG. 12.
  • the behavior play bar is dynamically changed to mark which behaviors have been executed by the smart device.
  • the behavior play bar is triggered, the behavior represented by the corresponding user operation information may not be displayed on the interactive interface.
  • the smart device can be triggered to execute the corresponding behavior; that is, triggering the behavior play bar causes the smart device to execute the behavior.
  • a preset trigger condition for the corresponding behavior is provided.
  • the behavior represented by the corresponding user operation information displayed on the interactive interface can be a virtual multimedia screen, or it can be a recorded video of the behavior that the smart device executed according to the user operation instruction obtained on the smart device.
  • a play button, a pause button, and an edit button are provided on the interactive interface. The user can touch the play button; the control device then obtains the play instruction and plays the behavior of the smart device represented by the acquired user operation information. The user can touch the pause button; the control device then obtains the pause instruction and pauses playing the behavior of the smart device represented by the acquired user operation information. The user can touch the edit button; the control device then obtains the edit instruction and edits the acquired user operation information according to the user's editing instruction.
  • a behavior play bar is provided on the interactive interface, and multiple edit frames (behavior frames) are displayed on the behavior play bar.
  • Each edit frame corresponds to a behavior, and each behavior corresponds to at least one user operation information;
  • an editing processing instruction is generated. For example, the user touches edit frame 1 on the play bar and then drags edit frame 1 in front of edit frame 2, where edit frame 1 corresponds to user operation information a and edit frame 2 corresponds to user operation information b. Thus, the move processing instruction issued by the user is received, the move processing instruction indicating that user operation information a is to be moved before user operation information b; accordingly, user operation information a is moved before user operation information b according to the move processing instruction.
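The move processing in this example can be sketched as a simple list reorder. The function name and the use of plain strings to stand in for user operation information are assumptions for illustration, not the patented implementation.

```python
# Sketch of "move processing": dragging edit frame 1 in front of edit frame 2
# reorders the underlying sequence of user operation information.

def move_before(ops, src, dst):
    """Move operation info `src` so that it executes before `dst`."""
    ops = [o for o in ops if o != src]   # take src out of the sequence
    ops.insert(ops.index(dst), src)      # re-insert it just before dst
    return ops

# Edit frame 1 corresponds to operation info "a", edit frame 2 to "b".
sequence = ["b", "c", "a"]
print(move_before(sequence, "a", "b"))  # ['a', 'b', 'c']
```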
  • virtual buttons are provided on the interactive interface, for example an "edit button", so that by touching the virtual buttons on the interactive interface the user can input different editing processing instructions.
  • each behavior identifier is used to indicate a collection of behaviors characterized by at least one user operation information.
  • the smart device can combine multiple pieces of user operation information to form a user operation information set; a user operation information set can indicate a series of behaviors of the smart device. Thus, it helps the user to know which pieces of user operation information belong to the same user operation information set, which facilitates the user's editing of the user operation information.
  • the behavior of the smart device can be displayed on the interactive interface
  • the behavior identification of the user operation information set can be displayed on the interactive interface. It can be seen that the behavior identification is also the identification of the behavior collection, and the behavior collection here refers to the above-mentioned series of behaviors.
  • a set of user operation information can indicate a series of behaviors of the smart device; thus, the smart device can complete multiple behaviors, that is, a complete set of behaviors, and the complete set of behaviors completed by the smart device can be recorded and displayed.
  • Figure 13 is the second schematic diagram of the interactive interface provided by the embodiment of the application.
  • the smart device can complete a series of behaviors according to user operation information set 1, that is, complete behavior set 1, and each behavior of behavior set 1 can be displayed on the interactive interface; the smart device can complete a series of behaviors according to user operation information set 2, that is, complete behavior set 2, and each behavior of behavior set 2 can be displayed on the interactive interface; the smart device can complete a series of behaviors according to user operation information set 3, that is, complete behavior set 3, and each behavior of behavior set 3 can be displayed on the interactive interface.
  • a behavior identifier can be configured for each behavior set, for example text, a picture, a graphic, a button, or a frame.
  • the selected behavior identification dynamically changes on the interactive interface.
  • each behavior identifier indicates a behavior set, and the interactive interface may display each behavior identifier. The user can touch a behavior identifier to select it. Then, the smart device can retrieve the behavior set corresponding to the behavior identifier selected by the user and play that behavior set on the interactive interface; or the smart device can retrieve the behavior set corresponding to the behavior identifier selected by the user, edit that behavior set, display the editing process on the interactive interface, and also display the edited behavior set on the interactive interface.
  • the smart device may make one or more dynamic changes to the behavior identification to highlight the behavior identification.
  • the smart device highlights the behavior identification selected by the user, changes its color, or displays it in bold.
  • S406 Acquire a touch command of the user on the interactive interface; according to the sliding direction indicated by the touch command, display the sliding switch of the behavior indicator on the interactive interface.
  • each behavior identifier indicates each behavior set; the interactive interface may display each behavior identifier. Furthermore, the user can select the behavior set corresponding to the behavior identification by selecting the behavior identification; at this time, when the user selects and switches the behavior identification, the smart device can obtain the user's touch instruction. As the user slides on the interactive interface, the smart device can detect the sliding direction indicated by the touch command.
  • the smart device detects the behavior identifier to be switched to as indicated by the user's touch instruction, and then displays the behavior set corresponding to that behavior identifier; and, while the user slides on the interactive interface, the smart device displays the sliding switch of the behavior identifiers on the interactive interface according to the sliding direction indicated by the touch instruction.
  • three behavior identifiers can be displayed on the interactive interface, namely, behavior set 1, behavior set 2, and behavior set 3.
  • the user slides on the interactive interface to indicate switching behavior sets; for example, the user swipes from left to right; thus, the sliding of behavior set 1, behavior set 2, and behavior set 3 is displayed on the interactive interface from left to right.
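The sliding switch between the three behavior set identifiers can be sketched as a toy selection function. The direction semantics (a left-to-right swipe selecting the previous identifier) are an assumption; the text does not fix them.

```python
# Toy sketch of the sliding switch between behavior set identifiers:
# swiping moves the current selection through the list of identifiers.

def slide(identifiers, current, direction):
    """Return the identifier selected after a swipe ('left' or 'right')."""
    i = identifiers.index(current)
    step = -1 if direction == "right" else 1  # assumed: swipe right -> previous set
    return identifiers[(i + step) % len(identifiers)]

sets = ["behavior set 1", "behavior set 2", "behavior set 3"]
print(slide(sets, "behavior set 1", "left"))  # behavior set 2
```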
  • an interactive interface may also be provided.
  • the interactive interface is used to receive instructions from the user, and the interactive interface is used to show the user the actions of the smart device and edit the user's operation information.
  • the behavior and behavior sets of the smart device can be displayed on the interactive interface, and the switching status of the behavior sets can also be displayed, so that the user knows which behavior of the smart device is currently displayed, and the acquired and edited behaviors are displayed intuitively for the user.
  • users can perform operations on the interactive interface to edit the user operation information; users can also build behavior logic through programming, which can increase the diversity of the user operation information and of the behaviors of the smart device.
  • FIG. 14 is a flowchart of a method for controlling a smart device according to another embodiment of the application. The method may be applied to a control device of a smart device. As shown in FIG. 14, the method of this embodiment may include:
  • S502 Send the user operation information to the smart device, so that the smart device stores the user operation information, and when a preset trigger condition is met, perform an action according to the user operation information.
  • the execution subject of this embodiment may be a control device of a smart device.
  • the smart device may be a movable platform, smart robot, drone, and so on.
  • the control device can be a mobile terminal, a remote control device, and so on.
  • the user operation information includes any one or more of the following: touch information, physical control operation information, somatosensory information, and voice information.
  • the user operation information includes behavior parameters used to indicate behavior; the behavior parameters include any one or more of the following: movement parameters, pose parameters, shooting parameters, skill parameters, and audio parameters.
  • each user operation instruction further includes an instruction identifier, and the instruction identifier is used to identify the user operation instruction.
  • step 501 includes: acquiring multiple user operation instructions.
  • Step 502 includes: sending a user operation information set formed by user operation information corresponding to multiple user operation instructions to the smart device.
  • the generation time of the multiple user operation instructions is continuous; wherein, when the preset trigger condition is satisfied, the execution sequence of the behavior represented by each user operation information is the same as the generation sequence of the multiple user operation instructions.
  • the method provided in this embodiment further includes: in the process of acquiring multiple user operation instructions, if a pause instruction is received, pausing the recording of the user operation information.
  • the generation time of at least part of the user operation instruction is discontinuous.
  • the method provided in this embodiment further includes: obtaining a start instruction, where the start instruction is used to instruct the start of recording of the user operation information.
  • the method provided in this embodiment further includes: obtaining an end instruction, where the end instruction is used to instruct the end of the recording of the user's operation information.
  • the end instruction is generated according to the time indicated by the start instruction and a preset duration.
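One plausible reading of how the end instruction is derived from the start instruction plus a preset duration, sketched with Python's standard `datetime` module; the function name and second-granularity duration are assumptions, not stated in the text.

```python
# Sketch: the end instruction's time = start instruction's time + preset duration.

from datetime import datetime, timedelta

def end_time(start_instruction_time, preset_duration_s):
    """Recording ends preset_duration_s seconds after the start instruction."""
    return start_instruction_time + timedelta(seconds=preset_duration_s)

start = datetime(2019, 11, 28, 12, 0, 0)
print(end_time(start, 30))  # 2019-11-28 12:00:30
```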
  • the method provided in this embodiment further includes: editing at least one piece of user operation information to obtain processed user operation information, so as to control the smart device to execute the behavior represented by the processed user operation information when the preset trigger condition is satisfied.
  • the processed user operation information may be sent to the smart device. That is, after the user operation information is recorded, the recorded user operation information can be edited and then stored on the smart device side.
  • the editing process includes any one or more of the following:
  • move processing, which is used to instruct adjusting the execution order of the behaviors represented by the user operation information; delete processing, which is used to instruct deleting user operation information; add processing, which is used to instruct adding at least one piece of user operation information to form a user operation information set; superimposition processing, which is used to instruct associating the execution order of the behaviors represented by at least two pieces of user operation information; zoom processing, which is used to instruct that the execution time of the behavior represented by the user operation information be lengthened or shortened; cropping processing, which is used to instruct that the behavior represented by the user operation information be divided into multiple sub-behaviors for execution, or that the behavior represented by the user operation information be executed incompletely.
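Two of the edits listed above, zoom processing and cropping processing, can be illustrated on a behavior represented as timestamped keyframes. The keyframe representation is an assumption for illustration; the application does not specify how behaviors are stored.

```python
# "Zoom processing" stretches or shortens a behavior's execution time;
# "cropping processing" splits a behavior into sub-behaviors.

def zoom(keyframes, factor):
    """Scale each (time, value) keyframe's time by `factor` (>1 lengthens)."""
    return [(t * factor, v) for t, v in keyframes]

def crop(keyframes, split_time):
    """Divide a behavior into two sub-behaviors at `split_time`."""
    first = [(t, v) for t, v in keyframes if t < split_time]
    second = [(t, v) for t, v in keyframes if t >= split_time]
    return first, second

behavior = [(0.0, "start"), (1.0, "turn"), (2.0, "stop")]
print(zoom(behavior, 2.0))  # [(0.0, 'start'), (2.0, 'turn'), (4.0, 'stop')]
print(crop(behavior, 1.5))  # ([(0.0, 'start'), (1.0, 'turn')], [(2.0, 'stop')])
```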
  • the method provided in this embodiment further includes: displaying a behavior play bar on the interactive interface; wherein, during the execution of the behavior, the behavior play bar dynamically changes.
  • the control device can obtain the corresponding behavior from the smart device.
  • when the user operation information is not stored on the control device side of the smart device, and the user operation information needs to be output on the interactive interface of the control device, the control device can obtain the corresponding user operation information from the smart device.
  • the instruction for instructing to process the user operation information is generated according to the user's operation on the behavior play bar and/or the user's operation on the virtual buttons on the interactive interface.
  • the method provided in this embodiment further includes: displaying at least one behavior identifier on the interactive interface, and each behavior identifier is used to indicate a collection of behaviors characterized by at least one user operation information.
  • the method provided in this embodiment further includes: when one of the at least one behavior identification is selected, the selected behavior identification dynamically changes on the interactive interface.
  • the method provided in this embodiment further includes: acquiring the user's touch instruction on the interactive interface; and displaying the sliding switch of the behavior indicator on the interactive interface according to the sliding direction indicated by the touch instruction.
  • the method provided in this embodiment further includes: before the behavior is executed, when the preset trigger condition is a specific trigger condition, detecting whether the smart device meets the preset condition for executing the behavior; if not, performing the corresponding operation so that the preset condition for executing the behavior is met.
  • the method provided in this embodiment further includes: when the user operation instruction is obtained, controlling the smart device to execute the behavior according to the user operation information.
  • the smart device includes a movable platform.
  • FIG. 15 is a schematic structural diagram of a control apparatus for a smart device according to an embodiment of the application.
  • the control apparatus 600 for a smart device provided in this embodiment may include a memory 601 and a processor 602.
  • the memory 601 is used to store program codes.
  • the processor 602 is used to call program code.
  • when the program code is executed, it is used to perform the following operations: obtain a user operation instruction, the user operation instruction including user operation information used to characterize the behavior of the smart device; and store the user operation information, so as to control the smart device to execute the behavior according to the user operation information when a preset trigger condition is met.
  • the control apparatus can belong to the smart device (optionally, it can be installed on the smart device or be independent of the smart device), or to the control device of the smart device (optionally, it can be installed on the control device or be independent of the control device).
  • the processor 602 is configured to call program code.
  • when the program code is executed, it is configured to perform the following operations: obtain a user operation instruction, the user operation instruction including user operation information used to characterize the behavior of the smart device; and send the user operation information to the smart device, so that the smart device stores the user operation information and executes the behavior according to the user operation information when a preset trigger condition is met.
  • the user operation information includes any one or more of the following: touch information, physical control operation information, somatosensory information, and voice information.
  • the user operation information includes behavior parameters for indicating behavior; the behavior parameters include any one or more of the following: movement parameters, pose parameters, shooting parameters, skill parameters, and audio parameters.
  • the number of user operation instructions is multiple, and each user operation instruction further includes an instruction identifier, and the instruction identifier is used to identify the user operation instruction.
  • the processor 602 is specifically configured to: obtain a plurality of user operation instructions; associate and store user operation information corresponding to the plurality of user operation instructions to obtain a set of user operation information, so that when a preset trigger condition is met, according to The user operation information set controls the smart device to perform the behavior represented by each user operation information.
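The association, storage, and ordered replay described above can be sketched as follows. The `OperationStore` class, instruction identifiers as sortable integers, and the callback-style `execute` are illustrative assumptions rather than the claimed implementation.

```python
# Sketch: associate user operation information from multiple instructions into
# one set, then execute each behavior in generation order once the preset
# trigger condition is met.

class OperationStore:
    def __init__(self):
        self.sets = {}

    def record_set(self, set_id, instructions):
        # Each instruction carries an identifier and its operation info;
        # associate the infos in generation order under one set.
        ordered = sorted(instructions, key=lambda i: i["id"])
        self.sets[set_id] = [i["info"] for i in ordered]

    def replay(self, set_id, execute):
        # Execution order matches the generation order of the instructions.
        for info in self.sets[set_id]:
            execute(info)

store = OperationStore()
store.record_set("wave", [{"id": 2, "info": "raise arm"},
                          {"id": 1, "info": "turn head"}])
executed = []
store.replay("wave", executed.append)
print(executed)  # ['turn head', 'raise arm']
```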
  • the processor 602 is specifically configured to: obtain multiple user operation instructions; and send a user operation information set formed by user operation information corresponding to the multiple user operation instructions to the smart device.
  • the generation time of the multiple user operation instructions is continuous; wherein, when the preset trigger condition is satisfied, the execution sequence of the behavior represented by each user operation information is the same as the generation sequence of the multiple user operation instructions.
  • the processor 602 is further configured to: in the process of acquiring multiple user operation instructions, if a pause instruction is received, pause the recording of the user operation information.
  • the generation time of at least part of the user operation instruction is discontinuous.
  • the processor 602 is further configured to: before obtaining the user operation instruction, obtain a start instruction, the start instruction being used to instruct the start of recording of the user operation information.
  • the processor 602 is further configured to: obtain an end instruction, which is used to instruct the end of the recording of the user operation information.
  • the end instruction is generated according to the time indicated by the start instruction and a preset duration.
  • the processor 602 is further configured to:
  • Editing processing is performed on at least one piece of user operation information to obtain processed user operation information, so as to control the smart device to execute the behavior represented by the processed user operation information when the preset trigger condition is satisfied.
  • the editing process includes any one or more of the following:
  • move processing, which is used to instruct adjusting the execution order of the behaviors represented by the user operation information; delete processing, which is used to instruct deleting user operation information; add processing, which is used to instruct adding at least one piece of user operation information to form a user operation information set; superimposition processing, which is used to instruct associating the execution order of the behaviors represented by at least two pieces of user operation information; zoom processing, which is used to instruct that the execution time of the behavior represented by the user operation information be lengthened or shortened; cropping processing, which is used to instruct that the behavior represented by the user operation information be divided into multiple sub-behaviors for execution, or that the behavior represented by the user operation information be executed incompletely.
  • the processor 602 is further configured to: in the process of acquiring multiple user operation instructions, if a pause instruction is received, pause the storage of the user operation information.
  • the control apparatus further includes: a display 603; the display 603 is used to display a behavior play bar on the interactive interface; wherein, during the execution of the behavior, the behavior play bar dynamically changes.
  • the instruction for instructing to process the user operation information is generated according to the user's operation on the behavior play bar and/or the user's operation on the virtual buttons on the interactive interface.
  • the display 603 is further configured to display at least one behavior identifier on the interactive interface, and each behavior identifier is used to indicate a collection of behaviors characterized by at least one user operation information.
  • the display 603 is also used to dynamically change the selected behavior identifier on the interactive interface when one of the at least one behavior identifier is selected.
  • the processor 602 is further configured to obtain the user's touch command on the interactive interface; the display 603 is also configured to display the sliding switch of the behavior indicator on the interactive interface according to the sliding direction indicated by the touch command .
  • the processor 602 is further configured to: before the behavior is executed, when the preset trigger condition is a specific trigger condition, detect whether the smart device meets the preset condition for executing the behavior; if not, perform the corresponding operation so that the preset condition for executing the behavior is met.
  • the processor 602 is further configured to: when the user operation instruction is obtained, control the smart device to execute the behavior according to the user operation information.
  • the smart device includes a movable platform.
  • control device 600 of this embodiment can be used to execute the technical solutions of the aforementioned corresponding method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the embodiment of the present application also provides a computer storage medium; the computer storage medium stores program instructions, and when the program instructions are executed, part or all of the steps of the methods in the aforementioned corresponding embodiments may be performed.
  • FIG. 16 is a schematic structural diagram of a control system for a smart device provided by an embodiment of this application.
  • the system includes a control device 701 and a smart device 702;
  • the control device 701 is configured to obtain a user operation instruction.
  • the user operation instruction includes user operation information used to characterize the behavior of the smart device 702, and the control device 701 sends the user operation information to the smart device 702.
  • the smart device 702 is configured to store user operation information, and when it is determined that a preset trigger condition is met, perform an action according to the user operation information.
  • the user operation information includes any one or more of the following: touch information, physical control operation information, somatosensory information, and voice information.
  • the user operation information includes behavior parameters for indicating behavior; the behavior parameters include any one or more of the following: movement parameters, pose parameters, shooting parameters, skill parameters, and audio parameters.
  • the number of user operation instructions is multiple, and each user operation instruction further includes an instruction identifier, and the instruction identifier is used to identify the user operation instruction.
  • control device 701 is specifically configured to obtain multiple user operation instructions, and send user operation information corresponding to the multiple user operation instructions to the smart device 702.
  • the smart device 702 is specifically configured to associate and store user operation information corresponding to multiple user operation instructions to obtain a user operation information set; and when it is determined that a preset trigger condition is satisfied, perform behaviors represented by each user operation information according to the user operation information set.
  • the generation time of the multiple user operation instructions is continuous; wherein, when the preset trigger condition is satisfied, the execution sequence of the behavior represented by each user operation information is the same as the generation sequence of the multiple user operation instructions.
  • the control device 701 is further configured to: in the process of acquiring multiple user operation instructions, if a pause instruction is received, pause the recording of the user operation information.
  • the generation time of at least part of the user operation instruction is discontinuous.
  • the control device 701 is further configured to: before acquiring the user operation instruction, obtain a start instruction, the start instruction being used to instruct the start of recording of the user operation information.
  • control device 701 is further used to obtain an end instruction, which is used to instruct the end of the recording of the user's operation information.
  • the end instruction is generated according to the time indicated by the start instruction and a preset duration.
  • control device 701 is also used to edit at least one user operation information to obtain processed user operation information.
  • the smart device 702 is also used to execute the behavior represented by the processed user operation information when the preset trigger condition is met.
  • the editing process includes any one or more of the following:
  • move processing, which is used to instruct adjusting the execution order of the behaviors represented by the user operation information; delete processing, which is used to instruct deleting user operation information; add processing, which is used to instruct adding at least one piece of user operation information to form a user operation information set; superimposition processing, which is used to instruct associating the execution order of the behaviors represented by at least two pieces of user operation information; zoom processing, which is used to instruct that the execution time of the behavior represented by the user operation information be lengthened or shortened; cropping processing, which is used to instruct that the behavior represented by the user operation information be divided into multiple sub-behaviors for execution, or that the behavior represented by the user operation information be executed incompletely.
  • control device 701 is also used to: display a behavior play bar on the interactive interface; wherein, during the execution of the behavior, the behavior play bar dynamically changes.
  • the instruction for instructing to process the user operation information is generated according to the user's operation on the behavior play bar and/or the user's operation on the virtual buttons on the interactive interface.
  • control device 701 is further configured to: display at least one behavior identifier on the interactive interface, and each behavior identifier is used to indicate a collection of behaviors characterized by at least one user operation information.
  • control device 701 is further configured to: when one of the at least one behavior identification is selected, the selected behavior identification dynamically changes on the interactive interface.
  • control device 701 is further configured to: obtain the user's touch instruction on the interactive interface; and display the sliding switch of the behavior indicator on the interactive interface according to the sliding direction indicated by the touch instruction.
  • the smart device 702 is also used to: before the behavior is executed, when the preset trigger condition is a specific trigger condition, detect whether the smart device 702 meets the preset condition for executing the behavior; if not, perform the corresponding operation so that the preset condition for executing the behavior is met.
  • the smart device 702 is further configured to: when a user operation instruction is obtained, control the smart device 702 to perform actions according to user operation information.
  • the smart device 702 includes a movable platform.
  • control device 701 can refer to the structure and implementation manner of the above-mentioned related embodiment
  • smart device 702 can refer to the structure and implementation manner of the above-mentioned related embodiment.
  • the implementation principles and technical effects are similar and will not be repeated here.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • the foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Selective Calling Equipment (AREA)

Abstract

一种智能设备的控制方法、装置、系统和存储介质,该方法包括:获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息;存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。由于用户操作指令中的用户操作信息是用户自己输入的,用户操作信息所表征的行为,可以使得智能设备的行为和动作更加准确、精准、自然、有活力;从而,在需要触发智能设备进行行为和动作的时候,可以调取存储的用户操作信息,根据用户操作信息控制智能设备执行相应的行为,此时,智能设备的行为和动作会更加准确、精准、自然、有活力。

Description

智能设备的控制方法、装置、系统和存储介质 技术领域
本申请实施例涉及智能设备技术领域,尤其涉及一种智能设备的控制方法、装置、系统和存储介质。
背景技术
近年来,随着智能设备的发展和进步,智能设备开始应用到多个领域中。智能设备,例如机器人,可以做出一些行为和动作。
现有技术中,可以在智能设备中存储智能算法,智能设备在检测到用户操作、或在进行自动触发的时候,智能设备可以做出一些行为和动作。
然而现有技术中,智能设备在根据智能算法做出行为和动作的时候,智能设备的动作比较僵硬、不流畅,从而会导致智能设备的行为和动作不够准确,智能设备完成动作的精准度低。
发明内容
本申请实施例提供一种智能设备的控制方法、装置、系统和存储介质,以使得智能设备的行为和动作准确,提高智能设备完成动作的精准度,并且,使得智能设备的动作不僵硬、具有活力。
第一方面,本申请实施例提供一种智能设备的控制方法,包括:
获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息;
存储所述用户操作信息,以在满足预设触发条件时,根据所述用户操作信息控制所述智能设备执行所述行为。
第二方面,本申请实施例提供一种智能设备的控制方法,应用于所述智能设备的控制设备,所述方法包括:
获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息;
将所述用户操作信息发送给所述智能设备,以使所述智能设备存储所述用户操作信息,并在满足预设触发条件时,根据所述用户操作信息执行所述 行为。
第三方面,本申请实施例提供一种智能设备的控制装置,包括:处理器和存储器;
所述存储器,用于存储程序代码;
所述处理器,用于调用所述程序代码,当程序代码被执行时,用于执行以下操作:
获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息;
存储所述用户操作信息,以在满足预设触发条件时,根据所述用户操作信息控制所述智能设备执行所述行为。
第四方面,本申请实施例提供一种智能设备的控制系统,包括:控制设备和所述智能设备;
所述控制设备,用于获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息,并将所述用户操作信息发送给所述智能设备;
所述智能设备,用于存储所述用户操作信息,并在确定满足预设触发条件时,根据所述用户操作信息执行所述行为。
第五方面,本申请实施例提供一种可读存储介质,所述可读存储介质上存储有计算机程序;所述计算机程序在被执行时,实现如第一方面本申请实施例所述的智能设备的控制方法,或者实现如第二方面本申请实施例所述的智能设备的控制方法。
第六方面,本申请实施例提供一种程序产品,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,智能设备的控制设备的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得智能设备的控制设备实施如第一方面本申请实施例所述的智能设备的控制方法,或者实施如第二方面本申请实施例所述的智能设备的控制方法。
本申请实施例提供的智能设备的控制方法、装置、系统和存储介质,通过获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息;存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。从而获取到用户对于智能设备的用户操作指令,进而存储用户操作指令中的用户操作信息;检测到满足预设触发条件时,调取存储的用户操作信息,以控制智能设备执行用户操作信息表征的行为。由于用户操作指令中的用户操作信息是用户自己输入的,用户操作信息所表征的行为,可以使得智能设备的行为和动作更加准确、精准、自然、有活力;从而,在需要触发智能设备进行行为和动作的时候,可以调取存储的用户操作信息,根据用户操作信息控制智能设备执行相应的行为,此时,智能设备的行为和动作会更加准确、精准、自然、有活力。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请提供的应用场景示意图一;
图2为本申请提供的应用场景示意图二;
图3为本申请一实施例提供的智能设备的控制方法的流程图;
图4为本申请提供的用户操作信息的输入示意图;
图5为本申请提供的无人车的行为示意图;
图6为本申请提供的无人车的控制设备的动作示意图一;
图7为本申请提供的无人车的控制设备的动作示意图二;
图8为本申请另一实施例提供的智能设备的控制方法的流程图;
图9为本申请又一实施例提供的智能设备的控制方法的流程图;
图10为本申请提供的编辑处理的示意图;
图11为本申请再一实施例提供的智能设备的控制方法的流程图;
图12为本申请实施例提供的交互界面的示意图一;
图13为本申请实施例提供的交互界面的示意图二;
图14为本申请其他一实施例提供的智能设备的控制方法的流程图;
图15为本申请一实施例提供的智能设备的控制装置的结构示意图;
图16为本申请另一实施例提供的智能设备的控制系统的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中在本申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本申请。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
下面结合附图,对本申请的一些实施方式作详细说明。在不冲突的情况下,下述的实施例及实施例中的特征可以相互组合。
以下对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。需要说明的是,当本申请实施例的方案应用于智能设备,或者智能设备的控制设备上,控制设备、智能设备的控制设备的名称可能发生变化,但这并不影响本申请实施例方案的实施。
1)可移动平台,包括但不限于无人机、智能机器人。
2)“多个”是指两个或两个以上,其它量词与之类似。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
3)“对应”可以指的是一种关联关系或绑定关系,A与B相对应指的是A与B之间是一种关联关系或绑定关系。
需要指出的是,本申请实施例中涉及的名词或术语可以相互参考,不再赘述。
本申请的实施例提供了一种智能设备的控制方法、装置、系统和存储介质。图1为本申请提供的应用场景示意图一,图2为本申请提供的应用场景示意图二,如图1-图2所示,智能设备的控制方法可以应用到智能设备、或者智能设备的控制设备上。智能设备可以为可移动平台;可移动平台包括但不限于无人机、智能机器人、无人车等。例如,图1为无人机;图2为智能机器人。
智能设备的控制方法还可以应用到任意设备或者***上,进而完成本申请提供的智能设备的控制方法。
应理解,上述对于设备的各组成部分的命名仅是出于标识的目的,并不应理解为对本申请的实施例的限制。
现有技术中,可以在智能设备中存储智能算法,智能设备在检测到用户操作、或在进行自动触发的时候,智能设备可以做出一些行为和动作。但是,智能设备在根据智能算法做出行为和动作的时候,智能设备的动作比较僵硬、不流畅,从而会导致智能设备的行为和动作不够准确,智能设备完成动作的精准度低。
本实施例提供的智能设备的控制方法、装置、系统和存储介质,可以解决上述问题。
图3为本申请一实施例提供的智能设备的控制方法的流程图,如图3所示,本实施例的方法可以包括:
S101、获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息。
本实施例中,本实施例的执行主体可以是智能设备,也可以是智能设备的控制设备。其中,智能设备可以是智能机器人、无人机、无人车等等可以具有姿态和/或位置变化动作的设备。例如,智能设备为无人机,无人机为一种可移动平台;或者,智能设备上设置有可移动平台,即,智能设备包括可移动平台。
在本实施例中,以执行主体为无人车的智能设备为例进行说明。该无人车可以包括底盘以及设于底盘上的云台,底盘可以由诸如万向轮驱动,云台上可以安装有射击装置、播音器、摄像头等中的至少一种。其中,云台可以为单轴云台、二轴云台或三轴云台等。
在用户操作智能设备进行相关的行为的时候,用户可以通过控制器向智能设备发送用户操作指令,或者,用户直接向智能设备发送用户操作指令; 进而,用户可以通过用户操作指令,控制智能设备进行一系列的行为。上述控制器,可以是与智能设备连接的遥控设备,也可以是智能设备上的控制器。
为了使得智能设备可以进行一系列的行为,用户操作指令中可以包括有一个或多个用户操作信息。一个用户操作信息,用于指示智能设备进行一个或多个行为。智能设备所进行的行为,例如是射击动作、移动动作、加速动作、减速动作。
一个示例中,用户可以采用以下方式中的一种或多种,向智能设备中输入用户操作信息:触控方式、物理控件方式、体感触发方式、语音触发方式。从而,用户操作信息包括以下的任意一种或多种:触控信息、物理控件操作信息、体感信息、语音信息。
举例来说,以执行主体为智能设备进行示例性说明,在智能设备上设置有触控器,例如,触控器为智能显示屏;用户可以触控该触控器,进而向触控器中输入触控信息,触控信息指示出了智能设备需要执行的行为;触控器将触控信息发送给智能设备;进而,智能设备根据触控信息,执行触控信息所指示的行为。
再举例来说,图4为本申请提供的用户操作信息的输入示意图,如图4所示,以执行主体为控制设备进行示例性说明,提供了一个遥控设备,遥控设备的交互界面上可以显示出触控按键,遥控设备与智能设备之间可以进行通信;如图4所示,遥控设备显示了2个虚拟摇杆;用户可以触控虚拟摇杆,进而向遥控设备中输入触控信息,触控信息指示出了智能设备需要执行的行为;遥控设备将触控信息发送给智能设备;进而,智能设备根据触控信息,执行触控信息所指示的行为。其中,控制设备可以包括但不限于手机等移动终端。
示例性地,在用户操作智能设备的时候,智能设备可以获取到用户操作指令,用户操作指令中包括一个或多个用户操作信息;为了便于对用户操作信息进行级联,并且,为了便于指示出智能设备的行为,用户操作信息中可以包括行为参数,行为参数用于指示智能设备的行为。例如,一个行为参数指示出一个行为,或者多个行为参数指示出一个行为。示例性地,行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
举例来说,用户操作智能设备进行移动和射击,进而需要智能设备完成移动行为和射击行为,此时,可以获取到移动参数和射击参数。例如,移动参数包括但不限于移动距离、移动速度、移动的加速度,射击参数包括但不限于射击角度、射程距离。
再举例来说,用户操作智能设备进行肢体活动,进而需要智能设备完成肢体活动行为,例如,智能设备上安装有机械臂;此时,智能设备可以获取到机械臂的位姿参数。例如,位姿参数包括但不限于智能设备的机械臂的位置、智能设备的机械臂的移动速度。
又举例来说,用户操作智能设备进行各类技能的操作,需要智能设备完成相关的技能;此时,智能设备可以获取到技能参数,技能参数可以是一种特殊功能对应的参数,也可以是某一些功能的使能或失能。例如,技能包括但不限于跳跃、射击、拍摄、红外发射、连续转圈等等。技能参数包括但不限于技能类别、技能的各类参数。例如,跳跃技能的技能参数包括但不限于跳跃速度、跳跃距离。例如,拍摄技能的技能参数包括但不限于拍摄范围、拍摄对象、拍摄时间、拍摄角度。例如,红外技能的技能参数包括但不限于红外线的范围、红外线的角度。
再举例来说,用户操作智能设备发出不同的语音,需要智能设备发出相关的语音;此时,智能设备可以获取到语音参数。语音参数包括但不限于音量、语音内容、角色类型。
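作为上述各类行为参数的一种示意性组织方式,下面给出一段极简的 Python 草图(其中 UserOpInfo 及各字段名均为便于说明而假设的,并非本申请限定的实现):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class UserOpInfo:
    """一条用户操作信息:以行为类型与行为参数表征智能设备的一个行为。"""
    behavior: str                       # 行为类型,例如 "move"(移动)、"shoot"(射击)
    params: Dict[str, Any] = field(default_factory=dict)  # 行为参数

# 移动行为:移动参数包括移动距离、移动速度、移动的加速度
move = UserOpInfo("move", {"distance_m": 2.0, "speed_mps": 0.5, "accel_mps2": 0.2})
# 射击行为:射击参数包括射击角度、射程距离
shoot = UserOpInfo("shoot", {"angle_deg": 30.0, "range_m": 5.0})
```

一个用户操作信息也可以同时携带多类行为参数,此处仅示意其结构。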
S102、存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。
本实施例中,由于智能设备获取到了用户操作信息,智能设备可以将用户操作信息进行存储,进而智能设备存储下了在用户操作指令下的智能设备的行为。
一个示例中,智能设备可以检测外部的触发条件。
一个示例中,智能设备可以检测用户的主动触发。例如,用户通过语音、触控、位姿等方式发出触发指令。
一个示例中,智能设备可以对智能设备自身的状态信息进行检测,或对智能设备所识别到的环境信息进行检测,以确定智能设备的状态是否满足触发条件。此时,智能设备的状态包括但不限于:智能设备的语音状态、智能设备的位置状态、智能设备的位姿状态、智能设备的技能状态。例如,检测智能设备的速度是否满足预设速度,或者检测智能设备的位姿是否为预设位姿,或者检测智能设备的位置是否在预设位置范围之内。
在确定满足预设触发条件的时候,智能设备就可以根据存储的用户操作信息,进而控制智能设备执行预设触发条件所表征的行为。
示例性地,预设触发条件可以为预设用户操作,例如,预设用户操作可以是用户的预设动作、用户的预设位姿、用户的预设音频。或者,预设触发条件可以为外部环境条件,例如,外部环境条件可以是预设环境温度、预设环境湿度、预设环境噪音、预设环境标识。或者,预设触发条件可以为智能设备的预设状态,例如,智能设备的预设状态可以是智能设备的预设位置、智能设备的预设运动状态、智能设备的预设位姿状态。
举例来说,在用户对智能设备进行操作的时候,智能设备若获取到用户操作指令,则可以执行用户所指示的行为,并且智能设备可以获取并存储用户操作指令中的用户操作信息;然后,用户可以执行一定的动作,智能设备在确定用户的动作是在指示智能设备需要执行一定的行为的时候,智能设备调取与用户的动作对应的用户操作信息;然后,智能设备依据与用户的动作对应的用户操作信息,完成用户操作信息所指示的动作。
再举例来说,在用户通过控制设备对智能设备进行操作的时候,智能设备根据用户操作指令,控制智能设备执行用户所指示的行为,并且智能设备可以存储用户操作指令中的用户操作信息;然后,智能设备可以检测智能设备的状态,智能设备在确定智能设备的状态满足一定条件的时候,例如,智能设备确定智能设备的位置在预设位置、或者智能设备确定智能设备的位姿是预设位姿,然后,智能设备调取与智能设备的状态对应的用户操作信息;然后,智能设备根据与智能设备的状态对应的用户操作信息,进而控制智能设备执行用户操作信息所指示的行为。另外,在执行主体为控制设备的时候,控制设备将与智能设备的状态对应的用户操作信息发送给智能设备,使得智能设备执行用户操作信息所指示的行为,且智能设备和/或控制设备可以存储用户操作信息。
举例来说,用户可以采用图4所示的遥控设备的虚拟摇杆,向智能设备输入用户操作指令,以控制无人车(智能设备)的行为。左虚拟摇杆可以控制无人车的底盘;其中,底盘控制无人车进行前后左右运动;例如,左虚拟摇杆控制无人车的移动方向、加速度。右虚拟摇杆控制无人车的云台,例如,右虚拟摇杆控制云台的俯仰Pitch、偏航Yaw;由于云台上具有射击装置,从而,右虚拟摇杆在控制云台的时候,可以间接地控制云台上的射击装置的射击角度。遥控设备中可以设置有惯性测量单元(Inertial measurement unit,简称IMU),IMU是测量物体三轴姿态角(或角速率)以及加速度的装置;从而,用户可以利用遥控设备中的IMU,输入用户操作指令,即,利用遥控设备的姿态变化来输入操作指令,进而去控制智能设备。在用户输入用户操作指令的时候,遥控设备可以根据用户操作指令中的用户操作信息,控制无人车的行为,并且,遥控设备可以利用对应Pitch/Yaw的操作控制无人车的云台的动作。并且,遥控设备可以存储用户操作指令中的用户操作信息。
例如,图5为本申请提供的无人车的行为示意图,图6为本申请提供的无人车的控制设备的动作示意图一,图7为本申请提供的无人车的控制设备的动作示意图二,如图5-图7所示,用户可以通过控制设备向无人车发出用户操作指令,用户操作指令用于指示无人车的底盘进行移动,并且用户操作指令用于指示无人车的云台进行俯仰和/或旋转。如图6所示,通过将控制设备进行左右水平移动时(参照水平方向的箭头),可以控制无人车的底盘进行左右移动,通过将控制设备进行前后移动时(参照竖直方向的箭头),可以控制无人车的底盘进行前后移动。如图7所示,通过将控制设备进行左右翻转(参照水平方向的箭头),可以控制无人车的云台进行偏航移动,通过将控制设备进行上下翻转(参照竖直方向的箭头),可以控制无人车的云台进行俯仰移动。在这个过程中,可以存储下用户发出的用户操作指令中的用户操作信息,在满足预设触发条件时,根据存储的用户操作信息控制无人车的底盘进行移动、或者控制无人车上的云台进行俯仰和/或偏航旋转。
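上述控制设备姿态到底盘/云台动作的映射关系,可用如下 Python 草图示意(map_control 及映射系数 0.01 均为说明用的假设值,实际比例取决于具体产品的标定):

```python
def map_control(shift_lr_deg: float, shift_fb_deg: float,
                flip_lr_deg: float, flip_ud_deg: float) -> dict:
    """将控制设备的平移/翻转量映射为底盘速度与云台角度(系数为示意值)。"""
    return {
        "chassis_vy": shift_lr_deg * 0.01,   # 左右水平移动 -> 底盘左右移动
        "chassis_vx": shift_fb_deg * 0.01,   # 前后移动 -> 底盘前后移动
        "gimbal_yaw": flip_lr_deg,           # 左右翻转 -> 云台偏航(Yaw)
        "gimbal_pitch": flip_ud_deg,         # 上下翻转 -> 云台俯仰(Pitch)
    }
```

在录制过程中,每次映射得到的控制量即可作为一条用户操作信息被存储,供满足预设触发条件时回放。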
再举例来说,用户向智能设备发出语音,进而智能设备根据接收到的语音,控制智能设备进行相应的行为。例如,智能设备可以根据用户发出的语音发出语音回应,或者,智能设备可以根据用户发出的语音进行移动,或者,智能设备可以根据用户发出的语音进行相应的技能。并且,智能设备可以对语音进行语义分析,智能设备可以得到语音中的用户操作信息;智能设备可以存储用户操作信息。然后,智能设备检测是否接收到了触发条件,例如, 用户再次发出语音,从而智能设备可以根据当前接收到的语音确定达到了触发条件;然后,智能设备调取存储的用户操作信息,智能设备执行与用户操作信息对应的行为。
一个示例中,还可以包括以下步骤:在获取到用户操作指令时,根据用户操作信息,控制智能设备执行行为。
本实施例中,在获取到上述用户操作指令的时候,智能设备还可以根据用户操作指令中的用户操作信息,进而控制智能设备执行与用户操作信息所对应的行为。在获取到用户操作指令时,实时控制智能设备执行与用户操作指令中的用户操作信息表征的行为,可以通过实体化的呈现,使得用户实时确认用户操作信息的存储是否是合乎预期的,从而可以根据实际的展现结果进行纠错或用户操作信息的进一步处理。
用户操作指令中包括一个或多个用户操作信息;一个用户操作信息,用于指示智能设备进行一个或多个行为。
举例来说,用户操作智能设备,进而智能设备获取到用户操作信息,用户操作信息中包括射击参数;智能设备可以根据用户操作信息中的射击参数,控制智能设备进行射击。
本实施例,通过获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息;存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。从而获取到用户对于智能设备的用户操作指令,进而存储用户操作指令中的用户操作信息;检测到满足预设触发条件时,调取存储的用户操作信息,以控制智能设备执行用户操作信息表征的行为。由于用户操作指令中的用户操作信息是用户自己输入的,用户操作信息所表征的行为,可以使得智能设备的行为和动作更加准确、精准、自然、有活力;从而,在需要触发智能设备进行行为和动作的时候,可以调取存储的用户操作信息,根据用户操作信息控制智能设备执行相应的行为,此时,智能设备的行为和动作会更加准确、精准、自然、有活力。
图8为本申请另一实施例提供的智能设备的控制方法的流程图,如图8所示,本实施例的方法可以包括:
S201、获取多个用户操作指令,每一个用户操作指令包括用于表征智能设备的行为的用户操作信息。
一个示例中,用户操作指令的个数为多个,每一个用户操作指令还包括指令标识,指令标识用于标识用户操作指令。
一个示例中,多个用户操作指令的生成时间是连续的;其中,在满足预设触发条件时,各个用户操作信息表征的行为的执行顺序与多个用户操作指令的生成顺序相同。或者,至少部分用户操作指令的生成时间是不连续的。
本实施例中,本实施例的执行主体可以是智能设备,也可以是智能设备的控制设备。其中,智能设备可以是可移动平台。本实施例的执行主体同上述实施例,仍以智能设备为例进行说明。
在用户操作智能设备进行一定行为和动作的时候,用户会向智能设备输入用户操作指令,具体的,在需要智能设备进行多个行为或者进行一系列行为的时候,一个用户操作指令并不能使得智能设备完成多个行为,从而,用户可以向智能设备输入多个用户操作指令。由于需要存储多个用户操作指令,为了便于对多个用户操作指令进行处理和存储,每一个用户操作指令可以具有各自的指令标识,进而指令标识可以标识出用户操作指令。
每一个用户操作指令中包括用户操作信息,用户操作信息表征出智能设备的行为。
每一个用户操作信息中可以包括行为参数;行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。其中,移动参数包括但不限于速度、加速度、位移。位姿参数包括但不限于位姿位置、位姿状态。射击参数包括但不限于射程、射击角度、射击速度。技能参数包括但不限于移动技能、跳动技能、射击技能、红外技能、变焦技能。音频参数包括但不限于播放参数、语音参数、音量参数。
一个示例中,用户可以连续地操作智能设备,使得智能设备进行一系列的行为;此时,用户连续的向智能设备发出用户操作指令;从而,获取到的用户操作指令在时间上是连续的,即,获取到的多个用户操作指令的生成时间是连续的。由于用户输入一个用户操作指令,智能设备根据用户操作指令中的用户操作信息执行一个行为,从而,在步骤S202中,在存储获取到的多个用户操作指令中的用户操作信息之后,在满足预设触发条件时,需要根据用户操作指令的生成时间的生成顺序,依次执行用户操作信息表征的行为, 即,各个用户操作信息表征的行为的执行顺序与多个用户操作指令的生成顺序相同。
举例来说,用户输入用户操作指令A,用户操作指令A中包括用户操作信息a和用户操作信息b,用户操作指令A的生成时间为时间1;此时,智能设备根据用户操作指令A中的各用户操作信息,执行一个行为1。然后,用户输入用户操作指令B,用户操作指令B中包括用户操作信息c和用户操作信息d,用户操作指令B的生成时间为时间2;此时,智能设备根据用户操作指令B中的各用户操作信息,执行一个行为2。然后,用户输入用户操作指令C,用户操作指令C中包括用户操作信息e,用户操作指令C的生成时间为时间3;此时,智能设备根据用户操作指令C中的各用户操作信息,执行一个行为3。时间1、时间2、时间3在时间次序上依次的,并且,时间1、时间2、时间3在时间上是连续的。然后,在满足预设触发条件时,可以调取存储的用户操作信息a、用户操作信息b、用户操作信息c、用户操作信息d和用户操作信息e;然后,控制智能设备依次执行行为1、行为2、行为3。
另一个示例中,用户可以不连续地操作智能设备,使得智能设备进行一系列的行为;此时,用户可以间隔地向智能设备发出用户操作指令,即,在智能设备发出用户操作指令的期间,可以暂停对智能设备的操作;从而,智能设备获取到的至少部分的用户操作指令在时间上是不连续的。或者,用户在向智能设备中输入多个用户操作指令,使得智能设备进行一系列的行为;然后,智能设备对多个用户操作指令进行次序上的调整,从而,智能设备获取到的至少部分的用户操作指令在时间上是不连续的。从而,在步骤S202中,在存储获取到的多个用户操作指令中的用户操作信息之后,在满足预设触发条件时,智能设备不需要根据用户操作指令的生成时间的生成顺序,依次执行用户操作信息表征的行为,智能设备可以根据所存储的用户操作信息的排布次序,依次执行用户操作信息表征的行为。
举例来说,用户输入用户操作指令A,用户操作指令A中包括用户操作信息a和用户操作信息b,用户操作指令A的生成时间为时间1;此时,智能设备根据用户操作指令A中的各用户操作信息,执行一个行为1。然后,用户输入用户操作指令B,用户操作指令B中包括用户操作信息c和用户操作信息d,用户操作指令B的生成时间为时间2;此时,智能设备根据用户操作指令B中的各用户操作信息,执行一个行为2。然后,用户输入用户操作指令C,用户操作指令C中包括用户操作信息e,用户操作指令C的生成时间为时间3;此时,智能设备根据用户操作指令C中的各用户操作信息,执行一个行为3。时间1、时间2、时间3在时间次序上是依次的,并且,时间1、时间2、时间3在时间上是连续的。然后,可以对用户操作指令A、用户操作指令B、用户操作指令C的次序进行调整,调整后的次序为用户操作指令A、用户操作指令C、用户操作指令B;从而,智能设备存储的多个用户操作指令在生成时间上是不连续的。然后,在满足预设触发条件时,智能设备可以调取存储的用户操作信息a、用户操作信息b、用户操作信息e、用户操作信息c和用户操作信息d;然后,控制智能设备依次执行行为1、行为3、行为2。
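上述按生成顺序回放、以及调整次序后回放的逻辑,可用如下 Python 草图示意(record、playback 等名称为假设,以递增序号模拟生成时间):

```python
import itertools

_counter = itertools.count()   # 以递增序号模拟用户操作指令的生成时间
recorded = []                  # [(生成序号, 用户操作信息)]

def record(op_info):
    """录制一条用户操作信息,并记下其生成次序。"""
    recorded.append((next(_counter), op_info))

record("行为1"); record("行为2"); record("行为3")

# 生成时间连续时:执行顺序与多个用户操作指令的生成顺序相同
playback = [op for _, op in sorted(recorded)]

# 也可对次序进行调整(对应生成时间不连续的存储),例如调整为 行为1、行为3、行为2
reordered = [playback[0], playback[2], playback[1]]
```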
S202、关联存储多个用户操作指令对应的用户操作信息,得到用户操作信息集合,以在满足预设触发条件时,根据用户操作信息集合控制智能设备执行各个用户操作信息表征的行为。
本实施例中,智能设备获取到了多个用户操作指令,每一个用户操作指令中包括至少一个用户操作信息;可知,这些用户操作信息表征出了智能设备的一系列行为,从而,多个用户操作指令中的各个用户操作信息之间是具有一定关联关系的。然后智能设备将获取到的多个用户操作指令中的各个用户操作信息关联起来,得到一个用户操作信息集合。
并且,可以设定预设触发条件与用户操作信息集合之间的对应关系,即,配置有多个预设触发条件,各个预设触发条件对应了各自的用户操作信息集合。
然后,在检测到满足预设触发条件的时候,智能设备调取与该预设触发条件对应的用户操作信息集合;然后,智能设备就可以根据与该预设触发条件对应的用户操作信息集合,控制智能设备执行用户操作信息集合中的各个用户操作信息所表征的行为。
举例来说,在用户输入了用户操作指令A、用户操作指令B、用户操作指令C、用户操作指令D之后,智能设备会依次执行与各个用户操作指令对应的行为;其中,用户操作指令A中包括用户操作信息a和用户操作信息b,用户操作指令B中包括用户操作信息c和用户操作信息d,用户操作指令C中包括用户操作信息e,用户操作指令D中包括用户操作信息f。然后,停止此次的输入。智能设备需要将用户操作信息a、用户操作信息b、用户操作信息c、用户操作信息d、用户操作信息e和用户操作信息f,依次关联存储,得到一个用户操作信息集合Q。然后,在检测到满足预设触发条件时,智能设备调取与该预设触发条件对应的用户操作信息集合Q。然后,根据用户操作信息集合Q中的各个用户操作信息,控制智能设备进行一系列的行为。
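上述关联存储得到用户操作信息集合、并在满足预设触发条件时依次回放的流程,可用如下 Python 草图示意(SmartDeviceStore 及触发条件的字符串表示均为假设):

```python
class SmartDeviceStore:
    """极简示意:按预设触发条件关联存储用户操作信息集合,并在触发时依次回放。"""

    def __init__(self):
        self._sets = {}        # 预设触发条件 -> 用户操作信息集合(有序列表)
        self.executed = []     # 已执行行为的记录,便于观察回放结果

    def store(self, trigger, op_infos):
        """关联存储多个用户操作指令对应的用户操作信息,得到用户操作信息集合。"""
        self._sets[trigger] = list(op_infos)

    def on_trigger(self, trigger):
        """满足预设触发条件时,调取对应集合并依次执行各用户操作信息表征的行为。"""
        for op in self._sets.get(trigger, []):
            self.executed.append(op)

store = SmartDeviceStore()
store.store("触发条件Q", ["a", "b", "c", "d", "e", "f"])
store.on_trigger("触发条件Q")
```

不同的预设触发条件可以对应各自的用户操作信息集合,未命中任何触发条件时不执行任何行为。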
从而,在步骤S201和步骤S202的过程中,可以在得到一系列的录制过程之后,智能设备对所录制的各个用户操作信息,进行统一存储;或者,在步骤S201和步骤S202的过程中,智能设备可以实时对每一个用户操作信息进行存储。
一个示例中,本实施例提供的方法,还可以包括:在获取多个用户操作指令的过程中,若接收到暂停指令,则暂停用户操作信息的存储。
本实施例中,在步骤S201和步骤S202的过程中,在用户输入用户操作指令的过程中,智能设备可以实时地对用户操作指令中的用户操作信息进行存储;在这个过程中,智能设备若接收到暂停指令,智能设备就可以暂停对用户操作指令中的用户操作信息的存储处理。
例如,用户通过语音、姿态、触控等方式发出暂停指令;或者,智能设备检测智能设备的状态,在确定智能设备的状态满足一定条件的时候,生成暂停指令;或者,智能设备检测外部环境的状态,在确定外部环境的状态满足一定条件的时候,生成暂停指令。
本实施例,通过获取多个用户操作指令,每一个用户操作指令包括用于表征智能设备的行为的用户操作信息;关联存储多个用户操作指令对应的用户操作信息,得到用户操作信息集合,以在满足预设触发条件时,根据用户操作信息集合控制智能设备执行各个用户操作信息表征的行为。在用户输入多个用户操作指令的时候,对多个用户操作指令中的各个用户操作信息进行关联存储,得到用户操作信息集合;由于用户操作信息集合中的各个用户操作信息可以指示出一系列行为,从而可以在满足预设触发条件时,调取存储的用户操作信息集合,根据用户操作信息集合中的各个用户操作信息,控制智能设备执行一系列行为。由于用户操作指令中的用户操作信息是用户自己输入的,用户操作信息所表征的行为,可以使得智能设备的行为和动作更加准确、精准、自然、有活力;从而,在需要触发智能设备进行行为和动作的时候,可以调取存储的用户操作信息,根据用户操作信息控制智能设备执行相应的行为,此时,智能设备的行为和动作会更加准确、精准、自然、有活力。
图9为本申请又一实施例提供的智能设备的控制方法的流程图,如图9所示,本实施例的方法可以包括:
S301、获取开启指令,开启指令用于指示用户操作信息的录制开始。
本实施例中,本实施例的执行主体可以是智能设备,也可以是智能设备的控制设备。其中,智能设备可以是可移动平台。本实施例的执行主体同上述实施例,仍以智能设备为例进行说明。
在执行本申请实施例之前,需要一个触发指示,以指示出可以开始录制用户所发出的用户操作指令中的用户操作信息。从而,智能设备可以实时的检测是否接收到开启指令;智能设备在确定接收到开启指令之后,就可以确定在获取到用户操作指令之后,可以录制用户操作指令中的用户操作信息。其中,可以理解,录制并不一定意味着存储,录制过程中可以暂停、可以编辑,录制完成后,可以依据触发条件确定是否进行存储。
举例来说,用户通过触控、摇杆、姿态、语音等等方式,发出开启指令。或者,可以检测智能设备的状态,在确定智能设备的状态满足一定条件时,生成开启指令,例如,智能设备在确定智能设备的发动机开始启动时,确定生成开启指令。或者,智能设备可以检测外部环境的状态,在确定外部环境的状态满足一定条件时,生成开启指令,例如,智能设备在确定外部环境的震动状态大于预设震荡幅度时,确定生成开启指令。
S302、获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息。
本实施例中,本步骤可以参见图3所示的步骤S101,或者,本步骤可以参见图8所示的步骤S201,不再赘述。
S303、存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。
本实施例中,本步骤可以参见图3所示的步骤S102,或者,本步骤可以参见图8所示的步骤S202,不再赘述。
S304、获取结束指令,结束指令用于指示用户操作信息的录制结束。
一个示例中,结束指令为依据开启指令指示的时间以及预设时长生成。
本实施例中,步骤S303之后,或者,在步骤S303的存储用户操作信息的过程中,可以实时的检测是否接收到结束指令;智能设备在确定接收到结束指令之后,就可以确定不需要继续录制用户操作信息了,就可以结束对用户操作信息的录制和存储。
一个示例中,用户可以主动发出结束指令。举例来说,用户通过触控、摇杆、姿态、语音等等方式,发出结束指令。
一个示例中,智能设备可以检测智能设备的状态,在确定智能设备的状态满足一定条件时,生成结束指令,例如,在确定智能设备的发动机不再运行时,确定生成结束指令。
一个示例中,智能设备可以检测外部环境的状态,在确定外部环境的状态满足一定条件时,生成结束指令,例如,在确定外部环境的噪音小于预设噪音值时,确定生成结束指令。
一个示例中,在获取到开启指令的时候,智能设备可以获取到开启指令指示的开始录制的时间,并且,开启指令可以指出一个预设时长;然后,智能设备根据开启指令指示的开始录制的时间和预设时长进行叠加,可以确定出一个时间点,将该时间点作为生成的结束指令的时间点;进而,在抵达该时间点的时候,智能设备自动生成的用于指示用户操作信息的录制结束的结束指令。
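依据开启指令指示的时间与预设时长叠加得到结束时间点的计算,可示意如下(函数名为假设):

```python
def auto_end_time(start_ts: float, preset_duration_s: float) -> float:
    """将开启指令指示的开始录制时间与预设时长叠加,得到自动生成结束指令的时间点。"""
    return start_ts + preset_duration_s

def should_stop_recording(now: float, start_ts: float, preset_duration_s: float) -> bool:
    """抵达该时间点时,自动结束用户操作信息的录制。"""
    return now >= auto_end_time(start_ts, preset_duration_s)
```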
S305、对至少一个用户操作信息进行编辑处理,得到处理后的用户操作信息,以在满足预设触发条件时,控制智能设备执行处理后的用户操作信息表征的行为。
一个示例中,编辑处理包括以下中的任意一种或多种:
移动处理,移动处理用于指示对用户操作信息表征的行为的执行顺序进行调整;删除处理,删除处理用于指示对用户操作信息进行删除;增加处理,增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;叠加处理,叠加处理用于指示将至少两个用户操作信息表征的行为的执行顺序进行关联;缩放处理,缩放处理用于指示用户操作信息表征的行为的执行时长被拉长或被缩短;裁剪处理,裁剪处理用于指示用户操作信息表征的行为的执行被分割成多个子行为或用户操作信息表征的行为的执行是不完整的。
本实施例中,在步骤S303或者S304之后,即,智能设备在存储了用户操作信息之后,智能设备可以对已经存储的用户操作信息进行编辑处理,得到处理后的用户操作信息;或者,在步骤S303的存储用户操作信息的过程中,智能设备可以对已经存储的用户操作信息进行编辑处理,得到处理后的用户操作信息;或者,在步骤S303的存储用户操作信息之前,智能设备可以对录制后的用户操作信息进行编辑处理。另外,智能设备在得到处理后的用户操作信息之后,可以将处理后的用户操作信息替换掉之前的用户操作信息;或者,智能设备在得到处理后的用户操作信息之后,可以同时保留之前的用户操作信息、以及处理后的用户操作信息。
其中,编辑处理包括但不限于以下处理:移动处理、删除处理、增加处理、叠加处理、缩放处理、裁剪处理。
举例来说,智能设备接收移动处理指令,移动处理指令指示出了一个或多个用户操作信息,并且移动处理指令指示出了需要被移动的用户操作信息的原始位置和移动后的位置;然后,智能设备根据移动处理指令,将移动处理指令所指示的用户操作信息,移动至预设位置。
举例来说,智能设备接收删除处理指令,删除处理指令指示出了一个或多个用户操作信息,并且删除处理指令指示出了需要删除用户操作信息;然后,智能设备根据删除处理指令,将删除处理指令所指示的用户操作信息,进行删除。
举例来说,智能设备接收增加处理指令,增加处理指令指示出了一个或多个其他的用户操作信息,并且增加处理指令指示出了需要增加用户操作信息;然后,智能设备根据增加处理指令,将增加处理指令所指示的其他的用户操作信息,计入到当前的用户操作信息之前或者之后或者之中。
举例来说,智能设备接收叠加处理指令,叠加处理指令指示出了多个用户操作信息,并且叠加处理指令指示出了需要叠加用户操作信息;然后,智能设备根据叠加处理指令,将叠加处理指令所指示的多个用户操作信息,进行叠加。例如,用户操作信息a为移动,用户操作信息b为射击;接收叠加处理指令,叠加处理指令指示出了需要将用户操作信息a和用户操作信息b 进行叠加;然后,智能设备根据叠加处理指令,将叠加处理指令所指示的用户操作信息a和用户操作信息b,进行叠加,得到处理后的用户操作信息A,用户操作信息A表征了智能设备需要进行移动和射击,例如,用户操作信息A表征了智能设备需要同时进行移动和射击,或者用户操作信息A表征了智能设备需要移动过程中的某个位置上进行射击,或者用户操作信息A表征了智能设备需要射击过程中的某个时间段内进行移动。
举例来说,智能设备接收缩放处理指令,缩放处理指令指示出了一个或多个用户操作信息,并且缩放处理指令指示出了需要缩放用户操作信息表征的行为的执行时长;然后,智能设备根据缩放处理指令,将缩放处理指令所指示的用户操作信息的时长,进行拉长或被缩短。例如,用户操作信息a为变焦;接收到缩放处理指令,缩放处理指令指示需要将用户操作信息a的时长拉长;然后,根据缩放处理指令,将用户操作信息a的时长拉长。
举例来说,智能设备接收裁剪处理指令,裁剪处理指令指示出了一个或多个用户操作信息,并且裁剪处理指令用于指示需要将用户操作信息表征的行为的执行被分割成多个子行为;然后,智能设备将裁剪处理指令所指示的用户操作信息表征的行为,切分为多个子行为。举例来说,智能设备接收到裁剪处理指令,裁剪处理指令指示出的用户操作信息中包括变焦操作信息、移动操作信息、拍摄操作信息和射击操作信息,从而,该用户操作信息表征的行为包括了变焦、移动、拍摄和射击,即,智能设备可以根据该用户操作信息,同时完成变焦、移动、拍摄和射击;然后,可以对该用户操作信息进行切分,得到变焦操作信息、移动操作信息、拍摄操作信息和射击操作信息;从而,对用户操作信息表征的行为,切分为多个子行为,分别为变焦、移动、拍摄和射击,这多个子行为可以各自单独执行,或与其它行为结合。
举例来说,智能设备接收裁剪处理指令,裁剪处理指令指示出了一个或多个用户操作信息,并且裁剪处理指令用于指示在执行用户操作信息表征的行为时,执行的行为是不完整的。然后,智能设备将裁剪处理指令所指示的用户操作信息表征的行为,裁剪掉部分的子行为。举例来说,智能设备接收到裁剪处理指令,裁剪处理指令指示出的用户操作信息中包括变焦操作信息、移动操作信息、拍摄操作信息和射击操作信息,从而,该用户操作信息表征的行为包括了变焦、移动、拍摄和射击,即,智能设备可以根据该用户操作信息,同时完成变焦、移动、拍摄和射击;然后,智能设备可以对该用户操作信息进行切分,去除变焦操作信息;从而,去除掉用户操作信息表征的行为的子行为中的变焦行为。
并且,上述编辑处理的方式,可以同时进行多个。
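上述几类编辑处理作用于用户操作信息序列的效果,可用如下 Python 草图示意(函数名为假设;叠加处理以元组表示关联后的行为,仅为说明):

```python
def move_op(ops, src, dst):
    """移动处理:调整用户操作信息表征的行为的执行顺序。"""
    ops = list(ops)
    ops.insert(dst, ops.pop(src))
    return ops

def delete_op(ops, idx):
    """删除处理:删除指定的用户操作信息。"""
    return [op for k, op in enumerate(ops) if k != idx]

def add_op(ops, idx, new_op):
    """增加处理:增加一个用户操作信息,形成新的用户操作信息集合。"""
    ops = list(ops)
    ops.insert(idx, new_op)
    return ops

def overlay_op(ops, i, j):
    """叠加处理:将两个用户操作信息表征的行为关联为同时执行(以元组示意)。"""
    merged = (ops[i], ops[j])
    rest = [op for k, op in enumerate(ops) if k not in (i, j)]
    return rest + [merged]
```

各函数均返回新的序列而不修改原序列,便于同时保留编辑前后的两份用户操作信息。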
然后,智能设备检测触发条件,在确定满足预设触发条件时,智能设备调取与预设触发条件对应的处理后的用户操作信息;然后,智能设备根据处理后的用户操作信息,控制智能设备执行相应的行为。
一个示例中,上述编辑处理的指令,可以是用户通过触控、语音、位姿等方式发出的。
举例来说,图10为本申请提供的编辑处理的示意图,如图10所示,可以在智能设备或者智能设备的控制设备上显示一个交互界面,用户可以在该交互界面上观看获取的用户操作信息所表征的行为;并且,该交互界面上提供了播放按钮、暂停按钮和编辑按钮;用户可以通过触控播放按钮,观看获取到的用户操作信息所表征的智能设备的行为;用户可以通过触控暂停按钮,暂停观看获取到的用户操作信息所表征的智能设备的行为;用户可以通过触控编辑按钮,对获取到的用户操作信息进行编辑处理。并且,在对用户操作信息进行编辑处理的时候,可以在界面上提供一个以时间次序进行排序的各用户操作信息的编辑帧,用户可以触控这些编辑帧,对用户操作信息进行编辑处理。
例如,在图10所示的示例中,用户触控编辑帧1,然后拖动编辑帧1,将编辑帧1拖动到编辑帧2之前,其中,编辑帧1与用户操作信息a对应,编辑帧2与用户操作信息b对应;从而接收到用户发出的移动处理指令,移动处理指令指示出将用户操作信息a移动至用户操作信息b之前;从而,根据移动处理指令,将用户操作信息a移动至用户操作信息b之前。
可以理解,本发明实施例中的涉及到的交互动作的交互界面可以为多个,也可以为一个,当为多个时,各个交互界面之间可以进行切换,当为一个时,交互界面上的部分标识可以进行隐藏或显现。
一个示例中,在步骤S303或者步骤S305中的控制智能设备执行行为之前,还可以执行以下过程:
在预设触发条件为特定触发条件时,检测智能设备是否满足执行行为的预设条件;若否,则控制智能设备执行相应的操作,以满足执行行为的预设条件。
本实施例中,智能设备在存储用户操作信息之后,或者,智能设备在得到处理后的用户操作信息之后,或者,智能设备在得到用户操作信息集合之后,可以检测预设触发条件是否发生;在检测到满足预设触发条件之后,即,在确定预设触发条件为特定触发条件时,智能设备判断智能设备当前的状态是否满足了预设条件。
其中,预设触发条件可以参见图3所示的步骤S102。特定触发条件可以是符合一定要求的触发条件。例如,预设触发条件可以为与用户操作信息对应的用户操作,则特定触发条件为特定的用户操作信息。
智能设备在确定智能设备当前的状态满足预设条件时,确定可以调取与预设触发条件对应的用户操作信息;智能设备根据用户操作信息,控制智能设备执行用户操作信息所表征的行为。
智能设备在确定智能设备当前的状态不满足预设条件时,为了使得智能设备可以调取并执行用户操作信息,智能设备需要控制智能设备执行一定的操作,进而使得智能设备当前的状态满足预设条件;然后,智能设备调取与预设触发条件对应的用户操作信息;根据用户操作信息,控制智能设备执行用户操作信息所表征的行为。其中,预设条件指的是特定触发条件。例如,预设条件包括但不限于预设的外部环境条件、智能设备的状态、特定用户操作;例如,用户操作可以是用户的动作、用户的位姿、用户的音频,外部环境条件可以是环境温度、环境湿度、环境噪音,智能设备的状态可以是智能设备的位置、智能设备的运动状态、智能设备的位姿状态。
举例来说,控制智能设备执行相应的操作,包括但不限于以下操作:移动、旋转、变焦、射击、位姿。
举例来说,智能设备检测到预设触发条件为特定触发条件,其中,该特定触发条件为智能设备开始运行。然后,智能设备检测智能设备的云台是否在预设位置上。若确定智能设备的云台在预设位置上,则调取射击的用户操作信息;然后,智能设备根据用户操作信息,控制云台上的射击装置进行射击的行为。智能设备若确定智能设备的云台不在预设位置上,则控制智能设备的云台移动至预设位置;然后,智能设备调取射击的用户操作信息;然后,根据用户操作信息,控制云台上的射击装置进行射击的行为。
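上述先检测预设条件、不满足则先执行相应操作的流程,可示意如下(ensure_precondition 为假设的函数名,move_to 在真实设备上会驱动云台,这里以恒等函数代替):

```python
def ensure_precondition(current_pos, preset_pos, move_to):
    """检测智能设备(如云台)是否满足执行行为的预设条件;若否,先控制其执行相应操作。"""
    if current_pos != preset_pos:
        current_pos = move_to(preset_pos)   # 例如:控制云台移动至预设位置
    return current_pos

# 用法示意:云台不在预设位置时,先移动至预设位置再执行行为
final_pos = ensure_precondition((0, 0), (10, 0), lambda target: target)
```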
本实施例,在以上实施例的基础上,还可以对获取到的用户操作信息进行编辑处理,例如进行移动、删除、增加、叠加、缩放、裁剪等处理,进而得到多种不同形式的用户操作信息,或者得到多种用户操作信息集合;然后,在满足预设触发条件时,根据处理后的用户操作信息,控制智能设备执行相应的行为。从而,可以增加用户操作信息的多样性,由于不同的用户操作信息指示出了不同的行为,从而可以增加智能设备的行为的多样性。
图11为本申请再一实施例提供的智能设备的控制方法的流程图,如图11所示,本实施例的方法可以包括:
S401、获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息。
本实施例中,本步骤可以参见图3所示的步骤S101,或者,本步骤可以参见图8所示的步骤S201,或者,本步骤可以参见图9所示的步骤S302,不再赘述。
S402、存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。
本实施例中,本步骤可以参见图3所示的步骤S102,或者,本步骤可以参见图8所示的步骤S202,或者,本步骤可以参见图9所示的步骤S303,不再赘述。
一个示例中,本实施例提供的方法,还可以执行图9中的各步骤,不再赘述。
S403、在交互界面上显示行为播放条;其中,在行为的执行过程中,行为播放条发生动态变化。
一个示例中,用于指示对用户操作信息进行处理的指令为依据用户针对行为播放条的操作和/或用户针对交互界面上的虚拟按键的操作生成。
本实施例中,在本实施例的方案的执行过程中,智能设备可以在智能设备或者智能设备的控制设备上提供一个交互界面,该交互界面用于接收用户的指令,并且该交互界面用于向用户展示智能设备的动作以及对用户操作信息的编辑。
图12为本申请实施例提供的交互界面的示意图一,如图12所示,在智能设备根据用户操作信息执行相应行为的时候,在交互界面上可以显示出智能设备的行为;为了便于对用户操作信息进行编辑,并且为了便于用户选择性地观看用户操作信息表征的行为,在交互界面上可以提供一个行为播放条,或者,根据用户操作跳转到另一个交互界面上,在该交互界面上显示一个行为播放条,或者,自动跳转到另一个交互界面上,在该交互界面上显示一个行为播放条;行为播放条上显示有多个编辑帧(行为帧),每一个编辑帧对应了一个行为,每一个行为对应了至少一个用户操作信息;在智能设备执行用户操作信息表征的行为的时候,可以在交互界面上显示用户的行为,并且,在行为播放条上的编辑帧进行逐帧递进,即,每播放一个编辑帧,将编辑帧标记为已播放;例如,如图12所示,在编辑帧上增加一个黑点,黑点表征了编辑帧已播放。从而,采用行为播放条发生动态变化的方式,标记出哪些行为已经被智能设备执行。其中,可以理解,在行为播放条触发时,也可以不在交互界面上显示相应用户操作信息表征的行为,此时,可以触发智能设备执行相应的行为,也即行为播放条的触发是智能设备执行相应行为的一个预设触发条件。其中,也可以理解,在行为播放条触发时,在交互界面上显示的相应用户操作信息表征的行为可以是虚拟的多媒体画面,也可以是在智能设备获取到用户操作指令而根据用户操作指令执行的行为的录制视频。
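行为播放条逐帧递进、已播放帧被标记的动态变化,可用如下 Python 草图示意(PlayBar 为假设的类名):

```python
class PlayBar:
    """极简示意:行为播放条。每播放一个编辑帧即标记为已播放(对应界面上的黑点)。"""

    def __init__(self, frames):
        self.frames = list(frames)
        self.played = [False] * len(self.frames)
        self._cursor = 0

    def step(self):
        """逐帧递进:播放下一个编辑帧并打上已播放标记。"""
        if self._cursor < len(self.frames):
            self.played[self._cursor] = True
            self._cursor += 1

bar = PlayBar(["帧1", "帧2", "帧3"])
bar.step()
bar.step()
```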
并且,交互界面上提供了播放按钮、暂停按钮和编辑按钮;用户可以触控播放按钮,进而,控制设备获取到了播放指令,控制设备播放获取到的用户操作信息所表征的智能设备的行为;用户可以触控暂停按钮,进而,控制设备获取到了暂停指令,控制设备暂停播放获取到的用户操作信息所表征的智能设备的行为;用户可以触控编辑按钮,进而,控制设备获取到了编辑指令,控制设备根据用户的编辑指令对获取到的用户操作信息进行编辑处理。
一个示例中,在交互界面上提供了行为播放条,行为播放条上显示有多个编辑帧(行为帧),每一个编辑帧对应了一个行为,每一个行为对应了至少一个用户操作信息;从而,在对用户操作信息进行编辑处理的时候,可以检测用户在行为播放条上的操作;然后,根据用户在行为播放条上的操作,生成编辑处理指令。例如,用户触控行为播放条上的编辑帧1,然后拖动编辑帧1,将编辑帧1拖动到编辑帧2之前,其中,编辑帧1与用户操作信息a对应,编辑帧2与用户操作信息b对应;从而接收到用户发出的移动处理指令,移动处理指令指示出将用户操作信息a移动至用户操作信息b之前;从而,根据移动处理指令,将用户操作信息a移动至用户操作信息b之前。
另一个示例中,在交互界面上提供了虚拟按键,例如,提供了“编辑按钮”,从而,用户可以触控交互界面上的虚拟按键,进而用户输入不同的编辑处理指令。例如,用户触控交互界面上的“编辑按钮”;然后,在交互界面上弹出不同的编辑处理选项,用户可以选择叠加处理;从而接收到用户发出的叠加处理指令,叠加处理指令指示出将用户操作信息a和用户操作信息b进行叠加;从而,根据叠加处理指令,将用户操作信息a和用户操作信息b进行叠加。
S404、在交互界面上显示至少一个行为标识,各个行为标识用于指示至少一个用户操作信息表征的行为的集合。
本实施例中,智能设备可以将多个用户操作信息,组成一个用户操作信息集合;一个用户操作信息集合,可以指示出智能设备的一系列的行为;从而,为了便于用户获知哪些用户操作信息属于同一个用户操作信息集合,进而便于用户对用户操作信息进行编辑,可以在交互界面上显示智能设备的行为的同时,在交互界面上显示出用户操作信息集合的行为标识。可知,行为标识也是行为集合的标识,这里的行为集合,指的是上述一系列的行为。
一个用户操作信息集合,可以指示出智能设备的一系列的行为;从而,智能设备可以完成多个行为,即,完成一个完整的行为集合;可以对智能设备所完成的一个完整的行为集合进行录制和显示。
图13为本申请实施例提供的交互界面的示意图二,如图13所示,智能设备根据用户操作信息集合1,可以完成一系列的行为,即,完成行为集合1;可以在交互界面上显示行为集合1的各个行为;智能设备根据用户操作信息集合2,可以完成一系列的行为,即,完成行为集合2;可以在交互界面上显示行为集合2的各个行为;智能设备根据用户操作信息集合3,可以完成一系列的行为,即,完成行为集合3;可以在交互界面上显示行为集合3的各个行为。并且,为了便于区分各个行为集合,可以为每一个行为集合配置一个行为标识。行为标识,例如为文字、图片、图形、按钮、图框。
S405、在选中至少一个行为标识中的一个行为标识时,选中的行为标识在交互界面上发生动态变化。
本实施例中,基于步骤S404中的行为标识,每一个行为标识指示出了每一个行为集合;交互界面可以显示出各个行为标识。进而,用户可以触控每一个行为标识,以选中行为标识。然后,智能设备就可以根据用户选中的行为标识,调取与行为标识对应的行为集合,然后在交互界面上播放与行为标识对应的行为集合;或者,智能设备根据用户选中的行为标识,调取与行为标识对应的行为集合,然后,智能设备对与行为标识对应的行为集合进行编辑处理,在交互界面上显示编辑处理的过程,还可以在交互界面上显示编辑后的行为集合。
并且,在用户选中一个行为标识时,为了便于用户确认是否选中该行为标识,智能设备可以将该行为标识进行一种或多种动态变化,以突出显示该行为标识。
例如,智能设备将用户选中的行为标识进行突出显示,或者,智能设备将用户选中的行为标识进行颜色的改变,或者,智能设备将用户选中的行为标识进行加粗显示。
S406、获取用户在交互界面上的触控指令;根据触控指令所指示的滑动方向,在交互界面上显示行为标识的滑动切换。
本实施例中,基于步骤S404中的行为标识,每一个行为标识指示出了每一个行为集合;交互界面可以显示出各个行为标识。进而,用户可以通过选中行为标识,对行为标识对应的行为集合进行切换选择;此时,在用户选中和切换行为标识的时候,智能设备可以获取到用户的触控指令。由于用户在交互界面上进行滑动,进而智能设备可以检测到触控指令所指示的滑动方向。
然后,智能设备检测用户的触控指令所指示切换的行为标识,然后显示该行为标识对应的行为集合;并且,在用户在交互界面上进行滑动的过程中,智能设备根据触控指令所指示的滑动方向,在交互界面上显示行为标识的滑动切换的动作。例如,如图13所示,可以在交互界面上显示3个行为标识,分别为行为集合1、行为集合2和行为集合3;用户在交互界面上进行滑动,以指示切换行为集合;例如,用户由左至右滑动;从而,由左至右地在交互界面上显示行为集合1、行为集合2和行为集合3的滑动。
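根据滑动方向切换行为集合的选择逻辑,可示意如下(此处假设由左至右滑动切换到下一个集合、反向切换到上一个,仅为说明用的约定):

```python
def switch_behavior_set(sets, current_idx, direction):
    """根据触控指令所指示的滑动方向,返回切换后的行为集合下标(循环切换)。"""
    if direction == "left_to_right":
        return (current_idx + 1) % len(sets)
    return (current_idx - 1) % len(sets)

behavior_sets = ["行为集合1", "行为集合2", "行为集合3"]
```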
本实施例,在以上实施例的基础上,还可以提供一个交互界面,该交互界面用于接收用户的指令,并且该交互界面用于向用户展示智能设备的动作以及对用户操作信息的编辑。可以在交互界面上显示智能设备的行为和行为集合,还可以在交互界面上显示行为集合的切换状态,从而,使得用户获知当前所展示的智能设备的行为,直观地为用户展示所获取和编辑的行为。并且,用户可以在交互界面上进行操作,进而对用户操作信息进行编辑;用户通过编程建立行为逻辑,可以增加用户操作信息的多样性,增加智能设备的行为的多样性。
图14为本申请其他一实施例提供的智能设备的控制方法的流程图,该方法可以应用于智能设备的控制设备,如图14所示,本实施例的方法可以包括:
S501、获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息。
S502、将用户操作信息发送给智能设备,以使智能设备存储用户操作信息,并在满足预设触发条件时,根据用户操作信息执行行为。
本实施例中,本实施例的执行主体可以是智能设备的控制设备。例如,智能设备可以是可移动平台、智能机器人、无人机等等。控制设备可以是移动终端、遥控设备等等。
一个示例中,用户操作信息包括以下的任意一种或多种:触控信息、物理控件操作信息、体感信息、语音信息。
一个示例中,用户操作信息包括用于指示行为的行为参数;行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
一个示例中,用户操作指令的个数为多个,每一个用户操作指令还包括指令标识,指令标识用于标识用户操作指令。
一个示例中,步骤501,包括:获取多个用户操作指令。
步骤502,包括:将多个用户操作指令对应的用户操作信息形成的用户操作信息集合发送给智能设备。
一个示例中,多个用户操作指令的生成时间是连续的;其中,在满足预设触发条件时,各个用户操作信息表征的行为的执行顺序与多个用户操作指令的生成顺序相同。
一个示例中,本实施例提供的方法,还包括:在获取多个用户操作指令的过程中,若接收到暂停指令,则暂停用户操作信息的录制。
一个示例中,至少部分用户操作指令的生成时间是不连续的。
一个示例中,在步骤501之前,本实施例提供的方法,还包括:获取开启指令,开启指令用于指示用户操作信息的录制开始。
一个示例中,本实施例提供的方法,还包括:获取结束指令,结束指令用于指示用户操作信息的录制结束。
一个示例中,结束指令为依据开启指令指示的时间以及预设时长生成。
一个示例中,本实施例提供的方法,方法还包括:对至少一个用户操作信息进行编辑处理,得到处理后的用户操作信息,以在满足预设触发条件时,控制智能设备执行处理后的用户操作信息表征的行为。
其中,在控制设备侧对至少一个用户操作信息进行编辑处理后,可以将处理后的用户操作信息发送至智能设备。也即,在对用户操作信息进行录制后,可以对录制的用户操作信息进行编辑之后再存储至智能设备侧。
一个示例中,编辑处理包括以下中的任意一种或多种:
移动处理,移动处理用于指示对用户操作信息表征的行为的执行顺序进行调整;删除处理,删除处理用于指示对用户操作信息进行删除;增加处理,增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;叠加处理,叠加处理用于指示将至少两个用户操作信息表征的行为的执行顺序进行关联;缩放处理,缩放处理用于指示用户操作信息表征的行为的执行时长被拉长或被缩短;裁剪处理,裁剪处理用于指示用户操作信息表征的行为的执行被分割成多个子行为或用户操作信息表征的行为的执行是不完整的。
一个示例中,本实施例提供的方法,还包括:在交互界面上显示行为播放条;其中,在行为的执行过程中,行为播放条发生动态变化。
其中,可以理解,当在智能设备的控制设备侧,未对用户操作信息进行存储,而需要在控制设备的交互界面输出用户操作信息表征的行为的播放时,控制设备可以从智能设备处获取相应的用户操作信息。
一个示例中,用于指示对用户操作信息进行处理的指令为依据用户针对行为播放条的操作和/或用户针对交互界面上的虚拟按键的操作生成。
一个示例中,本实施例提供的方法,还包括:在交互界面上显示至少一个行为标识,各个行为标识用于指示至少一个用户操作信息表征的行为的集合。
一个示例中,本实施例提供的方法,还包括:在选中至少一个行为标识中的一个行为标识时,选中的行为标识在交互界面上发生动态变化。
一个示例中,本实施例提供的方法,还包括:获取用户在交互界面上的触控指令;根据触控指令所指示的滑动方向,在交互界面上显示行为标识的滑动切换。
一个示例中,在行为执行之前,本实施例提供的方法,还包括:
在预设触发条件为特定触发条件时,检测智能设备是否满足执行行为的预设条件;若否,则控制智能设备执行相应的操作,以满足执行行为的预设条件。
一个示例中,本实施例提供的方法,还包括:在获取到用户操作指令时,根据用户操作信息,控制智能设备执行行为。
一个示例中,智能设备包括可移动平台。
本实施例的各步骤可以参见上述实施例的介绍,不再赘述。
图15为本申请一实施例提供的智能设备的控制装置的结构示意图,图15所示,本实施例提供的智能设备的控制装置600可以包括:存储器601和处理器602。
存储器601,用于存储程序代码。
处理器602,用于调用程序代码,当程序代码被执行时,用于执行以下操作:获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息;存储用户操作信息,以在满足预设触发条件时,根据用户操作信息控制智能设备执行行为。
其中,上述控制装置可以为智能设备(可选的,可以安装于智能设备上,也可以独立于智能设备),或者为智能设备的控制设备(可选的,可以安装于控制设备上,也可以独立于控制设备)。
可选的,当上述控制装置为智能设备的控制设备时,处理器602,用于调用程序代码,当程序代码被执行时,用于执行以下操作:获取用户操作指令,用户操作指令包括用于表征智能设备的行为的用户操作信息;将用户操作信息发送给智能设备,以使智能设备存储用户操作信息,并在满足预设触发条件时,根据用户操作信息执行行为。
在一些实施例中,用户操作信息包括以下的任意一种或多种:触控信息、物理控件操作信息、体感信息、语音信息。
在一些实施例中,用户操作信息包括用于指示行为的行为参数;行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
在一些实施例中,用户操作指令的个数为多个,每一个用户操作指令还包括指令标识,指令标识用于标识用户操作指令。
在一些实施例中,处理器602,具体用于:获取多个用户操作指令;关联存储多个用户操作指令对应的用户操作信息,得到用户操作信息集合,以在满足预设触发条件时,根据用户操作信息集合控制智能设备执行各个用户操作信息表征的行为。
在一些实施例中,处理器602,具体用于:获取多个用户操作指令;将多个用户操作指令对应的用户操作信息形成的用户操作信息集合发送给智能设备。
在一些实施例中,多个用户操作指令的生成时间是连续的;其中,在满足预设触发条件时,各个用户操作信息表征的行为的执行顺序与多个用户操作指令的生成顺序相同。
在一些实施例中,处理器602,还用于:在获取多个用户操作指令的过程中,若接收到暂停指令,则暂停用户操作信息的录制。
在一些实施例中,至少部分用户操作指令的生成时间是不连续的。
在一些实施例中,处理器602,还用于:在获取用户操作指令之前,获取开启指令,开启指令用于指示用户操作信息的录制开始。
在一些实施例中,处理器602,还用于:获取结束指令,结束指令用于指示用户操作信息的录制结束。
在一些实施例中,结束指令为依据开启指令指示的时间以及预设时长生成。
在一些实施例中,处理器602,还用于:
对至少一个用户操作信息进行编辑处理,得到处理后的用户操作信息, 以在满足预设触发条件时,控制智能设备执行处理后的用户操作信息表征的行为。
在一些实施例中,编辑处理包括以下中的任意一种或多种:
移动处理,移动处理用于指示对用户操作信息表征的行为的执行顺序进行调整;删除处理,删除处理用于指示对用户操作信息进行删除;增加处理,增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;叠加处理,叠加处理用于指示将至少两个用户操作信息表征的行为的执行顺序进行关联;缩放处理,缩放处理用于指示用户操作信息表征的行为的执行时长被拉长或被缩短;裁剪处理,裁剪处理用于指示用户操作信息表征的行为的执行被分割成多个子行为或用户操作信息表征的行为的执行是不完整的。
在一些实施例中,处理器602,还用于:在获取多个用户操作指令的过程中,若接收到暂停指令,则暂停用户操作信息的存储。
在一些实施例中,控制装置600还包括:显示器603;显示器603,用于在交互界面上显示行为播放条;其中,在行为的执行过程中,行为播放条发生动态变化。
在一些实施例中,用于指示对用户操作信息进行处理的指令为依据用户针对行为播放条的操作和/或用户针对交互界面上的虚拟按键的操作生成。
在一些实施例中,显示器603,还用于在交互界面上显示至少一个行为标识,各个行为标识用于指示至少一个用户操作信息表征的行为的集合。
在一些实施例中,显示器603,还用于在选中至少一个行为标识中的一个行为标识时,选中的行为标识在交互界面上发生动态变化。
在一些实施例中,处理器602,还用于获取用户在交互界面上的触控指令;显示器603,还用于根据触控指令所指示的滑动方向,在交互界面上显示行为标识的滑动切换。
在一些实施例中,处理器602,还用于:
在行为执行之前,在预设触发条件为特定触发条件时,检测智能设备是否满足执行行为的预设条件;若否,则控制智能设备执行相应的操作,以满足执行行为的预设条件。
在一些实施例中,处理器602,还用于:
在获取到用户操作指令时,根据用户操作信息,控制智能设备执行行为。
在一些实施例中,智能设备包括可移动平台。
本实施例的控制装置600,可以用于执行前述对应方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
本申请实施例中还提供了一种计算机存储介质,该计算机存储介质中存储有程序指令,程序执行时可包括前述对应实施例中的方法的部分或全部步骤。
图16为本申请一实施例提供的智能设备的控制系统的结构示意图,该系统包括控制设备701和智能设备702;
控制设备701,用于获取用户操作指令,用户操作指令包括用于表征智能设备702的行为的用户操作信息,并将用户操作信息发送给智能设备702。
智能设备702,用于存储用户操作信息,并在确定满足预设触发条件时,根据用户操作信息执行行为。
在一些实施例中,用户操作信息包括以下的任意一种或多种:触控信息、物理控件操作信息、体感信息、语音信息。
在一些实施例中,用户操作信息包括用于指示行为的行为参数;行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
在一些实施例中,用户操作指令的个数为多个,每一个用户操作指令还包括指令标识,指令标识用于标识用户操作指令。
在一些实施例中,控制设备701,具体用于获取多个用户操作指令,并将多个用户操作指令对应的用户操作信息发送给智能设备702。
智能设备702,具体用于关联存储多个用户操作指令对应的用户操作信息,得到用户操作信息集合;并在确定满足预设触发条件时,根据用户操作信息集合执行各个用户操作信息表征的行为。
在一些实施例中,多个用户操作指令的生成时间是连续的;其中,在满足预设触发条件时,各个用户操作信息表征的行为的执行顺序与多个用户操作指令的生成顺序相同。
在一些实施例中,控制设备701,还用于:在获取多个用户操作指令的过程中,若接收到暂停指令,则暂停用户操作信息的录制。
在一些实施例中,至少部分用户操作指令的生成时间是不连续的。
在一些实施例中,控制设备701,还用于:在获取用户操作指令之前,获取开启指令,开启指令用于指示用户操作信息的录制开始。
在一些实施例中,控制设备701,还用于:获取结束指令,结束指令用于指示用户操作信息的录制结束。
在一些实施例中,结束指令为依据开启指令指示的时间以及预设时长生成。
在一些实施例中,控制设备701,还用于对至少一个用户操作信息进行编辑处理,得到处理后的用户操作信息。
智能设备702,还用于在满足预设触发条件时,执行处理后的用户操作信息表征的行为。
在一些实施例中,编辑处理包括以下中的任意一种或多种:
移动处理,移动处理用于指示对用户操作信息表征的行为的执行顺序进行调整;删除处理,删除处理用于指示对用户操作信息进行删除;增加处理,增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;叠加处理,叠加处理用于指示将至少两个用户操作信息表征的行为的执行顺序进行关联;缩放处理,缩放处理用于指示用户操作信息表征的行为的执行时长被拉长或被缩短;裁剪处理,裁剪处理用于指示用户操作信息表征的行为的执行被分割成多个子行为或用户操作信息表征的行为的执行是不完整的。
在一些实施例中,控制设备701,还用于:在交互界面上显示行为播放条;其中,在行为的执行过程中,行为播放条发生动态变化。
在一些实施例中,用于指示对用户操作信息进行处理的指令为依据用户针对行为播放条的操作和/或用户针对交互界面上的虚拟按键的操作生成。
在一些实施例中,控制设备701,还用于:在交互界面上显示至少一个行为标识,各个行为标识用于指示至少一个用户操作信息表征的行为的集合。
在一些实施例中,控制设备701,还用于:在选中至少一个行为标识中的一个行为标识时,选中的行为标识在交互界面上发生动态变化。
在一些实施例中,控制设备701,还用于:获取用户在交互界面上的触控指令;根据触控指令所指示的滑动方向,在交互界面上显示行为标识的滑动切换。
在一些实施例中,智能设备702,还用于:在行为执行之前,在预设触发条件为特定触发条件时,检测智能设备702是否满足执行行为的预设条件;若否,则执行相应的操作,以满足执行行为的预设条件。
在一些实施例中,智能设备702,还用于:在获取到用户操作指令时,根据用户操作信息,控制智能设备702执行行为。
在一些实施例中,智能设备702包括可移动平台。
其中,控制设备701可以参见上述相关实施例的结构和实施方式,智能设备702可以参见上述相关实施例的结构和实施方式,其实现原理和技术效果类似,此处不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:只读内存(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上各实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述各实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分或者全部技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (85)

  1. 一种智能设备的控制方法,其特征在于,包括:
    获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息;
    存储所述用户操作信息,以在满足预设触发条件时,根据所述用户操作信息控制所述智能设备执行所述行为。
  2. 根据权利要求1所述的方法,其特征在于,所述用户操作信息包括以下的任意一种或多种:
    触控信息、物理控件操作信息、体感信息、语音信息。
  3. 根据权利要求1或2所述的方法,其特征在于,所述用户操作信息包括用于指示所述行为的行为参数;
    所述行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述用户操作指令的个数为多个,每一个所述用户操作指令还包括指令标识,所述指令标识用于标识所述用户操作指令。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述获取用户操作指令,包括:
    获取多个用户操作指令;
    所述存储所述用户操作信息,以在满足预设触发条件时,根据所述用户操作信息控制所述智能设备执行所述行为,包括:
    关联存储多个所述用户操作指令对应的用户操作信息,得到用户操作信息集合,以在满足预设触发条件时,根据所述用户操作信息集合控制所述智能设备执行各个所述用户操作信息表征的行为。
  6. 根据权利要求5所述的方法,其特征在于,多个所述用户操作指令的生成时间是连续的;
    其中,在满足所述预设触发条件时,各个所述用户操作信息表征的所述行为的执行顺序与多个所述用户操作指令的生成顺序相同。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    在获取多个所述用户操作指令的过程中,若接收到暂停指令,则暂停所述用户操作信息的录制。
  8. 根据权利要求5所述的方法,其特征在于,至少部分所述用户操作指令的生成时间是不连续的。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,在所述获取所述用户操作指令之前,所述方法还包括:
    获取开启指令,所述开启指令用于指示所述用户操作信息的录制开始。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    获取结束指令,所述结束指令用于指示所述用户操作信息的录制结束。
  11. 根据权利要求10所述的方法,其特征在于,所述结束指令为依据所述开启指令指示的时间以及预设时长生成。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述方法还包括:
    对至少一个所述用户操作信息进行编辑处理,得到处理后的用户操作信息,以在满足所述预设触发条件时,控制所述智能设备执行所述处理后的用户操作信息表征的行为。
  13. 根据权利要求12所述的方法,其特征在于,所述编辑处理包括以下中的任意一种或多种:
    移动处理,所述移动处理用于指示对所述用户操作信息表征的行为的执行顺序进行调整;
    删除处理,所述删除处理用于指示对所述用户操作信息进行删除;
    增加处理,所述增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;
    叠加处理,所述叠加处理用于指示将至少两个所述用户操作信息表征的行为的执行顺序进行关联;
    缩放处理,所述缩放处理用于指示所述用户操作信息表征的行为的执行时长被拉长或被缩短;
    裁剪处理,所述裁剪处理用于指示所述用户操作信息表征的行为的执行被分割成多个子行为或所述用户操作信息表征的行为的执行是不完整的。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,所述方法还包括:
    在交互界面上显示行为播放条;
    其中,在所述行为的执行过程中,所述行为播放条发生动态变化。
  15. 根据权利要求14所述的方法,其特征在于,用于指示对所述用户操作信息进行处理的指令为依据用户针对所述行为播放条的操作和/或用户针对所述交互界面上的虚拟按键的操作生成。
  16. 根据权利要求1-15任一项所述的方法,其特征在于,所述方法还包括:
    在交互界面上显示至少一个行为标识,各个所述行为标识用于指示至少一个所述用户操作信息表征的行为的集合。
  17. 根据权利要求16所述的方法,其特征在于,所述方法还包括:
    在选中至少一个所述行为标识中的一个行为标识时,选中的行为标识在所述交互界面上发生动态变化。
  18. 根据权利要求16所述的方法,其特征在于,所述方法还包括:
    获取用户在所述交互界面上的触控指令;
    根据所述触控指令所指示的滑动方向,在所述交互界面上显示所述行为标识的滑动切换。
  19. 根据权利要求1-18任一项所述的方法,其特征在于,在所述行为执行之前,所述方法还包括:
    在所述预设触发条件为特定触发条件时,检测所述智能设备是否满足执行所述行为的预设条件;
    若否,则控制所述智能设备执行相应的操作,以满足执行所述行为的所述预设条件。
  20. 根据权利要求1-19任一项所述的方法,其特征在于,所述方法还包括:
    在获取到所述用户操作指令时,根据所述用户操作信息,控制所述智能设备执行所述行为。
  21. 根据权利要求1-20任一项所述的方法,其特征在于,所述智能设备包括可移动平台。
  22. 一种智能设备的控制方法,应用于所述智能设备的控制设备,其特征在于,所述方法包括:
    获取用户操作指令,所述用户操作指令包括用于表征所述智能设备的行为的用户操作信息;
    将所述用户操作信息发送给所述智能设备,以使所述智能设备存储所述用户操作信息,并在满足预设触发条件时,根据所述用户操作信息执行所述行为。
  23. 根据权利要求22所述的控制方法,其特征在于,所述用户操作信息包括以下的任意一种或多种:
    触控信息、物理控件操作信息、体感信息、语音信息。
  24. 根据权利要求22或23所述的方法,其特征在于,所述用户操作信息包括用于指示所述行为的行为参数;
    所述行为参数中包括以下的任意一种或多种:移动参数、位姿参数、射击参数、技能参数、音频参数。
  25. 根据权利要求22-24任一项所述的方法,其特征在于,所述用户操作指令的个数为多个,每一个所述用户操作指令还包括指令标识,所述指令标识用于标识所述用户操作指令。
  26. 根据权利要求22-25任一项所述的方法,其特征在于,所述获取用户操作指令,包括:
    获取多个用户操作指令;
    所述将所述用户操作信息发送给所述智能设备,包括:
    将多个所述用户操作指令对应的用户操作信息形成的用户操作信息集合发送给所述智能设备。
  27. 根据权利要求26所述的方法,其特征在于,多个所述用户操作指令的生成时间是连续的;
    其中,在满足所述预设触发条件时,各个所述用户操作信息表征的所述行为的执行顺序与多个所述用户操作指令的生成顺序相同。
  28. 根据权利要求27所述的方法,其特征在于,所述方法还包括:
    在获取多个所述用户操作指令的过程中,若接收到暂停指令,则暂停所述用户操作信息的录制。
  29. 根据权利要求26所述的方法,其特征在于,至少部分所述用户操作指令的生成时间是不连续的。
  30. 根据权利要求22-29任一项所述的方法,其特征在于,在所述获取所述用户操作指令之前,所述方法还包括:
    获取开启指令,所述开启指令用于指示所述用户操作信息的录制开始。
  31. 根据权利要求30所述的方法,其特征在于,所述方法还包括:
    获取结束指令,所述结束指令用于指示所述用户操作信息的录制结束。
  32. 根据权利要求31所述的方法,其特征在于,所述结束指令为依据所述开启指令指示的时间以及预设时长生成。
  33. 根据权利要求22-32任一项所述的方法,其特征在于,所述方法还包括:
    对至少一个所述用户操作信息进行编辑处理,得到处理后的用户操作信息,以在满足所述预设触发条件时,控制所述智能设备执行所述处理后的用户操作信息表征的行为。
  34. 根据权利要求33所述的方法,其特征在于,所述编辑处理包括以下中的任意一种或多种:
    移动处理,所述移动处理用于指示对所述用户操作信息表征的行为的执行顺序进行调整;
    删除处理,所述删除处理用于指示对所述用户操作信息进行删除;
    增加处理,所述增加处理用于指示增加至少一个用户操作信息,以形成用户操作信息集合;
    叠加处理,所述叠加处理用于指示将至少两个所述用户操作信息表征的行为的执行顺序进行关联;
    缩放处理,所述缩放处理用于指示所述用户操作信息表征的行为的执行时长被拉长或被缩短;
    裁剪处理,所述裁剪处理用于指示所述用户操作信息表征的行为的执行被分割成多个子行为或所述用户操作信息表征的行为的执行是不完整的。
  35. The method according to any one of claims 22-34, wherein the method further comprises:
    displaying a behavior playback bar on an interactive interface;
    wherein, during the execution of the behavior, the behavior playback bar changes dynamically.
  36. The method according to claim 35, wherein the instruction for indicating processing of the user operation information is generated according to the user's operation on the behavior playback bar and/or the user's operation on a virtual button on the interactive interface.
  37. The method according to any one of claims 22-36, wherein the method further comprises:
    displaying at least one behavior identifier on an interactive interface, each behavior identifier being used to indicate a set of behaviors characterized by at least one piece of the user operation information.
  38. The method according to claim 37, wherein the method further comprises:
    when one behavior identifier among the at least one behavior identifier is selected, the selected behavior identifier changes dynamically on the interactive interface.
  39. The method according to claim 37, wherein the method further comprises:
    acquiring a touch instruction of the user on the interactive interface; and
    displaying, on the interactive interface, sliding switching of the behavior identifiers according to the sliding direction indicated by the touch instruction.
  40. The method according to any one of claims 22-39, wherein before the behavior is executed, the method further comprises:
    when the preset trigger condition is a specific trigger condition, detecting whether the smart device satisfies a preset condition for executing the behavior; and
    if not, controlling the smart device to perform a corresponding operation so as to satisfy the preset condition for executing the behavior.
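The precondition check of claims 19 and 40 (detect whether the device satisfies a preset condition, and if not, perform a corrective operation first) follows a check-then-remediate pattern. A minimal sketch under assumed names (`ensure_ready` and the `armed` flag are hypothetical):

```python
from typing import Callable


def ensure_ready(is_ready: Callable[[], bool],
                 make_ready: Callable[[], None]) -> bool:
    """Before executing a behavior: check the preset condition; if unmet,
    perform the corresponding operation so the condition becomes satisfied."""
    if not is_ready():
        make_ready()
    return is_ready()


# Hypothetical device state: the behavior requires the device to be armed.
state = {"armed": False}
ok = ensure_ready(lambda: state["armed"],
                  lambda: state.update(armed=True))
print(ok)  # True
```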
  41. The method according to any one of claims 22-40, wherein the method further comprises:
    upon acquisition of the user operation instruction, controlling the smart device to execute the behavior according to the user operation information.
  42. The method according to any one of claims 22-41, wherein the smart device comprises a movable platform.
  43. A control apparatus for a smart device, comprising: a processor and a memory;
    the memory being configured to store program code; and
    the processor being configured to call the program code and, when the program code is executed, to perform the following operations:
    acquiring a user operation instruction, the user operation instruction comprising user operation information for characterizing a behavior of the smart device; and
    storing the user operation information, so that when a preset trigger condition is satisfied, the smart device is controlled to execute the behavior according to the user operation information.
  44. The control apparatus according to claim 43, wherein the user operation information comprises any one or more of the following:
    touch information, physical control operation information, motion sensing information, and voice information.
  45. The control apparatus according to claim 43 or 44, wherein the user operation information comprises a behavior parameter for indicating the behavior;
    the behavior parameter comprising any one or more of the following: a movement parameter, a pose parameter, a shooting parameter, a skill parameter, and an audio parameter.
  46. The control apparatus according to any one of claims 43-45, wherein there are a plurality of the user operation instructions, and each user operation instruction further comprises an instruction identifier for identifying the user operation instruction.
  47. The control apparatus according to any one of claims 43-46, wherein the processor is specifically configured to:
    acquire a plurality of user operation instructions; and
    store, in association, the user operation information corresponding to the plurality of user operation instructions to obtain a user operation information set, so that when the preset trigger condition is satisfied, the smart device is controlled, according to the user operation information set, to execute the behaviors characterized by the respective pieces of user operation information.
  48. The control apparatus according to claim 47, wherein the generation times of the plurality of user operation instructions are continuous;
    wherein, when the preset trigger condition is satisfied, the execution order of the behaviors characterized by the respective pieces of user operation information is the same as the generation order of the plurality of user operation instructions.
  49. The control apparatus according to claim 48, wherein the processor is further configured to:
    during the acquisition of the plurality of user operation instructions, if a pause instruction is received, pause the recording of the user operation information.
  50. The control apparatus according to claim 47, wherein the generation times of at least some of the user operation instructions are discontinuous.
  51. The control apparatus according to any one of claims 43-50, wherein the processor is further configured to:
    before the acquisition of the user operation instruction, acquire a start instruction, the start instruction being used to indicate the start of recording of the user operation information.
  52. The control apparatus according to claim 51, wherein the processor is further configured to:
    acquire an end instruction, the end instruction being used to indicate the end of recording of the user operation information.
  53. The control apparatus according to claim 52, wherein the end instruction is generated according to the time indicated by the start instruction and a preset duration.
  54. The control apparatus according to any one of claims 43-53, wherein the processor is further configured to:
    edit at least one piece of the user operation information to obtain processed user operation information, so that when the preset trigger condition is satisfied, the smart device is controlled to execute the behavior characterized by the processed user operation information.
  55. The control apparatus according to claim 54, wherein the editing comprises any one or more of the following:
    moving, which is used to indicate adjusting the execution order of the behaviors characterized by the user operation information;
    deleting, which is used to indicate deleting the user operation information;
    adding, which is used to indicate adding at least one piece of user operation information to form a user operation information set;
    overlaying, which is used to indicate associating the execution orders of the behaviors characterized by at least two pieces of the user operation information;
    scaling, which is used to indicate that the execution duration of the behavior characterized by the user operation information is lengthened or shortened;
    trimming, which is used to indicate that the execution of the behavior characterized by the user operation information is divided into a plurality of sub-behaviors, or that the execution of the behavior characterized by the user operation information is incomplete.
  56. The control apparatus according to any one of claims 43-55, wherein the control apparatus further comprises a display;
    the display being configured to display a behavior playback bar on an interactive interface;
    wherein, during the execution of the behavior, the behavior playback bar changes dynamically.
  57. The control apparatus according to claim 56, wherein the instruction for indicating processing of the user operation information is generated according to the user's operation on the behavior playback bar and/or the user's operation on a virtual button on the interactive interface.
  58. The control apparatus according to any one of claims 43-57, wherein the control apparatus further comprises a display;
    the display being configured to display at least one behavior identifier on an interactive interface, each behavior identifier being used to indicate a set of behaviors characterized by at least one piece of the user operation information.
  59. The control apparatus according to claim 58, wherein the display is further configured to:
    when one behavior identifier among the at least one behavior identifier is selected, display the selected behavior identifier changing dynamically on the interactive interface.
  60. The control apparatus according to claim 58, wherein the processor is further configured to acquire a touch instruction of the user on the interactive interface; and
    the display is further configured to display, on the interactive interface, sliding switching of the behavior identifiers according to the sliding direction indicated by the touch instruction.
  61. The control apparatus according to any one of claims 43-60, wherein the processor is further configured to:
    before the behavior is executed, when the preset trigger condition is a specific trigger condition, detect whether the smart device satisfies a preset condition for executing the behavior; and if not, control the smart device to perform a corresponding operation so as to satisfy the preset condition for executing the behavior.
  62. The apparatus according to any one of claims 43-61, wherein the processor is further configured to:
    upon acquisition of the user operation instruction, control the smart device to execute the behavior according to the user operation information.
  63. The control apparatus according to any one of claims 43-62, wherein the smart device comprises a movable platform.
  64. A control system for a smart device, comprising: a control device and the smart device;
    the control device being configured to acquire a user operation instruction, the user operation instruction comprising user operation information for characterizing a behavior of the smart device, and to send the user operation information to the smart device; and
    the smart device being configured to store the user operation information and, when it is determined that a preset trigger condition is satisfied, to execute the behavior according to the user operation information.
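The division of labor in claim 64 (control device acquires and forwards; smart device stores and executes on trigger) can be sketched with two cooperating objects. Illustrative only; the class names and dictionary layout are assumptions introduced for the example.

```python
from typing import Dict, List


class SmartDevice:
    """Stores received operation info; executes it when triggered (claim 64)."""

    def __init__(self) -> None:
        self.stored: List[Dict] = []
        self.executed: List[Dict] = []

    def receive(self, op_info: Dict) -> None:
        # Store the user operation information for later execution.
        self.stored.append(op_info)

    def on_trigger(self, condition_met: bool) -> None:
        # Execute the stored behaviors only once the preset
        # trigger condition is determined to be satisfied.
        if condition_met:
            self.executed.extend(self.stored)


class ControlDevice:
    """Acquires user operation instructions and forwards the operation info."""

    def __init__(self, device: SmartDevice) -> None:
        self.device = device

    def acquire_and_send(self, op_info: Dict) -> None:
        self.device.receive(op_info)


dev = SmartDevice()
ctl = ControlDevice(dev)
ctl.acquire_and_send({"behavior": "move", "speed": 0.5})
ctl.acquire_and_send({"behavior": "spin", "yaw": 180})
dev.on_trigger(condition_met=True)
print([op["behavior"] for op in dev.executed])  # ['move', 'spin']
```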
  65. The system according to claim 64, wherein the user operation information comprises any one or more of the following:
    touch information, physical control operation information, motion sensing information, and voice information.
  66. The system according to claim 64 or 65, wherein the user operation information comprises a behavior parameter for indicating the behavior;
    the behavior parameter comprising any one or more of the following: a movement parameter, a pose parameter, a shooting parameter, a skill parameter, and an audio parameter.
  67. The system according to any one of claims 64-66, wherein there are a plurality of the user operation instructions, and each user operation instruction further comprises an instruction identifier for identifying the user operation instruction.
  68. The system according to any one of claims 64-67, wherein the control device is specifically configured to acquire a plurality of user operation instructions and send the user operation information corresponding to the plurality of user operation instructions to the smart device; and
    the smart device is specifically configured to store, in association, the user operation information corresponding to the plurality of user operation instructions to obtain a user operation information set, and, when it is determined that the preset trigger condition is satisfied, to execute, according to the user operation information set, the behaviors characterized by the respective pieces of user operation information.
  69. The system according to claim 68, wherein the generation times of the plurality of user operation instructions are continuous;
    wherein, when the preset trigger condition is satisfied, the execution order of the behaviors characterized by the respective pieces of user operation information is the same as the generation order of the plurality of user operation instructions.
  70. The system according to claim 69, wherein the control device is further configured to:
    during the acquisition of the plurality of user operation instructions, if a pause instruction is received, pause the recording of the user operation information.
  71. The system according to claim 68, wherein the generation times of at least some of the user operation instructions are discontinuous.
  72. The system according to any one of claims 64-71, wherein the control device is further configured to:
    before the acquisition of the user operation instruction, acquire a start instruction, the start instruction being used to indicate the start of recording of the user operation information.
  73. The system according to claim 72, wherein the control device is further configured to:
    acquire an end instruction, the end instruction being used to indicate the end of recording of the user operation information.
  74. The system according to claim 73, wherein the end instruction is generated according to the time indicated by the start instruction and a preset duration.
  75. The system according to any one of claims 64-74, wherein the control device is further configured to edit at least one piece of the user operation information to obtain processed user operation information; and
    the smart device is further configured to execute, when the preset trigger condition is satisfied, the behavior characterized by the processed user operation information.
  76. The system according to claim 75, wherein the editing comprises any one or more of the following:
    moving, which is used to indicate adjusting the execution order of the behaviors characterized by the user operation information;
    deleting, which is used to indicate deleting the user operation information;
    adding, which is used to indicate adding at least one piece of user operation information to form a user operation information set;
    overlaying, which is used to indicate associating the execution orders of the behaviors characterized by at least two pieces of the user operation information;
    scaling, which is used to indicate that the execution duration of the behavior characterized by the user operation information is lengthened or shortened;
    trimming, which is used to indicate that the execution of the behavior characterized by the user operation information is divided into a plurality of sub-behaviors, or that the execution of the behavior characterized by the user operation information is incomplete.
  77. The system according to any one of claims 64-76, wherein the control device is further configured to:
    display a behavior playback bar on an interactive interface;
    wherein, during the execution of the behavior, the behavior playback bar changes dynamically.
  78. The system according to claim 77, wherein the instruction for indicating processing of the user operation information is generated according to the user's operation on the behavior playback bar and/or the user's operation on a virtual button on the interactive interface.
  79. The system according to any one of claims 64-78, wherein the control device is further configured to:
    display at least one behavior identifier on an interactive interface, each behavior identifier being used to indicate a set of behaviors characterized by at least one piece of the user operation information.
  80. The system according to claim 79, wherein the control device is further configured to:
    when one behavior identifier among the at least one behavior identifier is selected, display the selected behavior identifier changing dynamically on the interactive interface.
  81. The system according to claim 79, wherein the control device is further configured to:
    acquire a touch instruction of the user on the interactive interface; and
    display, on the interactive interface, sliding switching of the behavior identifiers according to the sliding direction indicated by the touch instruction.
  82. The system according to any one of claims 64-81, wherein the smart device is further configured to:
    before the behavior is executed, when the preset trigger condition is a specific trigger condition, detect whether the smart device satisfies a preset condition for executing the behavior; and
    if not, perform a corresponding operation so as to satisfy the preset condition for executing the behavior.
  83. The system according to any one of claims 64-82, wherein the smart device is further configured to:
    upon acquisition of the user operation instruction, execute the behavior according to the user operation information.
  84. The system according to any one of claims 64-83, wherein the smart device comprises a movable platform.
  85. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-21, or the computer program, when executed by a processor, implements the method according to any one of claims 22-42.
PCT/CN2019/121612 2019-11-28 2019-11-28 Control method, apparatus and system for smart device, and storage medium WO2021102800A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980040074.9A CN112313590A (zh) 2019-11-28 2019-11-28 Control method, apparatus and system for smart device, and storage medium
PCT/CN2019/121612 WO2021102800A1 (zh) 2019-11-28 2019-11-28 Control method, apparatus and system for smart device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121612 WO2021102800A1 (zh) 2019-11-28 2019-11-28 Control method, apparatus and system for smart device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021102800A1 true WO2021102800A1 (zh) 2021-06-03

Family

ID=74336570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121612 WO2021102800A1 (zh) 2019-11-28 2019-11-28 智能设备的控制方法、装置、***和存储介质

Country Status (2)

Country Link
CN (1) CN112313590A (zh)
WO (1) WO2021102800A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023028830A1 * 2021-08-31 2023-03-09 深圳市大疆创新科技有限公司 Control method and device for unmanned aerial vehicle, unmanned aerial vehicle, and storage medium
CN117250883A * 2022-12-06 2023-12-19 北京小米机器人技术有限公司 Smart device control method and apparatus, storage medium, and chip

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103156645A * 2013-03-18 2013-06-19 飞依诺科技(苏州)有限公司 Workflow self-customization method and apparatus for ultrasonic diagnostic equipment
CN106790424A * 2016-12-01 2017-05-31 同方工业信息技术有限公司 Timing control method, client, server, and timing control system
US20170315545A1 * 2016-04-29 2017-11-02 Shenzhen Hubsan Technology Co., Ltd. Method for recording flight path and controlling automatic flight of unmanned aerial vehicle
CN108227729A * 2016-12-15 2018-06-29 北京臻迪机器人有限公司 Motion sensing control system and motion sensing control method
CN108563161A * 2018-01-22 2018-09-21 深圳市牧激科技有限公司 Open smart control method and system, and computer-readable storage medium


Also Published As

Publication number Publication date
CN112313590A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
US10181268B2 System and method for toy visual programming
US20210205980A1 System and method for reinforcing programming education through robotic feedback
CN109952757B (zh) Method for recording video based on virtual reality application, terminal device, and storage medium
US10086267B2 Physical gesture input configuration for interactive software and video games
WO2019056041A1 SYSTEM AND METHOD FOR VIRTUAL CAMERA CONTROL
CN105117008B (zh) Operation guidance method and apparatus, and electronic device
WO2021102800A1 (zh) Control method, apparatus and system for smart device, and storage medium
DE202017105283U1 Session-end detection in an augmented- and/or virtual-reality environment
WO2016179912A1 (zh) Application control method and apparatus, and mobile terminal
JP2021524116A (ja) Dynamic motion detection method, dynamic motion control method, and apparatus
US20200106967A1 System and method of configuring a virtual camera
CN108317379B (zh) Method and apparatus for controlling a handheld gimbal, and handheld gimbal
CN110614634B (zh) Control method, portable terminal, and storage medium
WO2022161268A1 (zh) Video shooting method and apparatus
JP2019171498A (ja) Robot program execution device, robot program execution method, and program
WO2021232273A1 (zh) Unmanned aerial vehicle and control method and apparatus therefor, remote control terminal, and unmanned aerial vehicle system
WO2022156490A1 (zh) Picture display method and apparatus in virtual scene, device, storage medium, and program product
CN104079701A (zh) Method and apparatus for controlling video playback on a mobile terminal
WO2020059342A1 (ja) Robot simulator
WO2020042063A1 (zh) Control method for gimbal system, gimbal system, and computer-readable storage medium
CN212541342U (zh) Multimodal immersive VR experiential learning platform
US20240173860A1 Motion control method, method for generating trajectory of motion, and electronic device
WO2023226569A9 (zh) Message processing method and apparatus in virtual scene, electronic device, computer-readable storage medium, and computer program product
WO2017048598A1 System and method for toy visual programming
CN118154738A (zh) Video generation method and apparatus, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19954073
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19954073
Country of ref document: EP
Kind code of ref document: A1