CN211065979U - Robot - Google Patents


Info

Publication number
CN211065979U
CN211065979U · Application CN201921138466.2U
Authority
CN
China
Prior art keywords
robot
model
motion
sensor
motion platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201921138466.2U
Other languages
Chinese (zh)
Inventor
王东
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201921138466.2U priority Critical patent/CN211065979U/en
Application granted granted Critical
Publication of CN211065979U publication Critical patent/CN211065979U/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)

Abstract

An embodiment of the utility model discloses a robot, relating to the technical field of robots and their applications. The robot includes a model and a motion platform, wherein the model is arranged on the motion platform, and the motion platform is provided with a motion mechanism and a first controller for controlling the motion of the motion mechanism. The utility model is suitable for occasions such as merchandise display, shopping guide, tour guide, reception, explanation, body-motion teaching and interaction, and family companionship.

Description

Robot
Technical Field
The utility model relates to the technical field of robots and their applications, and in particular to a robot.
Background
At present, the ways in which physical shops such as shopping malls and supermarkets sell and display goods and attract customer traffic are passive: goods can only be displayed to the natural foot traffic that happens to pass the shop. For example, in clothing and accessory stores, garments or accessories are worn on models inside the store, but traditional models can only be placed in the store, and the merchandise is displayed and marketed only to customers who come to the door. The existing ways of displaying clothing and attracting customer traffic are therefore passive.
SUMMARY OF THE UTILITY MODEL
In view of this, an embodiment of the present invention provides a robot that can solve the technical problem that the existing ways of displaying goods and attracting customer traffic are passive.
In a first aspect, an embodiment of the present invention provides a robot, including: a model and a motion platform, wherein the model is arranged on the motion platform, and the motion platform is provided with a motion mechanism and a first controller for controlling the motion of the motion mechanism.
With reference to the first aspect, in a first implementation manner of the first aspect, the motion platform is provided with a positioning module and a line planning module, an output end of the positioning module is connected to an input end of the line planning module, an output end of the line planning module is connected to the first controller, the positioning module is configured to determine position information of the robot, and the line planning module is configured to plan a motion path of the robot;
and the first controller controls the robot to move along the planned movement path according to the movement path planned by the line planning module and the current position information determined by the positioning module.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, an input end of the positioning module is connected to an environment data acquisition module. The environment data acquisition module is disposed on a side of the motion platform and provided with a path-finding sensor; it acquires environment data around the robot and sends the acquired data to the positioning module and the first controller for positioning and path planning, respectively. The positioning module calculates the current position from the environment data and sends the position information to the path planning module, which matches the current position against the planned motion path to find the way. The environment data acquisition module is further provided with an obstacle-avoidance sensor, and the first controller also senses, from the environment data acquired by the environment data acquisition module, whether there is an obstacle during motion; the robot adjusts its movement speed and direction based on the sensed data to avoid the obstacle.
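The claim above describes adjusting movement speed and direction from sensed obstacle data but, as a structural patent, gives no algorithm. The following is a minimal illustrative sketch of one way such a controller could behave; the function name, thresholds, and the simple "slow down and steer away" rule are assumptions, not the patent's method.

```python
import math

SAFE_DISTANCE_M = 0.5      # assumed obstacle-avoidance threshold
CRUISE_SPEED_MPS = 0.8     # assumed nominal platform speed

def adjust_motion(readings):
    """readings: list of (bearing_rad, distance_m) from range sensors.
    Returns (speed_mps, steer_rad): slow down near obstacles and steer
    away from the closest one in the forward cone."""
    forward = [(b, d) for b, d in readings if abs(b) < math.pi / 4]
    if not forward:
        return CRUISE_SPEED_MPS, 0.0
    bearing, dist = min(forward, key=lambda r: r[1])
    if dist >= SAFE_DISTANCE_M:
        return CRUISE_SPEED_MPS, 0.0
    # Scale speed with remaining clearance; steer opposite the obstacle.
    speed = CRUISE_SPEED_MPS * dist / SAFE_DISTANCE_M
    steer = -math.copysign(math.pi / 6, bearing)
    return speed, steer
```

A reading 0.25 m ahead would halve the speed and command a turn away from the obstacle's side; readings beyond the safe distance leave the cruise command unchanged.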
With reference to the first aspect, the first or second implementation manner of the first aspect, in a third implementation manner of the first aspect, the environment data acquisition module includes a path-finding sensor and an obstacle-avoiding sensor, where the path-finding sensor and the obstacle-avoiding sensor respectively include: one or more of a laser sensor, an inertial sensor, an infrared sensor, an ultrasonic sensor, and an RGB-D visual ranging sensor.
With reference to the first aspect and any one of the first to third embodiments of the first aspect, in a fourth embodiment of the first aspect, a drop-prevention sensor is disposed at the bottom of the moving platform and used for detecting whether the robot walks to the edge of the platform.
With reference to the first aspect and any one of the first to fourth embodiments of the first aspect, in a fifth embodiment of the first aspect, the model is rotatably connected to the moving platform.
With reference to the first aspect and any one of the first to fifth embodiments of the first aspect, in a sixth embodiment of the first aspect, the model is connected to the moving platform through a connecting arm.
With reference to the first aspect and any one of the first to sixth embodiments of the first aspect, in a seventh embodiment of the first aspect, the connecting arm has a telescoping portion, and/or the model is vertically movable along the connecting arm.
With reference to the first aspect and any one of the first to seventh implementation manners of the first aspect, in an eighth implementation manner of the first aspect, the model is further provided with movable joints including a shoulder joint, an elbow joint, a wrist joint, finger joints, a lumbar vertebra, a knee joint, and/or an ankle joint; a driving motor is provided at each joint to drive it, and the driving motors are connected to the second controller or the first controller.
With reference to the first aspect and any one of the first to eighth embodiments of the first aspect, in a ninth embodiment of the first aspect, simulated skin is attached to each part of the model.
With reference to the first aspect or any one of the first to ninth embodiments of the first aspect, in a tenth embodiment of the first aspect, application scenarios of the robot include: merchandise display, shopping guide, tour guide, reception, explanation, body-motion teaching and interaction, and family companionship.
In the robot provided by the embodiment of the utility model, the model is arranged on the motion platform, and the motion platform is provided with a motion mechanism and a first controller for controlling the motion of the motion mechanism. When the robot is used for merchandise display, such as clothing display or shopping guide, the clothing can be worn on the model; the first controller of the motion platform receives a control signal and controls the motion of the motion mechanism based on it. Because the model is mounted on the motion platform, the clothed model can move not only inside the shop but also out of it, actively showing the clothing in crowded places as the motion platform shuttles about, thereby attracting customer traffic. This solves the technical problem that the existing ways of displaying clothing and attracting customers are passive, and the display rate of the clothing can be improved. Further, more customer traffic can be attracted, and the transaction rate can be improved to a certain extent.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1A is a schematic structural diagram of a robot according to an embodiment of the present invention;
FIG. 1B is a schematic structural diagram of another embodiment of the robot of the present invention;
FIG. 2 is a flow chart of a robot for displaying clothes according to an embodiment of the present invention;
FIG. 3A is a block diagram of a robot according to an embodiment of the present invention;
FIG. 3B is a block diagram of the robot according to an embodiment of the present invention;
FIG. 4A is a schematic structural diagram of another embodiment of the robot of the present invention;
FIG. 4B is a schematic structural diagram of a robot according to another embodiment of the present invention;
FIG. 5A is a block diagram of another embodiment of the robot of the present invention;
FIG. 5B is a block diagram of a robot according to another embodiment of the present invention;
FIG. 6 is a rear view of an embodiment of the robot of the present invention;
FIG. 7 is a diagram showing the posture of the robot during changing clothes;
Fig. 8 is a diagram of the action posture of the robot performing the splits;
FIG. 9 is a diagram of the posture of the robot for handstand performance;
FIG. 10 is a diagram illustrating the path of a robot according to an embodiment of the present invention;
fig. 11 is a schematic view of a robot according to another embodiment of the present invention;
fig. 12 is a schematic view of an application scenario of a robot according to another embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Numerous technical details are set forth in the following description to provide a thorough understanding of the present invention, but it will be apparent to those skilled in the art that the present invention may be practiced without some of these details. In addition, some methods, means, and components known to those skilled in the art, and their applications, are not described in detail so as to highlight the gist of the present invention, but this does not affect the implementation of the present invention. The embodiments described herein are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Example one
Fig. 1A is a schematic structural diagram of a robot according to an embodiment of the present invention, and fig. 1B is a schematic structural diagram of a robot according to another embodiment of the present invention; referring to fig. 1A and 1B, the embodiment of the present invention provides a robot, which can be applied to the occasions of merchandise display, shopping guide, navigation, reception, explanation, body motion teaching and interaction, family accompanying and the like. The robot includes: the model comprises a model 1 and a moving platform 2, wherein the model 1 is arranged on the moving platform 2, and the moving platform 2 is provided with a moving mechanism 3 and a first controller 4 for controlling the moving mechanism 3 to move.
The model 1 may be a human-shaped model, an animal-shaped model, or another designed shape, and its form varies with the usage scene of the robot. For example, when the robot is used in scenes such as clothing display and shopping guide, the model is preferably human-shaped. "Dress" here is a general term for clothing and adornment, including garments and ornaments; an ornament may be, for example, jewelry, a satchel, or cosmetics, and may itself be an object to be displayed. When the robot is used for marketing explanation, an animal-shaped model can be adopted to attract the audience and passers-by; when the robot is used as a guide, either a human shape or an animal shape can be used.
The model can be connected to the motion platform 2 through its feet, which support the model, or through other parts of the model. In addition, cables are routed through the two legs of the model and pass through the feet to connect to the motion platform. It should be noted that the first controller 4 is located inside the motion platform; lead line 4 in fig. 1B merely indicates the first controller on the motion platform schematically, and the position pointed to by the lead line is not the first controller itself.
An existing autonomous mobile robot is supported on its feet; to keep it stable and prevent it from tilting or falling while moving, its body is designed to be rather thick to ensure stable operation. As a result, existing humanoid robots cannot achieve a highly realistic human appearance at low cost. In this embodiment, the model is connected to the motion platform 2 through its feet, so that during operation the supporting force acts on the motion platform 2. Because the motion platform 2 has a large contact surface with the ground, the model on it can fully simulate a real human physique at lower cost. The range of application is extensive, and when the robot is applied to scenes such as clothing shows, fitness and beauty, reception, and companionship, it can be relatively better accepted.
The overall shape of the motion platform 2 can be designed to be cuboid, cylinder or bionic shape, such as tortoise, alpaca and other shapes, so as to increase the attraction to people. The motion mechanism 3 can be a wheel type motion mechanism, a crawler type motion mechanism, a multi-foot type motion mechanism, a guide rail type motion mechanism, a magnetic suspension type motion mechanism and the like.
The first controller 4 may be used to control the starting, running, advancing, retreating, speed, and stopping of the motion mechanism 3 on the motion platform 2, as well as other electronics on the motion platform 2, such as the various sensors. The motion platform 2 may further be provided with a wireless transmission module 5, such as a Bluetooth or WI-FI (Wireless Fidelity) module, for communication or interactive control with other devices; the motion platform 2 may also be provided with a data interface 6.
Specifically, the robot can be remotely controlled, for example started and run, through a remote controller. The remote controller can be a conventional remote controller that receives and sends signals by infrared, or terminal electronic equipment such as a mobile phone or tablet computer that controls the robot through the wireless transmission module 5, such as the Bluetooth or WI-FI module.
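The remote-control path above (start, run, forward/backward, speed, stop, per the first controller's described duties) can be sketched as a small command dispatcher. This is purely illustrative: the command names and state fields are assumptions, not part of the patent.

```python
class MotionController:
    """Hypothetical dispatcher for remote-control signals reaching the
    first controller over the wireless transmission module."""

    def __init__(self):
        self.running = False
        self.speed = 0.0
        self.direction = +1  # +1 forward, -1 backward

    def handle(self, command, value=None):
        if command == "start":
            self.running = True
        elif command == "stop":
            self.running = False
            self.speed = 0.0
        elif command == "forward":
            self.direction = +1
        elif command == "backward":
            self.direction = -1
        elif command == "speed" and value is not None:
            self.speed = max(0.0, float(value))  # no negative speeds
        return self.running, self.speed * self.direction
```

A "stop" always zeroes the commanded speed, which is the safe default whichever transport (infrared, Bluetooth, WI-FI) delivered it.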
The model is a simulation model, such as a simulated human or a simulated animal; specifically, simulated skin can be attached to each part of the model. The simulated skin imitates human skin and can be made of silica gel, TPE (thermoplastic elastomer), and/or hydrogel; the manufacturing processes are mature and are not described again. It will be appreciated that when the model has an animal shape, the simulated skin is fur that imitates that of a real animal.
Referring to fig. 1A to fig. 3B, in order to clearly illustrate the technical solution and the technical effect of the robot provided by the embodiment of the present invention, the following description is provided with specific clothing display or shopping guide scenes:
101. a first controller 4 on a motion platform 2 of the robot receives a clothing display trigger signal; the model of the robot is worn with clothes to be displayed;
102. the first controller 4 controls the motion mechanism 3 of the motion platform 2 to move along the first path based on the display trigger signal so as to actively display the clothes. The first path may be a planned designated path or a path formed by the random movement of the robot.
It will be appreciated that, in order to increase the display or audience rate, the robot's path should be planned to cover as many areas as possible.
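Moving "along the first path" amounts to visiting a sequence of planned waypoints. The following minimal sketch (names and the tolerance are assumptions; the patent does not specify a follower algorithm) advances a waypoint index as the platform reaches each target.

```python
import math

WAYPOINT_TOL_M = 0.1  # assumed "reached" tolerance

def next_target(position, waypoints, index):
    """Return the (possibly advanced) waypoint index for the current
    (x, y) position along the planned first path."""
    while index < len(waypoints):
        wx, wy = waypoints[index]
        if math.hypot(wx - position[0], wy - position[1]) > WAYPOINT_TOL_M:
            break
        index += 1  # reached this waypoint; move on to the next
    return index
```

When the returned index equals `len(waypoints)`, the first path is complete and the controller can stop or loop the route.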
In the robot provided by the embodiment of the utility model, the model is arranged on the motion platform 2, and the motion platform 2 is provided with a motion mechanism 3 and a first controller 4 for controlling the motion of the motion mechanism 3. When the robot is used for merchandise display, for example clothing display or shopping guide, the clothing can be worn on the model; the first controller 4 of the motion platform 2 receives a control signal and controls the motion mechanism 3 to move based on it. Because the clothed model is mounted on the motion platform 2, it can move not only inside the shop but also out of it, actively showing the clothing to people in crowded places as the motion platform 2 shuttles about, thereby attracting customer traffic. This solves the technical problem that the existing ways of displaying clothing and attracting customers are passive, and so the display rate of the clothing can be improved.
In addition, each time one person sees the clothing counts as one display. It can be understood that the higher the display rate, the larger the audience and the customer flow, and the higher the transaction volume. Since the robot of this embodiment can actively leave the store to show the clothing to people, the display rate and audience are relatively higher, so more customer traffic can be attracted and the transaction rate can be improved to a certain extent.
It should be noted that the improvement of the robot of this embodiment lies in using existing components and forming a new category of robot through the wiring relationships and positional arrangements among them; the robot can be applied to the aforementioned scenes, actively showing goods to people to attract customer traffic, or interacting with people for body-motion teaching, with good technical effect. The specific programs or logic instructions by which the first controller makes the motion mechanism and other components perform corresponding tasks or actions belong to the prior art and are not part of the improvement of the embodiments of the present invention. That is, the robot of this embodiment is an improvement in structure, not in method.
In this embodiment, the robot may also display articles other than clothing. The model 1 can be a traditional static model or one that performs various actions, such as waving hands, swinging arms, swaying hips, and smiling. The motion platform 2 may further be provided with function menus, including clothing display, shopping guide, tour guide, explanation, and the like; different function menus correspond to different command control buttons.
Specifically, the motion platform 2 may be provided with a display screen, the function menus may be provided on the display screen, and when each function menu is clicked, an interface having a plurality of command control buttons may be displayed correspondingly, and a path navigation area may be displayed on the interface, so as to display a motion path, a surrounding environment, and a current position of the robot.
An embodiment of the utility model further provides a graphical user interface for an electronic device. The interface has function menus; each function menu can correspondingly display an interface with a plurality of command control buttons, and a path navigation area can also be shown on the interface to display the motion path, the surrounding environment, and the current position of the robot.
Referring to fig. 3A or 3B, in an embodiment of the present invention, the motion platform 2 is provided with a route planning module and a positioning module.
The output end of the positioning module is connected with the input end of the line planning module, the output end of the line planning module is connected with the first controller, the positioning module is used for determining the position information of the robot, and the line planning module is used for planning the motion path of the robot;
and the first controller controls the robot to move along the planned movement path according to the movement path planned by the line planning module and the current position information determined by the positioning module.
Specifically, as shown in fig. 3A or 3B, the motion platform is further provided with a map drawing module for automatically drawing a map. The output ends of the map drawing module and the positioning module are respectively connected to the input end of the route planning module, and their input ends are connected to an environment data acquisition module 7 on the motion platform. The environment data acquisition module 7 is arranged on a side of the motion platform; it acquires environment data around the robot and sends the acquired data to the positioning module and the first controller, respectively. The positioning module calculates the current position from the environment data and sends the position information and the environment data to the path planning module to be matched against the planned motion path for path finding. The first controller further adjusts the moving speed and direction of the robot based on the environment data acquired by the environment data acquisition module 7 so as to avoid obstacles.
The planned route may be manually set on a map in advance, or may be automatically generated by the robot on a map drawn by the robot. The environment data refers to an environment of a travel path or a periphery of the travel path of the robot, for example, a building, a passing pedestrian, a vehicle, and the like, and distance information thereof to the robot. The first controller 4 further has a data storage function, and the planned path may be stored in the first controller 4 on the motion platform 2, or may be stored in a control end device, such as the aforementioned remote controller or a mobile phone.
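One concrete reading of "matching the current position with the planned motion path" is computing the cross-track error to a polyline path, which the controller could then steer against. This sketch is illustrative only; the function name and the polyline path model are assumptions.

```python
import math

def cross_track_error(position, path):
    """Distance from `position` (x, y) to the nearest point on the
    polyline `path`, i.e. how far the robot has drifted off-route."""
    px, py = position
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            t = 0.0
        else:
            # Project the position onto the segment, clamped to [0, 1].
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len2))
        best = min(best, math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)))
    return best
```

A rising error would indicate the robot is drifting from the planned route and needs a corrective heading.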
Because a GPS locator performs real-time positioning based on satellite signals, positioning reliability suffers indoors where the signal is weak. To achieve relatively reliable self-positioning of the robot, in an embodiment of the utility model, sensors are provided on the sides of the motion platform 2, and simultaneous localization and mapping (SLAM) is performed based on the environment data they perceive.
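A full SLAM system is far beyond a patent sketch, but the mapping half of the idea can be illustrated: converting range returns taken from a known pose into occupied cells of a coarse grid. A real SLAM system also estimates the pose itself; the grid resolution and names here are assumptions.

```python
import math

CELL_M = 0.25  # assumed grid resolution

def mark_hits(pose, scan):
    """pose: (x, y, heading_rad); scan: list of (bearing_rad, range_m).
    Returns the set of grid cells containing obstacle returns."""
    x, y, th = pose
    cells = set()
    for bearing, rng in scan:
        # Project each range return into world coordinates.
        ox = x + rng * math.cos(th + bearing)
        oy = y + rng * math.sin(th + bearing)
        cells.add((int(ox // CELL_M), int(oy // CELL_M)))
    return cells
```

Accumulating such cells over many scans, while simultaneously correcting the pose against the map, is what the SLAM formulation referred to above does.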
In order to automatically find its way and avoid obstacles while moving, in an embodiment of the present invention, the environment data acquisition module 7 includes a path-finding sensor and an obstacle-avoidance sensor. The path-finding sensor acquires first environment data and is connected to the positioning module; in one embodiment it is further connected to the map drawing module, which draws a path map based on the first environment data acquired by the path-finding sensor. The obstacle-avoidance sensor is arranged on a side of the motion platform 2 and connected to the first controller; it acquires second environment data to detect whether there is an obstacle during motion, and the robot adjusts its movement speed and direction based on the second environment data to realize the obstacle-avoidance function.
In one embodiment, an obstacle avoidance module may be further provided based on the control logic, and the obstacle avoidance sensor is connected to the first controller through the obstacle avoidance module.
In addition, the obstacle-avoidance sensor can also report sensed or detected obstacle information, such as the distance from an obstacle to the sensor; the detected obstacle information is fed into a positioning algorithm to calculate the current position of the robot. The map drawing module can also draw a path map based on the obstacle information and send it to the route planning module, which replans the path based on the newly drawn map. The specific scheme of drawing a map from obstacle information acquired around the robot is prior art and is not described again.
Specifically, the path finding sensor and the obstacle avoidance sensor respectively include: one or more of a laser sensor, an inertial sensor, an infrared sensor, an ultrasonic sensor, and an RGB-D visual ranging sensor.
The laser sensor and the inertial sensor can be combined to map the path and the surrounding environment and to position the robot; the ultrasonic and infrared sensors can detect objects at short range, so that nearby people and objects can be avoided. The RGB-D visual ranging sensor is a depth camera that combines RGB (three-channel red-green-blue color imaging) with a depth map; each RGB-D capture consists of two images: a normal RGB three-channel color image and a depth image.
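To make the depth-image ranging concrete: one simple (assumed, not from the patent) use of the "D" channel is to take the nearest valid depth in a central window of the image as the forward obstacle distance. The window size and the zero-means-no-return convention are illustrative choices.

```python
def nearest_depth(depth, window=2):
    """depth: 2D list of metres (0 = no return). Returns the minimum
    valid depth inside a (2*window+1)-wide central column band."""
    rows, cols = len(depth), len(depth[0])
    mid = cols // 2
    lo, hi = max(0, mid - window), min(cols, mid + window + 1)
    valid = [depth[r][c] for r in range(rows) for c in range(lo, hi)
             if depth[r][c] > 0]
    return min(valid) if valid else float("inf")
```

The returned distance could feed the same speed/steering adjustment the first controller is described as performing for the other range sensors.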
In one embodiment, the bottom of the moving platform 2 is provided with a falling-prevention sensor 8 for detecting whether the robot walks to the edge of the platform.
The anti-falling sensor 8 is generally an ultrasonic sensor, and may also be an infrared sensor. When the robot walks to the edge of a stair, the anti-falling sensor installed at its bottom detects that it has reached the edge of the platform, so that the robot can safely avoid the drop.
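The drop-detection logic implied here is simple: a downward-facing range sensor that suddenly reads much farther than the nominal sensor-to-floor distance is looking over an edge. The nominal distance and margin below are assumed values for illustration.

```python
FLOOR_DISTANCE_M = 0.06  # assumed sensor height above the floor
EDGE_MARGIN_M = 0.03     # assumed tolerance before declaring an edge

def at_edge(floor_reading_m):
    """True if the downward sensor no longer sees the floor at the
    expected distance, i.e. the platform has reached a drop-off."""
    return floor_reading_m > FLOOR_DISTANCE_M + EDGE_MARGIN_M
```

On a `True` result the first controller would stop or reverse the motion mechanism, as described.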
In this embodiment, the motion platform is further provided with an emergency stop button 15; when the obstacle-avoidance sensor fails to detect effectively, the button can be pressed manually to prevent the robot from entering an unsafe area.
In another embodiment of the invention, the model is rotatably connected to the motion platform 2. The model can thus rotate on the motion platform 2, strike rotating postures, and turn around, simulating a real person more realistically; when used in a clothing display scene, it can show the clothing to people dynamically.
Referring to fig. 2 and 3A, the model is provided with movable joints, including the cervical vertebra 9, shoulder joint 10, elbow joint 11, wrist joint 12, and/or lumbar vertebra 13. A driving motor is arranged at each joint to drive it, and the driving motors are connected to the second controller. While the motion platform moves forward along the first path, the upper part of the robot can swing its movable joints to draw more attention and thus attract traffic more effectively. Finger joints 14 may also be provided to bring the model's motion postures closer to those of a real person.
The cervical vertebra and its driving motor drive and control the neck of the model to make action postures, such as twisting the neck and the parts above it; the shoulder joint and its driving motor drive and control the upper arm; the elbow joint and its driving motor drive and control the forearm; the wrist joint and its driving motor drive and control the wrist; and the lumbar vertebra and its driving motor drive and control the waist. Similarly, driving motors arranged at other parts of the model can drive the corresponding parts to make simulated human motion postures; for example, a driving motor at the eyes can control the eyes to open and close.
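The joint-by-joint drive scheme above can be sketched as a table of per-joint angle limits and a clamping step before angles are sent to the motors. The joint names mirror the description; the limit values and function name are assumptions for illustration only.

```python
# Assumed per-joint angle limits in degrees (illustrative, not from the patent).
JOINT_LIMITS_DEG = {
    "cervical": (-60, 60), "shoulder": (-90, 170), "elbow": (0, 145),
    "wrist": (-80, 80), "lumbar": (-30, 30),
}

def command_pose(targets):
    """targets: {joint: angle_deg}. Returns the clamped angles that
    would be written to each joint's driving motor."""
    out = {}
    for joint, angle in targets.items():
        lo, hi = JOINT_LIMITS_DEG[joint]
        out[joint] = max(lo, min(hi, angle))  # keep within mechanical range
    return out
```

Clamping at the controller protects the mechanism regardless of what posture the control end requests.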
Referring to fig. 3A, in an embodiment, the second controller may be connected to the first controller 4 on the motion platform 2 for mutual communication. When the robot needs to strike various motion postures, the control end may send a control signal to the first controller 4 on the motion platform 2, and the first controller 4 forwards it to the second controller, so that the second controller controls the driving motors at the joints to drive the joints into various motion postures; the real person can thus be simulated more realistically to increase attraction in different scenes. Referring to fig. 3B, in another embodiment, the control end may also send a control signal directly to the second controller, so that it controls the driving motors at the joints to drive the joints into various motion postures.
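The fig. 3A relay path (control end → first controller → second controller) can be sketched as a dispatcher that keeps motion commands locally and forwards posture commands onward. The message fields ("kind", etc.) are illustrative assumptions.

```python
class SecondController:
    """Stand-in for the joint controller that receives posture commands."""
    def __init__(self):
        self.received = []

    def handle(self, msg):
        self.received.append(msg)

class FirstController:
    """Stand-in for the motion-platform controller of fig. 3A."""
    def __init__(self, second):
        self.second = second
        self.motion_log = []

    def dispatch(self, msg):
        if msg.get("kind") == "posture":
            self.second.handle(msg)      # forward to the joint controller
        else:
            self.motion_log.append(msg)  # handle motion commands locally
```

In the fig. 3B variant the control end would call `SecondController.handle` directly, bypassing the relay.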
The second controller may be mounted on the motion platform 2, or may be mounted in the model 1, which is not limited thereto.
In addition, the operator can also program, debug and maintain the robot through the wireless transmission module 5 or the data interface 6.
Referring to fig. 2 to fig. 3B, in order to clearly illustrate the technical solution and the technical effect of the robot provided by the embodiment of the present invention, the following descriptions are provided in combination with the aforementioned clothes display or shopping guide scenario: the model is a simulated human-shaped model, and simulated skin is attached to the simulated human-shaped model so as to simulate a simulated person more truly.
In step 102, the first controller 4 controls the motion mechanism 3 of the motion platform 2 to move along the first path based on the display trigger signal so as to actively display the clothing. Specifically, while the motion mechanism 3 of the motion platform 2 moves along the first path, the obstacle-avoidance sensor detects whether there is an obstacle on the path ahead so that it can be avoided; in particular, the ultrasonic and infrared sensors detect objects at short range so that nearby people and objects can be avoided.
When an obstacle is avoided, the signal detected by the obstacle avoidance sensor is sent to the first controller 4, and the first controller 4 completes the redrawing of the path and the positioning of the robot based on the detected signal.
Specifically, the laser sensor may be disposed in front of the motion platform 2 for detecting obstacles and measuring the precise distance from the obstacles to the robot itself. A specific implementation is as follows: a single-line laser may scan line by line around the robot's center, or multiple rows of lasers may scan line by line around the robot's center, with the scan repeated continuously at a certain frequency; in this way the obstacles and the environment around the robot can be detected. Meanwhile, the robot's own movement produces continuous displacement, and the actual displacement of the robot can be calculated by combining the inertial sensor with the motion parameter data of the servo motor/stepping motor of the robot's movement mechanism 3; the specific calculation is prior art and is not repeated here. Therefore, the path planning module can gradually complete the redrawing of the path map according to the laser sensor, the inertial sensor, and the motor rotation angle, and send the redrawn path map to the motion control module. When the robot completes a full circuit of detection in an area, the path map of fig. 10 can be completed. Meanwhile, the position of the robot can be accurately obtained based on the measured distance information, thereby achieving positioning.
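The displacement calculation mentioned above (combining the inertial sensor's heading with the motor's motion parameters) is standard dead reckoning. A minimal sketch, assuming a wheel encoder on the drive motor; the wheel radius and encoder resolution are illustrative values, not from the patent.

```python
import math

# Dead-reckoning sketch: encoder ticks from the drive motor give the
# distance travelled; the inertial sensor gives the heading.
WHEEL_RADIUS_M = 0.05   # assumed wheel radius
TICKS_PER_REV = 1000    # assumed encoder resolution

def update_pose(x, y, encoder_ticks, heading_rad):
    """Advance the planar pose by the distance implied by the encoder
    ticks, along the heading reported by the inertial sensor."""
    distance = 2 * math.pi * WHEEL_RADIUS_M * encoder_ticks / TICKS_PER_REV
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))

# One full wheel revolution while heading along the x-axis (0 rad):
x, y = update_pose(0.0, 0.0, 1000, 0.0)
```

Accumulating these pose updates alongside the laser range scans is what lets the path planning module fill in the map and localize the robot within it.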
When the robot is placed in a shopping mall to display clothes along the first path, a drop-proof sensor mounted at the bottom of the robot also detects whether the robot has walked to the edge of a step, so as to safely avoid the risk of falling down stairs.
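The drop-proof check can be sketched as a downward-facing range measurement: a reading much larger than the normal floor clearance means the sensor is over a step edge. The clearance values below are assumptions for illustration.

```python
# Minimal drop-edge sketch: the downward-facing sensor reports the distance
# to the floor; a reading beyond the normal clearance indicates a drop.
NORMAL_CLEARANCE_M = 0.02  # assumed sensor height above the floor
DROP_MARGIN_M = 0.03       # assumed tolerance before flagging an edge

def at_drop_edge(downward_distance_m):
    """Return True when the floor falls away beneath the sensor."""
    return downward_distance_m > NORMAL_CLEARANCE_M + DROP_MARGIN_M

on_floor = at_drop_edge(0.02)   # normal floor reading
over_step = at_drop_edge(0.30)  # reading over a stair edge
```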
The process of displaying the clothes may also comprise the following: during the motion of the motion platform 2, the first controller 4 receives a first control signal, sent by the control end, for controlling the model on the motion platform 2 to assume various postures;
based on the first control signal, the first controller 4 sends the control signal to the second controller, and the second controller, after receiving it, sends motor driving signals to drive the motors at the corresponding joints, driving those joints into the corresponding action postures. For example, the cervical vertebra of the model can be controlled to rotate for a head-turning gesture, the eye motors of the model can be controlled for a blinking gesture, and a waving gesture can be made by controlling the upper arm, forearm and wrist joints, thereby increasing the attraction to people, raising the audience rate, and drawing customer traffic.
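A gesture such as the wave described above can be represented as a short sequence of joint-pose frames fed to the second controller. The joint names and angles below are illustrative assumptions; the patent does not specify a command format.

```python
# The waving gesture sketched as a sequence of joint poses (degrees).
# Joint names and angles are hypothetical, for illustration only.
WAVE_SEQUENCE = [
    {"shoulder_pitch": 90, "elbow": 45, "wrist": -20},
    {"shoulder_pitch": 90, "elbow": 45, "wrist": 20},
    {"shoulder_pitch": 90, "elbow": 45, "wrist": -20},
]

def play_gesture(sequence, send_pose):
    """Feed each pose frame to send_pose (e.g. the second controller's
    drive routine) and return the number of frames played."""
    for pose in sequence:
        send_pose(pose)
    return len(sequence)

frames = play_gesture(WAVE_SEQUENCE, lambda pose: None)
```

A head turn or blink would be expressed the same way, just with different joints in each frame.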
Referring to fig. 11, in a commodity display application scenario, a plurality of the robots of this embodiment may be connected in sequence to actively display a plurality of commodities at a time. Alternatively, in order to save cost, a plurality of trolleys carrying models can be connected in sequence behind the robot of this embodiment; the trolleys move along with the robot, so that commodities can be actively displayed to passing people, attracting customer traffic.
Referring to fig. 5, in the tour-guide application scenario, a voice module connected to the second controller is further disposed in the model or on the motion platform to provide explanations of scenic spots and simple question-and-answer conversation; the model can also be controlled to assume an action gesture adapted to the scene, such as pointing out a direction.
In a guest reception application scenario, the lumbar, cervical and shoulder joints of the model can be controlled so that the model bows to visitors. For example, the robot can be used for receiving guests at the 2020 Winter Olympics venue, which can reduce the workload of reception staff and add to the festive atmosphere.
Because the scenarios to which the robot provided by the utility model is applicable cannot be exhaustively listed, the foregoing specific examples are given to help the public understand the technical solution and effects provided by the utility model. From the above illustrations it can be seen that the robot provided by the embodiment of the utility model can control the model to assume action gestures adapted to the different application scenarios.
Referring to fig. 4A to 6, the model is connected with the motion platform 2 through a connecting rod 1; the connecting rod 1 can play a supporting role, and can also be a connecting arm for lifting the model.
Specifically, the simulated human body is provided with at least a hip joint 16, and the connection point of the connecting rod 1 and the model is located above the hip joint. In this way, the model on the motion platform 2 can, supported by the connecting rod 1 and the motion platform 2, assume an action posture with the legs lifted upwards, as shown in fig. 7, fig. 8 or fig. 9; the model on the motion platform can also assume a running posture under the support of the connecting rod 1 and the motion platform 2, as shown in fig. 12; and when the connecting rod is a connecting arm, it can make the gesture of lifting and supporting the model. Specifically, the lower end of the connecting rod is connected to the motion platform, and the connection point between the connecting rod and the model can be located at the tailbone of the model, i.e. the upper end of the connecting rod can be connected to the tailbone of the model; referring to fig. 12, the upper end of the connecting rod can also be connected to the side of the model's waist, etc. It should be noted that the positions of the connection points between the connecting rod and the model can be set according to the specific situation and are not limited to the illustrated positions; the specific connection point positions above are given to help the public understand the inventive concept of the utility model and are not to be understood as excluding other implementations.
When the model 1 on the motion platform 2 is connected with the motion platform 2 through a connecting rod, in order to realize squatting and standing action postures of the model, the simulated human body of the model is further provided with a knee joint 18, and the connecting rod is provided with a telescopic part. Thus, when the second controller of the model receives a squatting control signal, the knee joint of the model bends and the telescopic part of the connecting rod contracts synchronously to complete the squatting posture; when a standing control signal is received, the knee joint of the model extends and the connecting rod extends synchronously to complete the standing posture.
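The synchronization described above (knee bend matched by rod contraction) can be sketched with simple planar geometry: as the knee bends, the hip drops, and the rod must shorten by the same amount to keep supporting the model. The link lengths and the symmetric-pivot simplification below are illustrative assumptions, not dimensions from the patent.

```python
import math

# Sketch of the synchronized squat: compute, for a requested knee bend,
# how far the telescopic part of the connecting rod must contract so the
# rod tracks the vertical drop of the model's hip.
THIGH_M = 0.40  # assumed thigh link length
SHIN_M = 0.40   # assumed shin link length

def squat_targets(knee_bend_deg):
    """Return (knee angle in degrees, rod contraction in metres).

    Simplified model: thigh and shin each pivot by half the knee bend,
    so the hip drops by (thigh + shin) * (1 - cos(bend / 2))."""
    half = math.radians(knee_bend_deg / 2)
    drop = (THIGH_M + SHIN_M) * (1 - math.cos(half))
    return knee_bend_deg, drop

knee, contraction = squat_targets(60)  # a moderate squat depth
```

The second controller would send `knee` to the knee-joint motor and `contraction` to the rod's linear actuator in the same control cycle, which is what keeps the two motions synchronous.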
In addition, the model can also ascend or descend by the extension or shortening of the connecting rod, combined with the other movable joints, to complete various action postures; for example, the model may make a combined gesture of rotating and extending the arms while ascending, or perform a rotating split leap while ascending, and so on.
The telescopic part of the connecting rod can be a linear telescopic structure, for example a linear guide rail matched with a sliding chute; the telescopic part can also be a hinge structure similar to a knee joint. It will be understood that such a hinge structure can also realize the squatting and standing actions, but it is difficult for it to realize ascending or descending postures of the model. Specifically, a linear motor can be connected to the telescopic part of the connecting rod to realize arbitrary or continuous switching between ascending and descending postures of the model under the control of the second controller.
A structure in which the model can move vertically along the connecting rod can also accomplish the squatting posture. A specific implementation is to provide a slide rail at the back of the model, with the connecting rod provided with a structure that slides in cooperation with the slide rail; other similar cooperating structures that allow the relative motion can also be used.
In addition, in another embodiment, a combination of the two preceding structures can also be realized, i.e. the connecting rod has a telescopic part and the model can move vertically along the connecting rod; in this way the squatting action, the ascending or descending posture of the model along the connecting rod, and combinations of these with other action postures can all be completed.
In addition, the connecting rod can be replaced by a mechanical arm, or made into a structure with a plurality of movable joints like a mechanical arm, so that the model can be swung in various directions by the mechanical arm to make flying-like action gestures.
In this embodiment, the model is also rotatably connected to the motion platform 2; in the foregoing embodiments, one end of the connecting rod is rotatably connected to the model. When performing the leg-lifting actions, the model can therefore also rotate to perform various dance and other performance postures, such as a split-leap performance, a handstand performance, etc.
In this embodiment, the robot can perform various dance and other movement postures during its motion like a real person, which can increase its attraction to people and improve the efficiency of drawing customer traffic. For example, when the robot of this embodiment is used in a clothing display scenario, the model wearing the clothes on the motion platform 2 can be controlled to perform various dance postures while moving along the first path, so that the clothes are actively displayed to passing people in different postures; this increases the attraction to passers-by, raises the display rate of the clothes and the audience rate, draws customer traffic, and can to a certain extent increase clothing sales.
In one application scenario, the robot of the embodiment of the utility model can be used as a body-motion teaching and interaction robot, such as a fitness trainer robot or an adult robot, performing body-motion teaching and interacting with natural persons. For example, when used as a fitness robot, it can make a series of action postures at a fixed place such as a gymnasium and give fitness-action teaching demonstrations to the people exercising in front of it, so as to reduce the working intensity of fitness trainers. In addition, the fitness robot can actively walk on the street by means of the movement mechanism of the motion platform and assume various fitness action postures, so as to attract people to come and work out. When used as an adult robot, it can simulate the physical and motion interaction between a real person and a natural person, enhancing the realism of the experience.
The model is also provided with an ankle joint 19 and a motor for driving the ankle joint, so that the feet can move flexibly.
In an embodiment of the utility model, the second controller is further connected to a voice control module for voice interaction with the model.
By adding the voice module in this solution, the actions of the robot can be controlled by voice; for example, if the clothes need to be changed, the robot can enter a clothes-changing mode. Voice control can likewise trigger the split-leap performance, handstand performance and the like described above.
To help the public understand the embodiment of the utility model, the overall inventive concept of the embodiment is now described in detail with the specific application example of clothing display:
referring to fig. 4A to 9, the control end may be a mobile phone on which an APP corresponding to the robot is installed, or a control interface. When the robot is required to leave the store to display clothes, the clothes are put on the model of the robot; the mobile phone can then send a clothes-display control signal through the wireless transmission module 5 to the motion platform 2 of the robot. After the first controller 4 on the motion platform 2 receives this control signal, it drives the movement mechanism 3 to advance along the first path. The first path can be chosen through the areas of the shop or street with heavy foot traffic, so that the clothes are actively displayed to people, increasing the display rate of the clothes and thereby drawing traffic for clothing sales. Of course, when the first path is planned along a street, it must comply with the corresponding road traffic regulations.
While the robot moves and displays the clothes to people, an operator can send commands for controlling the model to perform various motion postures to the first controller 4, which forwards them to the second controller, and the second controller drives the corresponding joints of the model. According to the foregoing disclosure, the model can make various single or combined action postures, for example a split leap, a handstand posture, a flying posture rising and falling while rotating, or a posture with the legs raised and the arms slapping the legs while making a half rotation, so as to attract the attention of passers-by; this improves the audience rate of the clothing display and achieves the purposes of actively displaying the clothes and drawing customer traffic.
It can be understood that the commodity display method of this embodiment can also be used for displaying commodities other than clothing; whether the commodity is virtual or physical, its display rate affects the transaction volume to a certain extent. For e-commerce platforms and physical stores alike, the size of the customer flow is a factor influencing the transaction volume; if the commodity display rate can be improved or customer traffic attracted, sales can be boosted to a certain degree.
The robot provided by this embodiment can actively display commodities and attract customer flow, improving the display rate and the traffic drawn, and thus improving the transaction volume to a certain extent. Moreover, the structural improvement of the robot of this embodiment directly changes the way commodities are displayed and can boost customer traffic; its applications are wide-ranging. For example, used as a fitness trainer robot or adult robot in body-motion teaching and interaction scenarios, it could achieve great commercial success.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article or apparatus. An element defined by the phrase "comprising a ..." does not, without further limitation, exclude the presence of additional identical elements in the process, method, article or apparatus that comprises the element. All the embodiments in this specification are described in a related manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A robot, comprising: a model and a motion platform, wherein the model is arranged on the motion platform, and the motion platform is provided with a motion mechanism and a first controller for controlling the motion of the motion mechanism.
2. The robot of claim 1, wherein the motion platform is provided with a positioning module and a route planning module, an output end of the positioning module is connected with an input end of the route planning module, and an output end of the route planning module is connected with the first controller.
3. The robot of claim 2, wherein the input end of the positioning module is connected to an environmental data acquisition module disposed at a side of the motion platform.
4. The robot of claim 3, wherein the environmental data collection module comprises a path-finding sensor and an obstacle-avoiding sensor, and the path-finding sensor and the obstacle-avoiding sensor respectively comprise:
one or more of a laser sensor, an inertial sensor, an infrared sensor, an ultrasonic sensor, and an RGB-D visual ranging sensor.
5. The robot of claim 1, wherein the bottom of the motion platform is provided with a drop-proof sensor for detecting whether the robot walks to a drop edge.
6. A robot as claimed in claim 1, wherein the model is rotatably connected to the motion platform.
7. A robot as claimed in claim 6, wherein the model is connected to the motion platform by a connecting arm.
8. A robot as claimed in claim 7, wherein the model is provided with a moveable joint imitating the human body, the moveable joint comprising a hip joint, and the connection point of the connecting arm to the model being located above the hip joint.
9. A robot as claimed in claim 7, characterized in that the connecting arm has a telescopic part and/or the model is vertically movable along the connecting arm.
10. The robot of claim 8, wherein the movable joints further comprise cervical joints, shoulder joints, elbow joints, wrist joints, finger joints, lumbar joints, knee joints and/or ankle joints, each joint is provided with a driving motor, the driving motors drive each joint to move, and the driving motors are connected to the second controller or the first controller.
11. The robot of claim 1, wherein a simulated skin layer is attached to each part of the model.
12. A robot according to claim 1, characterized in that the applications of the robot comprise: commodity display, shopping guide, tour guide, reception, explanation, body-action teaching and interaction, or family companionship occasions.
CN201921138466.2U 2019-07-18 2019-07-18 Robot Expired - Fee Related CN211065979U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201921138466.2U CN211065979U (en) 2019-07-18 2019-07-18 Robot


Publications (1)

Publication Number Publication Date
CN211065979U true CN211065979U (en) 2020-07-24

Family

ID=71636416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201921138466.2U Expired - Fee Related CN211065979U (en) 2019-07-18 2019-07-18 Robot

Country Status (1)

Country Link
CN (1) CN211065979U (en)


Legal Events

Date Code Title Description
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200724

Termination date: 20210718