NL2020224B1 - Intelligent Robot - Google Patents

Intelligent Robot

Info

Publication number
NL2020224B1
Authority
NL
Netherlands
Prior art keywords
unit
gesture
expression
module
data
Prior art date
Application number
NL2020224A
Other languages
Dutch (nl)
Other versions
NL2020224A (en)
Inventor
Zhu Xuan
Original Assignee
Zhuhai Hengqin Qi Xiang Tech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Hengqin Qi Xiang Tech Co Ltd
Publication of NL2020224A
Application granted
Publication of NL2020224B1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The invention belongs to the field of robots, and in particular relates to an intelligent robot. Known intelligent robots cannot automatically adjust their height based on human height, accurately identify human expression and gesture, or automatically match appropriate expression and gesture for interaction. The invention provides an intelligent robot, comprising a bottom base and a lower torso welded on the top of the bottom base, wherein, an upper torso is formed right above the lower torso; the lower torso is mounted with a human sensing unit by bolts; a first placement cavity is formed on the lower torso; and the bottom inner wall of the first placement cavity is mounted with a first push rod motor. The invention can automatically adjust its height based on human height so as to accurately identify human expression and gesture, and automatically match appropriate expression and gesture for interaction. The invention offers high intelligence, a simple structure and convenient usage.

Description

FIELD
The present invention relates to the technical field of robots, in particular to an intelligent robot.
BACKGROUND
As technology develops, intelligent robots have attracted increasing attention and research and development effort; as they quickly become part of our work and life with increasingly widespread applications, ever higher requirements are placed on them.
Patent 201510955745.8 discloses an intelligent robot intended to improve on the intelligence of intelligent robots in the prior art. However, it is still of low intelligence: it cannot automatically adjust its height based on human height, accurately identify human expression and gesture, or automatically match appropriate expression and gesture for interaction.
Patent 201510339278.6 discloses an intelligent robot capable of simulating human walking, attracting the attention of children, and freely avoiding obstacles and walking within a certain range; in addition, it can play learning files to raise children's interest in learning. However, it too is of poor intelligence: it cannot automatically adjust its height based on human height, accurately identify human expression and gesture, or automatically match appropriate expression and gesture for interaction.
SUMMARY
The present invention provides an intelligent robot to solve the problem that intelligent robots in the prior art are of poor intelligence: they cannot automatically adjust their height based on human height, accurately identify human expression and gesture, or automatically match appropriate expression and gesture for interaction.
To achieve the above object, the present invention provides the following technical scheme:
An intelligent robot comprises a bottom base and a lower torso welded on the top of the bottom base; an upper torso is formed right above the lower torso; the lower torso is mounted with a human sensing unit by bolts; a first placement cavity is formed on the lower torso; the bottom inner wall of the first placement cavity is mounted with a first push rod motor by bolts; the output shaft of the first push rod motor is welded on the bottom of the upper torso; the upper torso is mounted with a gesture identification unit by bolts; both sides of the upper torso are flexibly mounted with an arm; a top base is mounted right above the upper torso; a second placement cavity is formed on the upper torso; the bottom inner wall of the second placement cavity is mounted with a second push rod motor by bolts; the output shaft of the second push rod motor is welded on the bottom of the top base; a head is flexibly arranged on the top of the top base, wherein, the head is mounted with an expression identification unit and a display unit by bolts;
The human sensing unit, the gesture identification unit and the expression identification unit form a sensing identification module; the sensing identification module is connected to a matching module and a data processing module respectively; the matching module is connected to multiple databases, a retrieving module and the data processing module respectively; the retrieving module is connected to the multiple databases, an execution module and the data processing module respectively; the data processing module is connected to a driver module and the multiple databases respectively; the driver module is connected to the first push rod motor and the second push rod motor respectively; and the execution module is connected to the arm and the display unit respectively.
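Traced end to end, the connections in this paragraph form a small signal-flow graph. A minimal Python sketch of the topology follows; the module and component names are the patent's, while the adjacency-map form is our own illustration:

    # Connection topology of the intelligent robot's modules, as described
    # above. Names follow the patent; the dictionary form is illustrative.
    CONNECTIONS = {
        "sensing identification module": ["matching module", "data processing module"],
        "matching module": ["multiple databases", "retrieving module", "data processing module"],
        "retrieving module": ["multiple databases", "execution module", "data processing module"],
        "data processing module": ["driver module", "multiple databases"],
        "driver module": ["first push rod motor", "second push rod motor"],
        "execution module": ["arm", "display unit"],
    }

    for source, targets in CONNECTIONS.items():
        print(f"{source} -> {', '.join(targets)}")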
Preferably, a first through hole connected to the first placement cavity is formed on the top of the lower torso, and the output shaft of the first push rod motor is mounted in the first through hole in a sliding manner.
Preferably, a second through hole connected to the second placement cavity is formed on the top of the upper torso, and the output shaft of the second push rod motor is mounted in the second through hole in a sliding manner.
Preferably, the human sensing unit is used for human sensing, and sends signals to the data processing module; the gesture identification unit is used for gesture identification, and transmits identification results to the matching module; and the expression identification unit is used for expression identification, and transmits identification results to the matching module.
Preferably, the matching module comprises an expression matching unit and a gesture matching unit, wherein, the expression matching unit and the gesture matching unit are connected to the expression identification unit and the gesture identification unit respectively; the expression matching unit is used for matching expression data in the multiple databases based on identification results of the expression identification unit, and transmitting matching results to the retrieving module; and the gesture matching unit is used for matching gesture data in the multiple databases based on identification results of the gesture identification unit, and transmitting matching results to the retrieving module.
Preferably, the retrieving module comprises an expression retrieve unit and a gesture retrieve unit, wherein, the expression retrieve unit and the gesture retrieve unit are connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used for retrieving expression data in the multiple databases based on matching results of the expression matching unit, and transmitting retrieved expression data to the execution module; the gesture retrieve unit is used for retrieving gesture data in the multiple databases based on matching results of the gesture matching unit, and transmitting the retrieved gesture data to the execution module.
Preferably, the execution module comprises an expression executing unit and a gesture executing unit, wherein, the expression executing unit and the gesture executing unit are connected to the expression retrieve unit and the gesture retrieve unit respectively, and the expression executing unit and the gesture executing unit are connected to the display unit and the arm respectively; the expression executing unit is used for controlling the display unit to simulate corresponding expression based on expression data retrieved by the expression retrieve unit; and the gesture executing unit is used for controlling the arm to simulate corresponding gesture based on gesture data retrieved by the gesture retrieve unit.
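Taken together, the matching, retrieving and executing units form a three-stage lookup pipeline: check the identification result against a library, look up the paired response, then drive an actuator. A minimal Python sketch, with made-up gesture data, since the patent defines the modules but not any data format:

    # Hypothetical library contents; the patent does not specify how
    # expressions or gestures are encoded, so strings stand in for entries.
    GESTURE_LIBRARY = {"wave", "thumbs_up"}
    CORRESPONDING_GESTURE_LIBRARY = {"wave": "wave_back", "thumbs_up": "nod"}

    def match(identified: str, library: set[str]) -> str | None:
        """Matching unit: check an identification result against a library."""
        return identified if identified in library else None

    def retrieve(matched: str, corresponding: dict[str, str]) -> str:
        """Retrieve unit: look up the response in the corresponding library."""
        return corresponding[matched]

    def execute(response: str, actuator: str) -> None:
        """Executing unit: command the arm or the display unit (stubbed)."""
        print(f"{actuator} simulates: {response}")

    matched = match("wave", GESTURE_LIBRARY)
    if matched is not None:
        execute(retrieve(matched, CORRESPONDING_GESTURE_LIBRARY), "arm")

The expression path is identical, with the display unit in place of the arm.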
Preferably, the driver module comprises a driving circuit, a first switch circuit and a second switch circuit, wherein, the driving circuit, the first switch circuit and the second switch circuit are all connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor and the second push rod motor respectively; and the driving circuit is used for driving the first push rod motor and the second push rod motor for operation.
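One plausible reading of this arrangement is a single driving circuit gated by two independent switch circuits, so that only a motor whose switch is closed actually moves. A sketch under that assumption (illustrative only; the patent names the circuits but gives no implementation):

    class DriverModule:
        """Illustrative driver module: one driving circuit drives whichever
        push rod motor's switch circuit is currently closed."""

        def __init__(self) -> None:
            self.first_switch_closed = False   # gates the first push rod motor (upper torso)
            self.second_switch_closed = False  # gates the second push rod motor (head)

        def drive(self, extend: bool) -> list[str]:
            """Run the driving circuit; only gated-in motors respond."""
            motion = "extends" if extend else "retracts"
            actions = []
            if self.first_switch_closed:
                actions.append(f"first push rod motor {motion}: upper torso moves")
            if self.second_switch_closed:
                actions.append(f"second push rod motor {motion}: head moves")
            return actions

    driver = DriverModule()
    driver.second_switch_closed = True   # the data processing module closes a switch
    print(driver.drive(extend=True))     # only the head moves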
Preferably, the multiple databases comprise a corresponding expression library, an expression library, a corresponding gesture library and a gesture library, wherein, the expression library and the gesture library are connected to the matching module; the corresponding expression library and the corresponding gesture library are connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library; and gesture data in the corresponding gesture library correspond to gesture data in the gesture library.
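The pairing of the four libraries amounts to a lookup invariant: every entry of a corresponding library is keyed by an entry of its identification library. A sketch with hypothetical contents (the patent specifies the libraries and their correspondence, not what they hold):

    # All entries are invented for illustration.
    EXPRESSION_LIBRARY = {"smile", "frown"}
    CORRESPONDING_EXPRESSION_LIBRARY = {"smile": "smile_back", "frown": "concerned_look"}
    GESTURE_LIBRARY = {"wave", "thumbs_up"}
    CORRESPONDING_GESTURE_LIBRARY = {"wave": "wave_back", "thumbs_up": "nod"}

    # The stated correspondence, expressed as an invariant over each pair
    # (set(d) is the set of a dict's keys).
    assert set(CORRESPONDING_EXPRESSION_LIBRARY) <= EXPRESSION_LIBRARY
    assert set(CORRESPONDING_GESTURE_LIBRARY) <= GESTURE_LIBRARY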
Preferably, the data processing module is used for controlling operation of the driver module based on the sensing signals of the human sensing unit, and the data processing module is used for driving and controlling the sensing identification module, the matching module, the retrieving module and the execution module.
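This control role can be illustrated as a small decision rule: drive the relevant push rod motor until the identification unit faces the sensed person. The heights and tolerance below are assumptions for illustration; the patent states the goal (accurate identification) rather than any numeric criterion:

    def height_command(person_height_cm: float, unit_height_cm: float,
                       tolerance_cm: float = 5.0) -> str:
        """Decide how to drive a push rod motor so that an identification
        unit ends up at a usable height. All numbers are illustrative."""
        if abs(unit_height_cm - person_height_cm) <= tolerance_cm:
            return "open switch circuit: unit is at a usable height"
        if unit_height_cm < person_height_cm:
            return "close switch circuit and extend the push rod motor"
        return "close switch circuit and retract the push rod motor"

    print(height_command(person_height_cm=170.0, unit_height_cm=150.0))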
Compared with the prior art, the present disclosure has the advantages that:
1. Through the human sensing unit, the data processing module, the driver module, the first push rod motor and the second push rod motor, height of the gesture identification unit and the expression identification unit can be automatically adjusted so as to correctly identify human expression and gesture;
2. Through the gesture identification unit, the expression identification unit, the matching module, the retrieving module and the execution module, appropriate expression and gesture can be automatically matched for interaction to realize high intelligence.
The present invention can automatically adjust height based on human height so as to accurately identify human expression and gesture, and automatically match appropriate expression and gesture for interaction. The present invention has the advantages of high intelligence, simple structure and convenient usage.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a structural diagram of the intelligent robot according to the present invention;
FIG. 2 is a sectional structural diagram of the intelligent robot according to the present invention;
FIG. 3 is a block diagram for working principles of the intelligent robot according to the present invention;
FIG. 4 is a block diagram for working principles of the sensing identification module of the intelligent robot according to the present invention;
FIG. 5 is a block diagram for working principles of the matching module of the intelligent robot according to the present invention;
FIG. 6 is a block diagram for working principles of the retrieving module of the intelligent robot according to the present invention;
FIG. 7 is a block diagram for working principles of the execution module of the intelligent robot according to the present invention;
FIG. 8 is a block diagram for working principles of the driver module of the intelligent robot according to the present invention;
FIG. 9 is a block diagram for working principles of the multiple databases of the intelligent robot according to the present invention.
In the drawings: 1 bottom base, 2 lower torso, 3 upper torso, 4 first placement cavity, 5 first push rod motor, 6 first through hole, 7 top base, 8 second placement cavity, 9 second push rod motor, 10 second through hole, 11 gesture identification unit, 12 head, 13 expression identification unit.

EMBODIMENTS
The following clearly and comprehensively describes the technical scheme according to the embodiments of the present invention in combination with the drawings. Apparently, the embodiments in the following description are merely a part rather than all of the embodiments of the present invention.
Referring to FIGS. 1-9, an intelligent robot comprises a bottom base 1 and a lower torso 2 welded on the top of the bottom base 1, wherein, an upper torso 3 is formed right above the lower torso 2; the lower torso 2 is mounted with a human sensing unit by bolts; a first placement cavity 4 is formed on the lower torso 2; the bottom inner wall of the first placement cavity 4 is mounted with a first push rod motor 5 by bolts; the output shaft of the first push rod motor 5 is welded on the bottom of the upper torso 3; the upper torso 3 is mounted with a gesture identification unit 11 by bolts; both sides of the upper torso 3 are flexibly mounted with an arm; a top base 7 is formed right above the upper torso 3; a second placement cavity 8 is formed on the upper torso 3; the bottom inner wall of the second placement cavity 8 is mounted with a second push rod motor 9 by bolts; the output shaft of the second push rod motor 9 is welded on the bottom of the top base 7; a head 12 is flexibly mounted on the top of the top base 7, wherein, the head 12 is mounted with an expression identification unit 13 and a display unit by bolts.
The human sensing unit, the gesture identification unit 11 and the expression identification unit 13 form a sensing identification module, wherein, the sensing identification module is connected to a matching module and a data processing module respectively; the matching module is connected to the multiple databases, a retrieving module and the data processing module respectively; the retrieving module is connected to the multiple databases, an execution module and the data processing module respectively; the data processing module is connected to a driver module and the multiple databases respectively; the driver module is connected to the first push rod motor 5 and the second push rod motor 9 respectively; and the execution module is connected to the arm and the display unit respectively.
In this embodiment, after sensing a human body, the human sensing unit sends signals to the data processing module; the data processing module controls operation of the driving circuit and the opening or closing of the first switch circuit and the second switch circuit. When the first switch circuit is closed, the driving circuit controls operation of the first push rod motor 5 and adjusts the height of the upper torso 3, so as to adjust the height of the gesture identification unit 11 to the extent that the gesture identification unit 11 can accurately identify gesture; when the second switch circuit is closed, the driving circuit controls operation of the second push rod motor 9 and adjusts the height of the head 12, so as to adjust the height of the expression identification unit 13 to the extent that the expression identification unit 13 can accurately identify expression. The expression identification unit 13 and the gesture identification unit 11 identify human expression and gesture, and transmit identification results to the expression matching unit and the gesture matching unit respectively; the expression matching unit matches expression data in the expression library according to identification results of the expression identification unit 13, and transmits matching results to the expression retrieve unit; the gesture matching unit matches gesture data in the gesture library based on identification results of the gesture identification unit 11, and transmits matching results to the gesture retrieve unit; the expression retrieve unit retrieves expression data in the corresponding expression library based on matching results of the expression matching unit, and transmits the retrieved expression data to the expression executing unit; the gesture retrieve unit retrieves gesture data in the corresponding gesture library based on matching results of the gesture matching unit, and transmits the retrieved gesture data to the gesture executing unit; the expression executing unit controls the display unit to simulate the corresponding expression based on expression data retrieved by the expression retrieve unit; and the gesture executing unit controls the arm to simulate the corresponding gesture based on gesture data retrieved by the gesture retrieve unit, thus completing the interaction.
In this embodiment, a first through hole 6 connected to the first placement cavity 4 is formed on the top of the lower torso 2, and the output shaft of the first push rod motor 5 is mounted in the first through hole 6 in a sliding manner; a second through hole 10 connected to the second placement cavity 8 is formed on the top of the upper torso 3, and the output shaft of the second push rod motor 9 is mounted in the second through hole 10 in a sliding manner; the human sensing unit is used for human sensing, and sends signals to the data processing module; the gesture identification unit 11 is used for gesture identification, and transmits identification results to the matching module; the expression identification unit 13 is used for expression identification, and transmits identification results to the matching module; the matching module comprises an expression matching unit and a gesture matching unit, wherein, the expression matching unit and the gesture matching unit are connected to the expression identification unit 13 and the gesture identification unit 11 respectively; the expression matching unit is used for matching expression data in the multiple databases based on identification results of the expression identification unit 13, and transmitting matching results to the retrieving module; the gesture matching unit is used for matching gesture data in the multiple databases based on identification results of the gesture identification unit 11, and transmitting matching results to the retrieving module; the retrieving module comprises an expression retrieve unit and a gesture retrieve unit, wherein, the expression retrieve unit and the gesture retrieve unit are connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used for retrieving expression data in the multiple databases based on matching results of the expression matching unit, and transmitting the retrieved expression data to the execution module; the gesture retrieve unit is used for retrieving gesture data in the multiple databases based on matching results of the gesture matching unit, and transmitting the retrieved gesture data to the execution module; the execution module comprises an expression executing unit and a gesture executing unit, wherein, the expression executing unit and the gesture executing unit are connected to the expression retrieve unit and the gesture retrieve unit respectively; the expression executing unit and the gesture executing unit are connected to the display unit and the arm respectively; the expression executing unit is used for controlling the display unit to simulate the corresponding expression based on expression data retrieved by the expression retrieve unit; the gesture executing unit is used for controlling the arm to simulate the corresponding gesture based on gesture data retrieved by the gesture retrieve unit; the driver module comprises a driving circuit, a first switch circuit and a second switch circuit, wherein, the driving circuit, the first switch circuit and the second switch circuit are connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor 5 and the second push rod motor 9 respectively; and the driving circuit is used for driving operation of the first push rod motor 5 and the second push rod motor 9; the multiple databases comprise a corresponding expression library, an expression library, a corresponding gesture library and a gesture library, wherein, the expression library and the gesture library are connected to the matching module; the corresponding expression library and the corresponding gesture library are connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library, and gesture data in the corresponding gesture library correspond to gesture data in the gesture library; the data processing module is used for controlling operation of the driver module based on sensing signals of the human sensing unit, and the data processing module is used for driving and controlling the sensing identification module, the matching module, the retrieving module and the execution module.
Compared with the prior art, this embodiment has the advantages that: through the human sensing unit, the data processing module, the driver module, the first push rod motor 5 and the second push rod motor 9, the height of the gesture identification unit 11 and the expression identification unit 13 can be automatically adjusted so as to accurately identify human expression and gesture; through the gesture identification unit 11, the expression identification unit 13, the matching module, the retrieving module and the execution module, appropriate expression and gesture can be automatically matched for interaction, thus realizing high intelligence. The present invention can automatically adjust height based on human height, so as to accurately identify human expression and gesture, and automatically match appropriate expression and gesture for interaction. Therefore, it has the advantages of high intelligence, simple structure and convenient usage.
The above embodiments are merely preferred embodiments of the present invention, and should not be used to limit the present invention in any way. Equivalent substitutions or modifications made by those skilled in the art in accordance with the technical scheme and ideas of the present disclosure within the disclosed technical scope shall fall within the protection scope of the present invention.

Claims (10)

1. An intelligent robot, comprising a bottom base (1) and a lower torso (2) welded on the top of the bottom base (1), wherein an upper torso (3) is provided right above the lower torso (2); the lower torso (2) is fixedly mounted with a human sensing unit by bolts, and a first placement cavity (4) is formed in the lower torso (2); the bottom inner wall of the first placement cavity (4) is fixedly mounted with a first push rod motor (5) by bolts, and the output shaft of the first push rod motor (5) is welded on the bottom of the upper torso (3); the upper torso (3) is fixedly mounted with a gesture identification unit (11) by bolts, both sides of the upper torso (3) are movably provided with an arm, and a top base (7) is provided right above the upper torso (3); a second placement cavity (8) is formed in the upper torso (3), and the bottom inner wall of the second placement cavity (8) is fixedly mounted with a second push rod motor (9) by bolts; the output shaft of the second push rod motor (9) is welded on the bottom of the top base (7), a head (12) is movably provided on the top of the top base (7), and the head (12) is fixedly mounted with an expression identification unit (13) and a display unit by bolts; the human sensing unit, the gesture identification unit (11) and the expression identification unit (13) form a sensing identification module, the sensing identification module being connected to a matching module and a data processing module respectively; the matching module is connected to a database group, a retrieving module and the data processing module respectively; the retrieving module is connected to the database group, an execution module and the data processing module respectively, and the data processing module is connected to a driver module and the database group respectively; the driver module is connected to the first push rod motor (5) and the second push rod motor (9) respectively, and the execution module is connected to the arm and the display unit respectively.
2. The intelligent robot according to claim 1, wherein a first through hole (6) communicating with the first placement cavity (4) is formed on the top of the lower torso (2), and the output shaft of the first push rod motor (5) is mounted in the first through hole (6) in a sliding manner.
3. The intelligent robot according to claim 1, wherein a second through hole (10) communicating with the second placement cavity (8) is formed on the top of the upper torso (3), and the output shaft of the second push rod motor (9) is mounted in the second through hole (10) in a sliding manner.
4. The intelligent robot according to claim 1, wherein the human sensing unit is used for sensing the human body and then sending a signal to the data processing module; the gesture identification unit (11) is used for identifying the gesture and then transmitting the identification result to the matching module; and the expression identification unit (13) is used for identifying the expression and then transmitting the identification result to the matching module.
5. The intelligent robot according to claim 4, wherein the matching module comprises an expression matching unit and a gesture matching unit, the expression matching unit and the gesture matching unit being connected to the expression identification unit (13) and the gesture identification unit (11) respectively; the expression matching unit is used for matching expression data in the database group based on the identification result of the expression identification unit (13), and then transmitting the matching result to the retrieving module; the gesture matching unit is used for matching gesture data in the database group based on the identification result of the gesture identification unit (11), and then transmitting the matching result to the retrieving module.
6. The intelligent robot according to claim 5, wherein the retrieving module comprises an expression retrieve unit and a gesture retrieve unit, the expression retrieve unit and the gesture retrieve unit being connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used for retrieving expression data from the database group based on the matching result of the expression matching unit, and then transmitting the retrieved expression data to the execution module; the gesture retrieve unit is used for retrieving gesture data from the database group based on the matching result of the gesture matching unit, and then transmitting the retrieved gesture data to the execution module.
7. The intelligent robot according to any one of claims 1-6, wherein the execution module comprises an expression executing unit and a gesture executing unit, the expression executing unit and the gesture executing unit being connected to the expression retrieve unit and the gesture retrieve unit respectively, and the expression executing unit and the gesture executing unit being connected to the display unit and the arm respectively; the expression executing unit is used for controlling the display unit to simulate the corresponding expression based on the expression data retrieved by the expression retrieve unit; the gesture executing unit is used for controlling the arm to simulate the corresponding gesture based on the gesture data retrieved by the gesture retrieve unit.
8. The intelligent robot according to claim 1, wherein the driver module comprises a driving circuit, a first switch circuit and a second switch circuit, the driving circuit, the first switch circuit and the second switch circuit all being connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor (5) and the second push rod motor (9) respectively, and the driving circuit is used for driving the first push rod motor (5) and the second push rod motor (9) to operate.
9. The intelligent robot according to claim 1, wherein the database group comprises a corresponding expression library, an expression library, a corresponding gesture library and a gesture library, the expression library and the gesture library both being connected to the matching module, and the corresponding expression library and the corresponding gesture library both being connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library, and gesture data in the corresponding gesture library correspond to gesture data in the gesture library.
10. The intelligent robot according to claim 1, wherein the data processing module is used for controlling the driver module to operate based on a detection signal of the human sensing unit, and the data processing module is used for driving and controlling the sensing identification module, the matching module, the retrieving module and the execution module respectively.
NL2020224A 2017-01-05 2018-01-02 Intelligent Robot NL2020224B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710007531.7A CN106737745A (en) 2017-01-05 2017-01-05 Intelligent robot

Publications (2)

Publication Number Publication Date
NL2020224A (en) 2018-07-23
NL2020224B1 (en) 2018-10-10

Family

ID=58950318

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2020224A NL2020224B1 (en) 2017-01-05 2018-01-02 Intelligent Robot

Country Status (2)

Country Link
CN (1) CN106737745A (en)
NL (1) NL2020224B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108406782A (en) * 2018-05-29 2018-08-17 朱晓丹 A kind of financial counseling intelligent robot easy to use
CN109920347B (en) * 2019-03-05 2020-12-04 重庆大学 Motion or expression simulation device and method based on magnetic liquid
CN114260916B (en) * 2022-01-05 2024-02-27 森家展览展示如皋有限公司 Interactive exhibition intelligent robot

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6604021B2 (en) * 2001-06-21 2003-08-05 Advanced Telecommunications Research Institute International Communication robot
US9014848B2 (en) * 2010-05-20 2015-04-21 Irobot Corporation Mobile robot system
CN202315292U (en) * 2011-11-11 2012-07-11 山东科技大学 Comprehensive greeting robot based on smart phone interaction
EP2933067B1 (en) * 2014-04-17 2019-09-18 Softbank Robotics Europe Method of performing multi-modal dialogue between a humanoid robot and user, computer program product and humanoid robot for implementing said method
FR3021891A1 (en) * 2014-06-05 2015-12-11 Aldebaran Robotics DEVICE FOR REMOVABLE PREPOSITIONING AND FASTENING OF ARTICULATED MEMBERS OF A HUMANOID ROBOT
CN104102346A (en) * 2014-07-01 2014-10-15 华中科技大学 Household information acquisition and user emotion recognition equipment and working method thereof
CN105563493A (en) * 2016-02-01 2016-05-11 昆山市工业技术研究院有限责任公司 Height and direction adaptive service robot and adaptive method
CN205594506U (en) * 2016-04-12 2016-09-21 精效新软新技术(北京)有限公司 Human -computer interaction device among intelligence work systems
CN205651333U (en) * 2016-04-21 2016-10-19 深圳市笑泽子智能机器人有限公司 Guest -meeting robot

Also Published As

Publication number Publication date
NL2020224A (en) 2018-07-23
CN106737745A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
NL2020224B1 (en) Intelligent Robot
Li et al. Complicated robot activity recognition by quality-aware deep reinforcement learning
CN107116553B (en) Mechanical arm operation method and device
CN110147091B (en) Robot motion control method and device and robot
US20150283703A1 (en) Apparatus and methods for remotely controlling robotic devices
KR20180013757A (en) Using human motion sensors to detect movement when in the vicinity of hydraulic robots
CN108524187B (en) six-degree-of-freedom upper limb rehabilitation robot control system
Chen et al. Controlling a robot using leap motion
CN205068294U (en) Human -computer interaction of robot device
CN207380482U (en) Intelligent interaction service robot
CN105511400A (en) Control system of stamping robots
CN110877334A (en) Method and apparatus for robot control
Cheng et al. Human-robot interaction method combining human pose estimation and motion intention recognition
CN111331603B (en) Stress type motion posture conversion method and system for wheel-legged robot
Sreekar et al. Positioning the 5-DOF robotic arm using single stage deep CNN model
US20220379469A1 (en) Massage motion control method, robot controller using the same, and computer readable storage medium
CN104656676A (en) Hand, leg and eye servo control device and method for humanoid robot
CN107263539A (en) Jerk robot
CN203109954U (en) Minimum amplitude control device at operation tail end of mechanical arm
Patil et al. Design and implementation of gesture controlled robot with a robotic arm
Ji et al. Improving teleoperation through human-aware haptic feedback: a distinguishable and interpretable physical interaction based on the contact state
Pallavan et al. VOICE CONTROLLED ROBOT WITH REAL TIME BARRIER DETECTION AND AVERTING
Song et al. The development of interface device for human robot interaction
Maheswari et al. Voice Controlled Robot Using Bluetooth Module
Ravipati et al. Real-time gesture recognition and robot control through blob tracking

Legal Events

Date Code Title Description
MM Lapsed because of non-payment of the annual fee

Effective date: 20210201