CN113370229A - Exhibition hall intelligent explanation robot and implementation method - Google Patents
- Publication number: CN113370229A
- Application number: CN202110636109.4A
- Authority: CN (China)
- Prior art keywords: exhibition, explanation, module, exhibition hall, map
- Prior art date: 2021-06-08
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by motion, path, trajectory planning
- B25J11/00—Manipulators not otherwise provided for
- B25J11/008—Manipulators for service tasks
- B25J13/00—Controls for manipulators
- B25J13/003—Controls for manipulators by means of an audio-responsive input
Abstract
The invention discloses an exhibition hall intelligent explanation robot and an implementation method thereof, belonging to the field of intelligent robots. It aims to solve the technical problem of how to guide and explain to visitors in place of manual docents or a fixed multimedia system, that is, to provide users with services such as voice interaction, voice explanation, video playing and navigation guidance, and to improve the visiting experience. The technical scheme is as follows: the robot comprises a robot body on which an exhibition booth material management module, a map management module, a navigation control module, a voice synthesis module, a voice interaction module and an exhibition booth explanation module are deployed based on the ROS (Robot Operating System); the exhibition booth material management module stores the material information to be explained at each booth. In the implementation method, the robot body scans the exhibition hall area with a lidar sensor to build a map, establishes a navigation point for each booth, and performs path planning and indoor navigation to a designated booth, thereby providing users with voice interaction, navigation guidance, voice explanation and video playing services.
Description
Technical Field
The invention relates to the technical field of intelligent robots, in particular to an exhibition hall intelligent explanation robot and an implementation method.
Background
At present, when visitors tour an exhibition hall, two approaches are generally used to deepen their knowledge of the hall's theme: a docent accompanies the visitors to guide and explain, or a multimedia system is installed at a fixed position in each exhibition area. The former requires docents to be trained, which consumes a great deal of manpower, material resources and time, and the forms of guidance and explanation are limited; the latter requires one multimedia system per exhibition area, so the construction, operation and maintenance costs are high.
Therefore, how to guide and explain to visitors in place of manual docents or a multimedia system, that is, to provide users with services such as voice interaction, voice explanation, video playing and navigation guidance, and to improve the visiting experience, is a technical problem to be solved urgently.
Disclosure of Invention
The technical task of the invention is to provide an exhibition hall intelligent explanation robot and an implementation method thereof, so as to solve the problem of how to guide and explain to visitors in place of manual docents or a multimedia system, that is, to provide users with services such as voice interaction, voice explanation, video playing and navigation guidance, and to improve the visiting experience.
The technical task of the invention is achieved as follows. The exhibition hall intelligent explanation robot comprises a robot body on which an exhibition booth material management module, a map management module, a navigation control module, a voice synthesis module, a voice interaction module and an exhibition booth explanation module are deployed based on the ROS (Robot Operating System);
the exhibition booth material management module is used for storing the material information to be explained at each booth;
the map management module is used for controlling the robot body to scan and map the exhibition area and to calibrate the specific position of each booth;
the navigation control module is used for planning a navigation path to a target point and controlling the robot body to move to the target point along the planned path;
the voice synthesis module is used for synthesizing text information into an audio file for broadcasting;
the voice interaction module is used for voice conversation between the user and the robot body;
the exhibition booth explanation module is used for automatically executing explanation tasks.
Preferably, the material information types managed by the exhibition booth material management module comprise video, pictures and text;
the voice interaction module triggers the voice interaction entrance through keywords, and the user issues control instructions by voice.
Preferably, the robot body is provided with a lidar sensor; the whole exhibition hall area is scanned by the lidar sensor, a grid map is established, and the position of each booth is set on the map.
Preferably, the map management module is provided with a SLAM map construction service sub-module, which controls the operation of the lidar sensor and collects its scan data; the sub-module constructs a two-dimensional grid map based on a Rao-Blackwellized particle filter algorithm.
In the implementation method, the robot body scans the exhibition hall area with a lidar sensor to build a map, establishes a navigation point for each booth, and performs path planning and indoor navigation to a designated booth, thereby providing users with voice interaction, navigation guidance, voice explanation and video playing services; the method comprises the following specific steps:
the robot body controls the lidar sensor to operate, scans the whole exhibition hall area, establishes a grid map, sets the position of each booth on the map, and completes the construction of the exhibition hall map;
the material information to be explained for each booth is uploaded and stored against the corresponding booth number; the material information types comprise video, pictures and text;
the robot's default explanation task is set and the explanation order of the booths is specified.
Preferably, the exhibition hall map is obtained by starting the SLAM map construction service: the robot body controls the lidar sensor to operate and collects its scan data;
the SLAM map construction service establishes a two-dimensional grid map based on a Rao-Blackwellized particle filter algorithm.
Preferably, setting the robot's default explanation task and specifying the explanation order of the booths proceeds as follows:
the user sets an explanation task by touch operation on the terminal screen of the robot body; explanation in a preset booth order or explanation of selected, designated booths is supported;
the exhibition booth explanation module loads the first booth point according to the explanation task, acquires the specific position information of that booth, and sends the position to the navigation control module;
the navigation control module plans a navigation path, controls the robot body to advance to the target point, and notifies the exhibition booth explanation module upon arrival;
the exhibition booth explanation module monitors the arrival information and retrieves the explanation data corresponding to the booth;
after the explanation of the current booth is completed, the exhibition booth explanation module loads the next booth point and continues with its content.
Preferably, when the exhibition booth explanation module monitors the arrival information and retrieves the explanation data corresponding to the booth, the following cases are handled:
for video, the video is played on the terminal screen of the robot body;
for picture files, the pictures are displayed on the terminal screen of the robot body in a polling (slideshow) manner;
for text information, the voice synthesis module is called to convert the text into an audio file and broadcast the voice content.
Preferably, during the execution of the default explanation task in the specified booth order, the voice interaction module collects external sound and performs speech recognition; after recognizing a designated keyword, it enters the voice interaction state, suspends the current explanation task, and recognizes and executes the user's instruction;
the user instructions include going to a designated booth, ending the explanation, and continuing the explanation.
A computer-readable storage medium has a computer program stored thereon, and the computer program is executable by a processor to implement the exhibition hall intelligent explanation implementation method described above.
The exhibition hall intelligent explanation robot and the implementation method of the invention have the following advantages:
a service robot is applied to exhibition services; the robot builds a map of the exhibition hall area, creates a navigation point for each booth, performs path planning and indoor navigation to the designated booth, and provides users with services such as voice interaction, navigation guidance, voice explanation and video playing, bringing a better visiting experience and replacing manual work to the greatest extent.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a block diagram of an intelligent explanation robot in an exhibition hall.
Detailed Description
The exhibition hall intelligent explanation robot and the implementation method are described in detail below with reference to the accompanying drawings and specific embodiments.
Example 1:
as shown in fig. 1, the exhibition hall intelligent explanation robot comprises a robot body on which an exhibition booth material management module, a map management module, a navigation control module, a voice synthesis module, a voice interaction module and an exhibition booth explanation module are deployed based on the ROS (Robot Operating System);
the exhibition booth material management module is used for storing the material information to be explained at each booth; the material information types comprise video, pictures and text;
the map management module is used for controlling the robot body to scan and map the exhibition area and to calibrate the specific position of each booth;
the navigation control module is used for planning a navigation path to a target point and controlling the robot body to move to the target point along the planned path;
the voice synthesis module is used for synthesizing text information into an audio file for broadcasting;
the voice interaction module is used for voice conversation between the user and the robot body; it triggers the voice interaction entrance through keywords, and the user issues control instructions by voice;
the exhibition booth explanation module is used for automatically executing explanation tasks.
The robot body in this embodiment is provided with a lidar sensor; the whole exhibition hall area is scanned by the lidar sensor, a grid map is established, and the position of each booth is set on the map.
The map management module in this embodiment comprises a SLAM map construction service sub-module, which controls the operation of the lidar sensor and collects its scan data; the sub-module constructs a two-dimensional grid map based on a Rao-Blackwellized particle filter algorithm.
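The grid map with calibrated booth positions described above can be sketched as a simple data structure. The sketch below is a minimal illustration under assumed names (`ExhibitionMap`, `add_booth`); it is not the patent's actual implementation:

```python
# Minimal sketch of a 2D occupancy grid with calibrated booth navigation
# points, as the map management module might store them. All names here
# are illustrative assumptions.

class ExhibitionMap:
    FREE, OCCUPIED, UNKNOWN = 0, 1, -1

    def __init__(self, width, height, resolution=0.05):
        # resolution: metres per grid cell (0.05 m is a common default)
        self.resolution = resolution
        self.grid = [[self.UNKNOWN] * width for _ in range(height)]
        self.booths = {}  # booth number -> (x, y) in metres

    def mark_cell(self, col, row, occupied):
        # Update one cell from a lidar scan result.
        self.grid[row][col] = self.OCCUPIED if occupied else self.FREE

    def add_booth(self, number, x, y):
        # Calibrate a booth's navigation point in world coordinates.
        self.booths[number] = (x, y)

    def booth_cell(self, number):
        # Convert a booth's world position to grid indices.
        x, y = self.booths[number]
        return int(x / self.resolution), int(y / self.resolution)


m = ExhibitionMap(200, 100)
m.mark_cell(10, 5, occupied=False)
m.add_booth(1, x=2.5, y=1.0)
print(m.booth_cell(1))  # booth 1 at (2.5 m, 1.0 m) -> cell (50, 20)
```

In a real deployment the grid itself would come from the SLAM service rather than manual `mark_cell` calls; only the booth calibration step is interactive.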
Example 2:
the invention relates to a method for realizing intelligent explanation in an exhibition hall: the robot body scans the exhibition hall area with a lidar sensor to build a map, establishes a navigation point for each booth, performs path planning and indoor navigation to a designated booth, and provides users with voice interaction, navigation guidance, voice explanation and video playing services; the method comprises the following specific steps:
s1, the robot body controls the lidar sensor to operate, scans the whole exhibition hall area, establishes a grid map, sets the position of each booth on the map, and completes the construction of the exhibition hall map;
s2, the material information to be explained for each booth is uploaded and stored against the corresponding booth number; the material information types comprise video, pictures and text;
and S3, the robot's default explanation task is set and the explanation order of the booths is specified.
In this embodiment, the exhibition hall map of step S1 is obtained by starting the SLAM map construction service: the robot body controls the lidar sensor to operate and collects its scan data;
the SLAM map construction service establishes a two-dimensional grid map based on a Rao-Blackwellized particle filter algorithm.
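The Rao-Blackwellized particle filter underlying this SLAM service keeps a set of weighted trajectory hypotheses, each carrying its own map, and periodically resamples them by weight. The sketch below shows only the generic low-variance (systematic) resampling step, with assumed names; scan matching and per-particle map updates are omitted:

```python
import random

def low_variance_resample(particles, weights, rng=random):
    # Systematic (low-variance) resampling as used in Rao-Blackwellized
    # particle filters: draw one random base offset, then select particles
    # at evenly spaced points of the cumulative weight distribution.
    n = len(particles)
    total = sum(weights)
    step = total / n
    r = rng.uniform(0.0, step)
    out, c, i = [], weights[0], 0
    for k in range(n):
        u = r + k * step
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out

# Heavy particles are duplicated roughly in proportion to their weight;
# a zero-weight particle is (almost surely) dropped.
res = low_variance_resample(["a", "b", "c"], [0.0, 0.9, 0.1],
                            rng=random.Random(0))
print(res)
```

In gmapping-style SLAM each "particle" would be a full robot trajectory plus its occupancy grid, and resampling is triggered only when the effective sample size drops below a threshold.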
In this embodiment, setting the robot's default explanation task and specifying the explanation order of the booths in step S3 proceeds as follows:
s301, the user sets an explanation task by touch operation on the terminal screen of the robot body; explanation in a preset booth order or explanation of selected, designated booths is supported;
s302, the exhibition booth explanation module loads the first booth point according to the explanation task, acquires the specific position information of that booth, and sends the position to the navigation control module;
s303, the navigation control module plans a navigation path, controls the robot body to advance to the target point, and notifies the exhibition booth explanation module upon arrival;
s304, the exhibition booth explanation module monitors the arrival information and retrieves the explanation data corresponding to the booth;
s305, after the explanation of the current booth is completed, the exhibition booth explanation module loads the next booth point and continues with its content.
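Steps S301 to S305 amount to a sequential tour loop. A minimal sketch, with navigation and explanation mocked by plain functions and all names (`navigate_to`, `explain`, `run_tour`, the booth tables) assumed for illustration:

```python
# Hypothetical sketch of the S301-S305 tour loop: for each booth in the
# task, send its position to navigation, wait for arrival, then explain.

BOOTH_POSITIONS = {1: (2.5, 1.0), 2: (5.0, 1.0), 3: (5.0, 4.0)}
BOOTH_MATERIALS = {1: "text:Welcome", 2: "video:intro.mp4", 3: "text:Goodbye"}

def navigate_to(position, log):
    # Stand-in for the navigation control module: plan a path, move to
    # the target point, then notify the explanation module (S302-S303).
    log.append(f"arrived at {position}")

def explain(material, log):
    # Stand-in for the booth explanation module (S304).
    log.append(f"explained {material}")

def run_tour(booth_order, log):
    # S301: the task fixes a booth order; S305: loop on to the next booth.
    for booth in booth_order:
        navigate_to(BOOTH_POSITIONS[booth], log)
        explain(BOOTH_MATERIALS[booth], log)
    return log

events = run_tour([1, 3], [])
print(events)
```

In the ROS-based system of the patent, `navigate_to` would instead publish a goal to the navigation stack and block on its arrival notification.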
In this embodiment, when the exhibition booth explanation module in step S304 monitors the arrival information and retrieves the explanation data corresponding to the booth, the following cases are handled:
for video, the video is played on the terminal screen of the robot body;
for picture files, the pictures are displayed on the terminal screen of the robot body in a polling (slideshow) manner;
for text information, the voice synthesis module is called to convert the text into an audio file and broadcast the voice content.
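The three cases above are a dispatch on material type. A minimal sketch, where the `kind:payload` string encoding and the returned action strings are assumptions for illustration:

```python
def dispatch_material(material):
    # Route a booth's material to the matching presentation action:
    # video -> play on screen, picture -> polling slideshow,
    # text -> voice synthesis and broadcast.
    kind, _, payload = material.partition(":")
    if kind == "video":
        return f"play {payload} on terminal screen"
    if kind == "picture":
        return f"show {payload} in polling slideshow"
    if kind == "text":
        return f"synthesize speech for '{payload}' and broadcast"
    raise ValueError(f"unknown material type: {kind}")

print(dispatch_material("video:booth1.mp4"))
print(dispatch_material("text:Welcome to booth 1"))
```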
In this embodiment, during the execution of the default explanation task of step S3 in the specified booth order, the voice interaction module collects external sound and performs speech recognition; after recognizing a designated keyword, it enters the voice interaction state, suspends the current explanation task, and recognizes and executes the user's instruction;
the user instructions include going to a designated booth, ending the explanation, and continuing the explanation.
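The keyword-triggered interaction can be sketched as a small state machine: the wake keyword suspends the tour, and the three instructions either redirect, end, or resume it. All names below, including the wake keyword itself, are assumptions for illustration:

```python
class VoiceInteraction:
    # Sketch of the voice interaction module: a wake keyword pauses the
    # explanation task; recognized instructions then control the tour.
    WAKE_KEYWORD = "hello robot"  # assumed keyword, not from the patent

    def __init__(self):
        self.state = "explaining"
        self.target_booth = None

    def hear(self, utterance):
        if self.state == "explaining":
            if utterance == self.WAKE_KEYWORD:
                self.state = "interacting"  # suspend the current task
        elif self.state == "interacting":
            if utterance.startswith("go to booth"):
                self.target_booth = int(utterance.rsplit(" ", 1)[1])
                self.state = "explaining"   # resume, heading to new booth
            elif utterance == "end explanation":
                self.state = "idle"
            elif utterance == "continue explanation":
                self.state = "explaining"
        return self.state


v = VoiceInteraction()
v.hear("hello robot")
v.hear("go to booth 3")
print(v.state, v.target_booth)
```

A real module would feed recognized transcripts from the microphone into `hear` and have the tour loop poll `state` and `target_booth`.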
Example 3:
the embodiment of the invention also provides a computer-readable storage medium in which a plurality of instructions are stored; the instructions are loaded by a processor so that the processor executes the exhibition hall intelligent explanation implementation method of any embodiment of the invention. Specifically, a system or apparatus equipped with a storage medium may be provided, on which software program code realizing the functions of any of the above-described embodiments is stored, and the computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium may be written to a memory provided in an expansion board inserted into the computer or in an expansion unit connected to the computer, and a CPU or the like mounted on the expansion board or expansion unit may then perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above-described embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. The exhibition hall intelligent explanation robot is characterized by comprising a robot body, wherein the robot body deploys an exhibition booth material management module, a map management module, a navigation control module, a voice synthesis module, a voice interaction module and an exhibition booth explanation module based on the ROS (Robot Operating System);
the exhibition booth material management module is used for storing material information required to be explained in each exhibition booth;
the map management module is used for controlling the robot body to scan the map of the exhibition area and calibrating the specific position of the exhibition position;
the navigation control module is used for planning a navigation path for the target point and controlling the robot body to move to the target point according to the planned path;
the voice synthesis module is used for synthesizing the text information into an audio file to be broadcasted;
the voice interaction module is used for voice conversation between a user and the robot body;
the exhibition booth explanation module is used for automatically executing explanation tasks.
2. The exhibition hall intelligent explanation robot according to claim 1, wherein the material information types managed by the exhibition booth material management module include video, pictures and text;
the voice interaction module triggers a voice interaction entrance through keywords, and a user issues a control instruction through voice.
3. The intelligent robot for exhibition hall according to claim 1, wherein the robot body is provided with a lidar sensor, and the lidar sensor scans the whole exhibition hall area to establish a grid map and set the position of each exhibition position on the map.
4. The intelligent exposition robot for exhibition hall according to any one of claims 1-3, wherein the map management module is provided with a SLAM map construction service sub-module, and the SLAM map construction service sub-module is used for controlling the operation of the laser radar sensor and collecting data collected by the laser radar sensor; the SLAM map construction service sub-module is used for constructing a two-dimensional grid map based on a Rao-Blackwellized particle filtering algorithm.
5. A method for realizing intelligent explanation in an exhibition hall is characterized in that a robot body scans the exhibition hall area by using a lidar sensor to establish a map, establishes a navigation point for each booth, and performs path planning and indoor navigation to a designated booth, so as to provide users with voice interaction, navigation guidance, voice explanation and video playing services; the method comprises the following specific steps:
the robot body controls the laser radar sensor to operate, the whole exhibition hall area is scanned, a grid map is established, the position of each exhibition position on the map is set, and the construction of the exhibition hall map is completed;
uploading the exhibition material information to be explained of each exhibition, and correspondingly storing according to the exhibition numbers; the types of the exhibition site material information comprise videos, pictures and characters;
and setting a default explanation task of the robot and appointing an explanation sequence of each exhibition position.
6. The exhibition hall intelligent explanation implementation method of claim 5, characterized in that the exhibition hall map is created by starting the SLAM map construction service: the robot body controls the lidar sensor to operate and collects its scan data;
the SLAM map construction service establishes a two-dimensional grid map based on a Rao-Blackwellized particle filter algorithm.
7. The exhibition hall intelligent explanation implementation method of claim 5, characterized in that setting the robot's default explanation task and specifying the explanation order of the booths proceeds as follows:
the user sets an explanation task by touch operation on the terminal screen of the robot body; explanation in a preset booth order or explanation of selected, designated booths is supported;
the exhibition booth explanation module loads the first booth point according to the explanation task, acquires the specific position information of that booth, and sends the position to the navigation control module;
the navigation control module plans a navigation path, controls the robot body to advance to the target point, and notifies the exhibition booth explanation module upon arrival;
the exhibition booth explanation module monitors the arrival information and retrieves the explanation data corresponding to the booth;
after the explanation of the current booth is completed, the exhibition booth explanation module loads the next booth point and continues with its content.
8. The exhibition hall intelligent explanation implementation method of claim 7, wherein when the exhibition booth explanation module monitors the arrival information and retrieves the explanation data corresponding to the booth, the following cases are handled:
for video, the video is played on the terminal screen of the robot body;
for picture files, the pictures are displayed on the terminal screen of the robot body in a polling (slideshow) manner;
for text information, the voice synthesis module is called to convert the text into an audio file and broadcast the voice content.
9. The exhibition hall intelligent explanation implementation method of claim 7, characterized in that during the execution of the default explanation task in the specified booth order, the voice interaction module collects external sound and performs speech recognition; after recognizing a designated keyword, it enters the voice interaction state, suspends the current explanation task, and recognizes and executes the user's instruction;
the user instructions include going to a designated booth, ending the explanation, and continuing the explanation.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program is executable by a processor to implement the exhibition hall intelligent interpretation implementation method according to any one of claims 5 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110636109.4A CN113370229A (en) | 2021-06-08 | 2021-06-08 | Exhibition hall intelligent explanation robot and implementation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113370229A true CN113370229A (en) | 2021-09-10 |
Family
ID=77576480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110636109.4A Pending CN113370229A (en) | 2021-06-08 | 2021-06-08 | Exhibition hall intelligent explanation robot and implementation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113370229A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113851064A (en) * | 2021-09-28 | 2021-12-28 | 湖北聚游科技有限公司 | Electronic navigation system |
CN114131626A (en) * | 2021-12-09 | 2022-03-04 | 昆山市工研院智能制造技术有限公司 | Robot, service system and method |
CN114571434A (en) * | 2022-03-15 | 2022-06-03 | 山东新一代信息产业技术研究院有限公司 | Deformable multifunctional intelligent medical auxiliary robot |
CN116901105A (en) * | 2023-08-31 | 2023-10-20 | 海南电网有限责任公司信息通信分公司 | Exhibition hall intelligent service inspection integrated robot |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004216528A (en) * | 2003-01-16 | 2004-08-05 | Matsushita Electric Works Ltd | Personal service robot and method of creating work plan for the same |
JP2004283983A (en) * | 2003-03-24 | 2004-10-14 | Seiko Epson Corp | Robot, and waiting service system using robot |
KR20090000637A (en) * | 2007-03-13 | 2009-01-08 | 주식회사 유진로봇 | Mobile intelligent robot having function of contents provision and location guidance |
KR20130078128A (en) * | 2011-12-30 | 2013-07-10 | 인제대학교 산학협력단 | Exhibition stamp robot and system |
CN106970614A (en) * | 2017-03-10 | 2017-07-21 | 江苏物联网研究发展中心 | Construction method for an improved grid-topology semantic environment map |
CN107553505A (en) * | 2017-10-13 | 2018-01-09 | 刘杜 | Autonomous introduction system platform robot and explanation method |
CN108592936A (en) * | 2018-04-13 | 2018-09-28 | 北京海风智能科技有限责任公司 | ROS-based service robot and voice-interaction navigation method |
CN109366504A (en) * | 2018-12-17 | 2019-02-22 | 广州天高软件科技有限公司 | Intelligent exhibition service robot system |
CN109571499A (en) * | 2018-12-25 | 2019-04-05 | 广州天高软件科技有限公司 | Intelligent navigation guide robot and implementation method |
CN110154053A (en) * | 2019-06-05 | 2019-08-23 | 东北师范大学 | OCR-based indoor explanation robot and explanation method |
CN111805557A (en) * | 2020-07-22 | 2020-10-23 | 上海上实龙创智能科技股份有限公司 | Indoor explanation system and method based on humanoid robot |
CN112882481A (en) * | 2021-04-28 | 2021-06-01 | 北京邮电大学 | Mobile multi-mode interactive navigation robot system based on SLAM |
Application Events (2021)
- 2021-06-08: Application CN202110636109.4A filed (CN); published as CN113370229A; legal status Pending
Non-Patent Citations (1)
Title |
---|
朱东晟: "Design and Implementation of an ROS-Based Indoor Service Robot Control System", China Master's Theses Full-text Database, 15 March 2020 (2020-03-15), pages 10 - 18 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113370229A (en) | Exhibition hall intelligent explanation robot and implementation method | |
CN100382095C (en) | Information processing apparatus, input device and method, program and information processing system | |
US7023452B2 (en) | Image generation system, image generating method, and storage medium storing image generation program | |
US20120085819A1 (en) | Method and apparatus for displaying using image code | |
CN111757175A (en) | Video processing method and device | |
CN105338391A (en) | Intelligent television control method and mobile terminal | |
JP2004361587A (en) | Map display device, map display system, map display method, and map display program | |
CN105573484A (en) | Projection method and terminal | |
CN104853125A (en) | Intelligent projection method and electronic equipment | |
CN112925520A (en) | Method and device for building visual page and computer equipment | |
CN113345108B (en) | Augmented reality data display method and device, electronic equipment and storage medium | |
WO2023115927A1 (en) | Cloud robot mapping method, system, device and storage medium | |
CN104754132A (en) | Electronic device and method of determining operating mode of electronic device | |
CN102724185A (en) | Residential gateway, residential gateway based game implementation method and mobile terminal | |
CN106155740A (en) | Method and apparatus for unloading control | |
CN106790424B (en) | Timing control method, client, server and timing control system | |
CN110149679A (en) | Device discovery method, device and storage medium | |
CN114727090B (en) | Entity space scanning method, device, terminal equipment and storage medium | |
JP3888688B2 (en) | Air traffic control interface device, display control method thereof, and computer program | |
CN115904183A (en) | Interface display method, apparatus, device and storage medium | |
US20240100415A1 (en) | Game control method and apparatus, and storage medium | |
CN110337099B (en) | Method and device for controlling connection between devices, electronic device and storage medium | |
CN114579128A (en) | Visual page building method and device, storage medium and computer equipment | |
CN114937121A (en) | Simulation test method and device, electronic device and storage medium | |
CN113791821A (en) | Animation processing method, device, medium and electronic equipment based on illusion engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 2021-09-10 |