CN115107057A - Intelligent endowment service robot based on ROS and depth vision - Google Patents

Intelligent endowment service robot based on ROS and depth vision

Info

Publication number
CN115107057A
Authority
CN
China
Prior art keywords
information
module
unit
image
processing module
Prior art date
Legal status
Pending
Application number
CN202210965480.XA
Other languages
Chinese (zh)
Inventor
王骥
刘振华
钟远昊
彭健锋
罗圳
Current Assignee
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date
Filing date
Publication date
Application filed by Guangdong Ocean University filed Critical Guangdong Ocean University
Priority to CN202210965480.XA
Publication of CN115107057A
Legal status: Pending

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Navigation (AREA)

Abstract

The invention provides an intelligent elderly care service robot based on ROS and depth vision, comprising: an acquisition module for acquiring user input information, including sound information and video information, and transmitting it to the processing module; a GPS module for locating the longitude and latitude of the robot's position and transmitting a position signal to the processing module; a navigation module for loading a satellite map within a preset radius around the position indicated by the GPS module's position signal, planning an optimal path from the robot's current position to a destination according to the destination signal transmitted by the processing module, and forming a navigation signal; and a processing module for receiving the user input information transmitted by the acquisition module, extracting a destination signal, and transmitting it to the navigation module. The invention realizes integrated intelligent services for the elderly, including real-time positioning, information calls, family interaction, health-state monitoring, and network services.

Description

Intelligent endowment service robot based on ROS and depth vision
Technical Field
The invention belongs to the technical field of robots, and particularly relates to an intelligent elderly care service robot based on ROS and depth vision.
Background
Caring for the elderly requires not only substantial funding but also substantial manpower, so developing intelligent robots with a degree of nursing capability to help the elderly live at home has become a promising strategy for addressing China's elderly care problem. As robot technology develops, robots are entering production, daily life, and society at an ever faster pace. On the one hand, aging is growing more severe in every country: more elderly people need care, the demand for social security and services grows more urgent, aging family structures place greater pressure on young families, and the accelerating pace of life and work leaves young people little time to accompany their own children, so a vast market for family service robots is taking shape. On the other hand, service robots can increasingly take over tasks for people, freeing them from heavy, repetitive, monotonous work that is harmful to health or dangerous.
Disclosure of Invention
To solve the above technical problems, the invention provides an intelligent elderly care service robot based on ROS and depth vision, which realizes integrated intelligent services for the elderly: real-time positioning, information calls, family interaction, health-state monitoring, and network services.
To achieve the above object, the present invention provides an intelligent elderly care service robot based on ROS and depth vision, comprising:
an acquisition module, a GPS module, a communication module, a navigation module, and a processing module;
the acquisition module is used for acquiring user input information, including sound information and video information, and transmitting the user input information to the processing module;
the GPS module is used for locating the longitude and latitude of the robot's position and transmitting a position signal to the processing module;
the communication module is used for communicating with the processing module through a wireless network;
the navigation module is used for loading a satellite map within a preset radius around the position indicated by the GPS module's position signal, planning an optimal path from the robot's current position to a destination according to the destination signal transmitted by the processing module, and forming a navigation signal;
the processing module is used for receiving the user input information transmitted by the acquisition module, extracting a destination signal and transmitting it to the navigation module, and updating and storing the satellite map in the navigation module in real time through the communication module, thereby realizing integrated intelligent services for the user: real-time positioning, information calls, family interaction, elderly health-state monitoring, and network services.
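As a concrete illustration of the module wiring described above, the following is a minimal ROS node sketch for the processing module; the topic names (/user_speech_text, /navigation/destination) and the toy keyword-based destination extractor are assumptions, since the patent does not specify its actual ROS interface or extraction algorithm.

```python
# Minimal sketch, assuming hypothetical topic names; not the patent's
# actual interface.
import rospy
from std_msgs.msg import String

class ProcessingNode:
    def __init__(self):
        rospy.init_node("processing_module")
        # Destinations extracted from user input are forwarded to navigation.
        self.dest_pub = rospy.Publisher("/navigation/destination", String,
                                        queue_size=10)
        rospy.Subscriber("/user_speech_text", String, self.on_speech)

    def on_speech(self, msg):
        # Toy destination extraction: look for a known place name in the
        # recognized utterance (a stand-in for the patent's extraction step).
        for place in ("dining hall", "clinic", "garden", "room"):
            if place in msg.data.lower():
                self.dest_pub.publish(String(data=place))
                return

if __name__ == "__main__":
    ProcessingNode()
    rospy.spin()
```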
Optionally, the acquisition module comprises: an audio input unit and a video input unit;
the audio input unit is used for obtaining the user's sound signal through an audio input device;
the video input unit is used for obtaining the user's gesture and expression data through a vision camera.
Optionally, before extracting the destination signal from the user input information, the processing module is further configured to screen the user input information, and comprises for this purpose: a sound information discrimination unit and a video information discrimination unit;
the sound information discrimination unit is used for judging whether the sound information is clear according to a sound information discrimination table, and, when the sound information is not clear, denoising and filtering it;
the video information discrimination unit is used for judging whether the video information is clear according to a video information discrimination table and deleting the video information when it is not clear.
Optionally, the processing module comprises: a depth recognition unit, an image processing unit, an inertial processing unit, and a fusion positioning unit;
the depth recognition unit is used for performing reinforcement learning on the user's speech information based on a deep learning algorithm to obtain speech information with a higher recognition rate; it is further used for acquiring expression and posture videos of the user with a depth-vision camera and judging the user's emotional state from the face and gestures in the videos; it is also used for converting the higher-recognition-rate speech information and the user's expression and gesture videos into image information;
the image processing unit is used for processing the image information input by the depth recognition unit;
the inertial processing unit consists of inertial sensors and is used for sensing the navigation module's rotation angle, acceleration, and translation speed in real time, the inertial sensors comprising an odometer, a gyroscope, and an accelerometer;
the fusion positioning unit is used for fusing the sound and video information collected by the acquisition module with the inertial data collected by the inertial processing unit, according to the matching result between the satellite map and the input image information, so as to correct the navigation module's current position estimate; features extracted from the depth image data by the depth recognition unit that do not match any features of the navigation module's satellite images are simultaneously added to the navigation module as new satellite-image content.
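The patent does not give the fusion equations, but the correction step of the fusion positioning unit can be sketched as a simple complementary blend of a dead-reckoned inertial pose with a pose recovered from satellite-map feature matching; the 2D-pose simplification and the weight alpha below are assumptions, not values from the patent.

```python
import numpy as np

# Illustrative complementary fusion of an inertial pose estimate with a
# visual (map-matching) pose estimate. Each pose is (x, y, theta).
def fuse_pose(inertial_pose, visual_pose, alpha=0.8):
    """alpha weights the inertial estimate; 1 - alpha weights the visual one."""
    x = alpha * inertial_pose[0] + (1 - alpha) * visual_pose[0]
    y = alpha * inertial_pose[1] + (1 - alpha) * visual_pose[1]
    # Blend headings via unit vectors to avoid angle wrap-around errors.
    t = np.angle(alpha * np.exp(1j * inertial_pose[2])
                 + (1 - alpha) * np.exp(1j * visual_pose[2]))
    return (x, y, t)

corrected = fuse_pose((2.0, 3.1, 0.30), (2.2, 3.0, 0.25))
```

A Kalman or extended Kalman filter would be the usual refinement of this weighted blend, with the fixed alpha replaced by covariance-driven gains.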
Optionally, the image processing unit comprises: an image preprocessing subunit and a feature matching subunit, connected to each other;
the image preprocessing subunit is used for converting the image into a grayscale image;
the feature matching subunit is used for matching the image features of the grayscale image against the image features of the satellite map stored in the navigation module.
Optionally, the feature matching process uses ORB features.
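The grayscale conversion and ORB matching just described map directly onto OpenCV primitives; the sketch below assumes placeholder file names for a camera frame and a stored map tile, and the distance threshold of 40 is an illustrative choice, not a value from the patent.

```python
import cv2

# File names are placeholders for a live camera frame and a map image.
query = cv2.imread("camera_frame.png")
reference = cv2.imread("map_tile.png")
gray_q = cv2.cvtColor(query, cv2.COLOR_BGR2GRAY)      # preprocessing subunit
gray_r = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(nfeatures=500)
kp_q, des_q = orb.detectAndCompute(gray_q, None)
kp_r, des_r = orb.detectAndCompute(gray_r, None)

# ORB descriptors are binary, so Hamming distance is appropriate;
# cross-checking keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_r), key=lambda m: m.distance)
good = [m for m in matches if m.distance < 40]  # threshold is an assumption
```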
Optionally, the processing module is further provided with a driving module, which is controlled according to the user input information.
Optionally, the processing module is connected with an interface module arranged on the robot body; the interface module is used for connecting external devices and for inputting information to, or reading information from, the processing module.
Compared with the prior art, the invention has the following advantages and technical effects:
The invention exploits the characteristics of the nursing home's indoor environment to build a global grid map of each room, providing global environment information for the service robot. An RFID environmental-parameter calibration method assists the robot in positioning against artificial landmarks and helps it update environmental parameters, and the nursing home's wireless network enables real-time information exchange between the robot and a server. Based on Internet-of-Things technology, different networks and devices in the nursing home are interconnected, and with the help of the robot system, integrated intelligent services for the elderly are realized: real-time positioning, information calls, family interaction, health-state monitoring, and network services.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 is a schematic structural diagram of an intelligent elderly care service robot based on ROS and depth vision according to an embodiment of the present invention.
Detailed Description
It should be noted that, in the present application, the embodiments and features of the embodiments may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented here.
Embodiment 1
As shown in fig. 1, this embodiment provides an intelligent elderly care service robot based on ROS and depth vision, comprising:
an acquisition module, a GPS module, a communication module, a navigation module, and a processing module;
the acquisition module is used for acquiring user input information, including sound information and video information, and transmitting the user input information to the processing module;
the GPS module is used for locating the longitude and latitude of the robot's position and transmitting a position signal to the processing module;
the communication module is used for communicating with the processing module through a wireless network;
the navigation module is used for loading a satellite map within a preset radius around the position indicated by the GPS module's position signal, planning an optimal path from the robot's current position to a destination according to the destination signal transmitted by the processing module, and forming a navigation signal;
the processing module is used for receiving the user input information transmitted by the acquisition module, extracting a destination signal and transmitting it to the navigation module, and updating and storing the satellite map in the navigation module in real time through the communication module, thereby realizing integrated intelligent services for the user: real-time positioning, information calls, family interaction, elderly health-state monitoring, and network services.
Further, the acquisition module comprises: an audio input unit and a video input unit;
the audio input unit is used for obtaining the user's sound signal through an audio input device;
the video input unit is used for obtaining the user's gesture and expression data through a vision camera.
Further, before extracting the destination signal from the user input information, the processing module is further configured to screen the user input information, and comprises: a sound information discrimination unit and a video information discrimination unit;
the sound information discrimination unit is used for judging whether the sound information is clear according to the sound information discrimination table, and, when the sound information is not clear, denoising and filtering it;
the video information discrimination unit is used for judging whether the video information is clear according to the video information discrimination table and deleting the video information when it is not clear.
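The patent does not define the discrimination table or the filtering method; one plausible reading of the sound information discrimination unit is an energy-based clarity check followed by speech-band filtering, sketched below. The 10 dB threshold and the 300-3400 Hz band are assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Illustrative stand-in for the sound-information discrimination unit.
def is_clear(signal, noise_floor):
    """Crude clarity test: signal-to-noise ratio above an assumed 10 dB."""
    snr_db = 10 * np.log10(np.mean(signal**2)
                           / (np.mean(noise_floor**2) + 1e-12))
    return snr_db > 10.0

def denoise(signal, fs=16000):
    """Band-pass the signal to the telephone speech band (assumed range)."""
    b, a = butter(4, [300 / (fs / 2), 3400 / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)
```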
Further, the processing module comprises: a depth recognition unit, an image processing unit, an inertial processing unit, and a fusion positioning unit;
the depth recognition unit is used for performing reinforcement learning on the user's speech information based on a deep learning algorithm to obtain speech information with a higher recognition rate; it is further used for acquiring expression and posture videos of the user with a depth-vision camera and judging the user's emotional state from the face and gestures in the videos; it is also used for converting the higher-recognition-rate speech information and the user's expression and gesture videos into image information;
the image processing unit is used for processing the image information input by the depth recognition unit;
the inertial processing unit consists of inertial sensors and is used for sensing the navigation module's rotation angle, acceleration, and translation speed in real time, the inertial sensors comprising an odometer, a gyroscope, and an accelerometer;
the fusion positioning unit is used for fusing the sound and video information collected by the acquisition module with the inertial data collected by the inertial processing unit, according to the matching result between the satellite map and the input image information, so as to correct the navigation module's current position estimate; features extracted from the depth image data by the depth recognition unit that do not match any features of the navigation module's satellite images are simultaneously added to the navigation module as new satellite-image content.
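Between visual corrections, the inertial processing unit's odometer and gyroscope readings can propagate the pose by dead reckoning; the sketch below uses a common 2D simplification, since the patent does not give the integration scheme.

```python
import math

# Dead-reckoning sketch: integrate gyroscope yaw rate and odometer
# translation speed to propagate an (x, y, theta) pose.
def propagate(pose, yaw_rate, speed, dt):
    x, y, theta = pose
    theta += yaw_rate * dt
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
for yaw_rate, speed in [(0.1, 0.5), (0.0, 0.5), (-0.1, 0.4)]:
    pose = propagate(pose, yaw_rate, speed, dt=0.1)
```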
Further, the image processing unit comprises: an image preprocessing subunit and a feature matching subunit, connected to each other;
the image preprocessing subunit is used for converting the image into a grayscale image;
the feature matching subunit is used for matching the image features of the grayscale image against the image features of the satellite map stored in the navigation module.
Further, ORB features are used in the feature matching process.
Feature extraction is performed on the elderly user's speech data, mainly in the frequency domain; for the elderly user's expression and gesture images, the shape information of the face is taken as features, and the calibration point positions of the face and gestures are obtained using an active shape model.
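As one concrete frequency-domain feature choice (the patent names the domain but not a specific feature), MFCCs can be extracted per utterance; the file name and parameters below are placeholders.

```python
import librosa

# One common frequency-domain representation of a speech utterance.
y, sr = librosa.load("elderly_utterance.wav", sr=16000)  # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # shape: (13, n_frames)
features = mfcc.mean(axis=1)  # one fixed-length vector per utterance
```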
After the speech, gesture, and facial-expression features are extracted, a similarity measurement model among the features is established; the importance and influence of each feature space on recognition are analyzed, and a similarity measurement model matched to the actual situation is built for the different features.
The extracted features are then classified with a pattern classification algorithm to complete a simple speech recognition function for the elderly; on this basis, gesture and expression recognition is performed to obtain the elderly user's emotional state, giving the robot more specific information about the person it is serving.
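To make the classification step concrete, the sketch below trains a support-vector classifier on synthetic stand-in data; in the described system, X would be the extracted speech/gesture/expression feature vectors and y the recognized command or emotional-state labels. The SVM is one common choice, since the patent does not name a specific classifier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic placeholders for extracted feature vectors and class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))    # e.g., the 13-dim MFCC means above
y = rng.integers(0, 3, size=200)  # e.g., three command/emotion classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```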
Further, the processing module is also provided with a driving module, which is controlled according to the user input information.
Further, the processing module is connected with an interface module arranged on the robot body; the interface module is used for connecting external devices and for inputting information to, or reading information from, the processing module.
In the intelligent interaction framework, once the robot has understood the elderly user's expression through the interface module, it gives feedback mainly by speech synthesis; when pain and/or discomfort is detected in the user's expression, the robot immediately contacts a caregiver through the call center for handling.
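A minimal sketch of this feedback-and-alert flow follows, with pyttsx3 standing in for the unspecified speech-synthesis engine and notify_caregiver as a hypothetical hook for the call-center contact step.

```python
import pyttsx3

engine = pyttsx3.init()

def notify_caregiver(message):
    # Placeholder for the real call-center alert channel (hypothetical hook).
    print("[call center]", message)

def respond(text, pain_detected=False):
    # Speak the robot's feedback, then escalate if discomfort was detected.
    engine.say(text)
    engine.runAndWait()
    if pain_detected:
        notify_caregiver("Possible pain/discomfort detected")

respond("Your medicine reminder is set for 8 a.m.")
```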
The integrated intelligent services realized in this embodiment (real-time positioning of the user, information calls, family interaction, elderly health-state monitoring, and network services) include, for example: a home assistant, medication reminders, a small-item housekeeper, and regular health checks.
The above description covers only preferred embodiments of the present application; the scope of the present application is not limited thereto, and any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present application. The protection scope of the present application is therefore defined by the claims.

Claims (8)

1. An intelligent elderly care service robot based on ROS and depth vision, characterized by comprising: an acquisition module, a GPS module, a communication module, a navigation module, and a processing module;
the acquisition module is used for acquiring user input information, including sound information and video information, and transmitting the user input information to the processing module;
the GPS module is used for locating the longitude and latitude of the robot's position and transmitting a position signal to the processing module;
the communication module is used for communicating with the processing module through a wireless network;
the navigation module is used for loading a satellite map within a preset radius around the position indicated by the GPS module's position signal, planning an optimal path from the robot's current position to a destination according to the destination signal transmitted by the processing module, and forming a navigation signal;
the processing module is used for receiving the user input information transmitted by the acquisition module, extracting a destination signal and transmitting it to the navigation module, and updating and storing the satellite map in the navigation module in real time through the communication module, thereby realizing integrated intelligent services for the user: real-time positioning, information calls, family interaction, elderly health-state monitoring, and network services.
2. The intelligent elderly care service robot based on ROS and depth vision of claim 1, wherein the acquisition module comprises: an audio input unit and a video input unit;
the audio input unit is used for obtaining the user's sound signal through an audio input device;
the video input unit is used for obtaining the user's gesture and expression data through a vision camera.
3. The intelligent elderly care service robot based on ROS and depth vision of claim 1, wherein, before extracting the destination signal from the user input information, the processing module is further configured to screen the user input information, and comprises: a sound information discrimination unit and a video information discrimination unit;
the sound information discrimination unit is used for judging whether the sound information is clear according to a sound information discrimination table, and, when the sound information is not clear, denoising and filtering it;
the video information discrimination unit is used for judging whether the video information is clear according to a video information discrimination table and deleting the video information when it is not clear.
4. The intelligent elderly care service robot based on ROS and depth vision of claim 3, wherein the processing module comprises: a depth recognition unit, an image processing unit, an inertial processing unit, and a fusion positioning unit;
the depth recognition unit is used for performing reinforcement learning on the user's speech information based on a deep learning algorithm to obtain speech information with a higher recognition rate; it is further used for acquiring expression and posture videos of the user with a depth-vision camera and judging the user's emotional state from the face and gestures in the videos; it is also used for converting the higher-recognition-rate speech information and the user's expression and gesture videos into image information;
the image processing unit is used for processing the image information input by the depth recognition unit;
the inertial processing unit consists of inertial sensors and is used for sensing the navigation module's rotation angle, acceleration, and translation speed in real time, the inertial sensors comprising an odometer, a gyroscope, and an accelerometer;
the fusion positioning unit is used for fusing the sound and video information collected by the acquisition module with the inertial data collected by the inertial processing unit, according to the matching result between the satellite map and the input image information, so as to correct the navigation module's current position estimate; features extracted from the depth image data by the depth recognition unit that do not match any features of the navigation module's satellite images are simultaneously added to the navigation module as new satellite-image content.
5. The intelligent elderly care service robot based on ROS and depth vision of claim 4, wherein the image processing unit comprises: an image preprocessing subunit and a feature matching subunit, connected to each other;
the image preprocessing subunit is used for converting the image into a grayscale image;
the feature matching subunit is used for matching the image features of the grayscale image against the image features of the satellite map stored in the navigation module.
6. The intelligent elderly care service robot based on ROS and depth vision of claim 5, wherein the feature matching process uses ORB features.
7. The intelligent elderly care service robot based on ROS and depth vision of claim 1, wherein the processing module is further provided with a driving module, which is controlled according to the user input information.
8. The intelligent elderly care service robot based on ROS and depth vision of claim 1, wherein the processing module is connected with an interface module arranged on the body of the robot; the interface module is used for connecting external devices and for inputting information to, or reading information from, the processing module.
CN202210965480.XA 2022-08-12 2022-08-12 Intelligent endowment service robot based on ROS and depth vision Pending CN115107057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210965480.XA CN115107057A (en) 2022-08-12 2022-08-12 Intelligent endowment service robot based on ROS and depth vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210965480.XA CN115107057A (en) 2022-08-12 2022-08-12 Intelligent endowment service robot based on ROS and depth vision

Publications (1)

Publication Number Publication Date
CN115107057A true CN115107057A (en) 2022-09-27

Family

ID=83335457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210965480.XA Pending CN115107057A (en) 2022-08-12 2022-08-12 Intelligent endowment service robot based on ROS and depth vision

Country Status (1)

Country Link
CN (1) CN115107057A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152928A (en) * 2023-09-20 2023-12-01 深圳市源流科技有限公司 Remotely controlled sound-transmitting loudspeaker for intelligent elderly care services

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107186728A (en) * 2017-06-15 2017-09-22 重庆柚瓣家科技有限公司 Intelligent elderly care service robot control system
CN108073167A (en) * 2016-11-10 2018-05-25 深圳灵喵机器人技术有限公司 Positioning and navigation method based on depth camera and lidar
CN108776474A (en) * 2018-05-24 2018-11-09 中山赛伯坦智能科技有限公司 Robot embedded computing terminal integrating high-precision navigation positioning and deep learning
CN110853135A (en) * 2019-10-31 2020-02-28 天津大学 Indoor scene real-time reconstruction tracking service method based on endowment robot
CN212265855U (en) * 2020-05-28 2021-01-01 广东海洋大学 Robot for accompanying old people
KR20210122345A (en) * 2020-03-30 2021-10-12 푼노톡 샬러폰 Robot for providing elderly management service and system for porviding elderly management service using the robot

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073167A (en) * 2016-11-10 2018-05-25 深圳灵喵机器人技术有限公司 Positioning and navigation method based on depth camera and lidar
CN107186728A (en) * 2017-06-15 2017-09-22 重庆柚瓣家科技有限公司 Intelligent elderly care service robot control system
CN108776474A (en) * 2018-05-24 2018-11-09 中山赛伯坦智能科技有限公司 Robot embedded computing terminal integrating high-precision navigation positioning and deep learning
CN110853135A (en) * 2019-10-31 2020-02-28 天津大学 Indoor scene real-time reconstruction tracking service method based on endowment robot
KR20210122345A (en) * 2020-03-30 2021-10-12 푼노톡 샬러폰 Robot for providing elderly management service and system for porviding elderly management service using the robot
CN212265855U (en) * 2020-05-28 2021-01-01 广东海洋大学 Robot for accompanying old people

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUAN-ANTONIO FERNANDEZ-MADRIGAL (Spain): "WLAN Positioning Technology for Mobile Terminal Users" (《面向移动终端用户的WLAN定位技术》), Harbin Engineering University Press, pages 130-133 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152928A (en) * 2023-09-20 2023-12-01 深圳市源流科技有限公司 Remotely controlled sound-transmitting loudspeaker for intelligent elderly care services
CN117152928B (en) * 2023-09-20 2024-04-12 深圳市源流科技有限公司 Remotely controlled sound-transmitting loudspeaker for intelligent elderly care services

Similar Documents

Publication Publication Date Title
Manjari et al. A survey on assistive technology for visually impaired
CN105335696B An intelligent elderly-assistance robot based on 3D abnormal gait behavior detection and recognition, and an implementation method
CN110405789B (en) Chart-checking accompanying robot, robot ward-checking accompanying system and method
US20190188903A1 (en) Method and apparatus for providing virtual companion to a user
CN108427910B (en) Deep neural network AR sign language translation learning method, client and server
CN104983511A Voice-assisted smart glasses system for totally blind, visually impaired users
CN106997243A (en) Speech scene monitoring method and device based on intelligent robot
CN210402266U (en) Sign language translation system and sign language translation gloves
US20220277525A1 (en) User-exhibit distance based collaborative interaction method and system for augmented reality museum
Parikh et al. Android smartphone based visual object recognition for visually impaired using deep learning
CN109859324A Motion teaching method and device based on a virtual human
EP4105600A2 (en) Method for automatically producing map data, related apparatus and computer program product
CN112711249A (en) Robot positioning method and device, intelligent robot and storage medium
CN115107057A (en) Intelligent endowment service robot based on ROS and depth vision
CN106713633A (en) Deaf people prompt system and method, and smart phone
CN107330240A Intelligent remote special-care monitoring system and method based on dual wristband sensors
CN113781462A (en) Human body disability detection method, device, equipment and storage medium
CN112037305B (en) Method, device and storage medium for reconstructing tree-like organization in image
CN111611812B (en) Translation to Braille
Rahman et al. An automated navigation system for blind people
CN110803170B (en) Driving assistance system with intelligent user interface
CN116434173A (en) Road image detection method, device, electronic equipment and storage medium
CN107203211A Method for robot interactive motion
Kinra et al. A Comprehensive and Systematic Review of Deep Learning Based Object Recognition Techniques for the Visually Impaired
CN114662606A (en) Behavior recognition method and apparatus, computer readable medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination