CN114603557B - Robot projection method and robot - Google Patents

Robot projection method and robot

Info

Publication number
CN114603557B
CN114603557B (application CN202210225542.3A)
Authority
CN
China
Prior art keywords
projection
space
robot
stereoscopic
people
Prior art date
Legal status
Active
Application number
CN202210225542.3A
Other languages
Chinese (zh)
Other versions
CN114603557A (en)
Inventor
王嘉晋
张飞刚
Current Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Original Assignee
Shenzhen Pengxing Intelligent Research Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pengxing Intelligent Research Co Ltd filed Critical Shenzhen Pengxing Intelligent Research Co Ltd
Priority to CN202210225542.3A priority Critical patent/CN114603557B/en
Publication of CN114603557A publication Critical patent/CN114603557A/en
Application granted granted Critical
Publication of CN114603557B publication Critical patent/CN114603557B/en


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a robot projection method and a robot, relating to the technical field of robots. The projection method of the embodiments of the application comprises the following steps: performing mapping and recognition of the surrounding environment to identify a plurality of stereoscopic spaces, and marking the coordinates of each stereoscopic space in the map; calculating the area of each stereoscopic space and calibrating the capacity of each stereoscopic space according to its area; acquiring a projection instruction and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining a projection space according to the viewer count information and the capacity of each stereoscopic space; if the projection instruction does not include viewer count information, determining the stereoscopic space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the stereoscopic space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.

Description

Robot projection method and robot
Technical Field
The application relates to the technical field of robots, in particular to a robot projection method and a robot.
Background
More and more intelligent mobile robots are being equipped with projection interaction functions. While the robot is moving, quickly and accurately finding a suitable projection plane and projecting a clear, stable picture is an important problem affecting the robot's projection interaction function.
Disclosure of Invention
In view of the above, the present application provides a robot projection method and a robot, so as to improve the projection interaction function of the robot.
The first aspect of the present application provides a robot projection method, the projection method comprising: performing mapping and recognition of the surrounding environment to identify a plurality of stereoscopic spaces, and marking the coordinates of each stereoscopic space in the map; calculating the area of each stereoscopic space and calibrating the capacity of each stereoscopic space according to its area, wherein the capacity refers to the number of viewers the stereoscopic space can accommodate; acquiring a projection instruction and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining a projection space according to the viewer count information and the capacity of each stereoscopic space; if the projection instruction does not include viewer count information, determining the stereoscopic space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the stereoscopic space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.
A second aspect of the present application provides a robot comprising a processor and a memory for storing a computer program or code which, when executed by the processor, implements a projection method of an embodiment of the present application.
According to the embodiments of the application, by performing mapping and recognition of the surrounding environment, performing environment recognition on the projection space, and adjusting the projection parameters to determine the projection posture, a suitable projection plane can be found quickly and accurately and its quality can be ensured, thereby guaranteeing the projection effect.
Drawings
Fig. 1 is a flowchart of a projection method according to an embodiment of the present application.
Fig. 2 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 3 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 4 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 5 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 6 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 7 is a flowchart of a projection method according to another embodiment of the present application.
Fig. 8 is a schematic view of an application scenario according to an embodiment of the present application.
Fig. 9 is a schematic structural view of a robot according to an embodiment of the present application.
Fig. 10 is a schematic structural view of the multi-legged robot according to an embodiment of the present application.
Fig. 11 is a schematic external view of a multi-legged robot according to an embodiment of the present application.
Description of the main reference signs
Robot 100
Processor 110
Memory 120
Wall surface 200
Multi-legged robot 300
Mechanical unit 301
Communication unit 302
Sensing unit 303
Interface unit 304
Storage unit 305
Display unit 306
Input unit 307
Control module 308
Power supply 309
Drive plate 3011
Motor 3012
Mechanical structure 3013
Fuselage body 3014
Leg 3015
Foot 3016
Head structure 3017
Tail structure 3018
Carrying structure 3019
Saddle structure 3020
Camera structure 3021
Display panel 3061
Touch panel 3071
Input device 3072
Touch detection device 3073
Touch controller 3074
Detailed Description
It should be noted that, in the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The terms "first", "second", "third", "fourth" and the like in the description, claims and drawings, if any, are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order.
It should be further noted that the method disclosed in the embodiments of the present application, or the method shown in the flowcharts, includes one or more steps for implementing the method; the order of the steps may be interchanged, and some steps may be deleted, without departing from the scope of the claims.
Some of the terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
1. Three-dimensional space: includes various forms of three-dimensional space, such as the offices and conference rooms in an office building, the classrooms in a teaching building, and the rooms in an apartment or house (e.g., living room, study, kitchen, etc.).
2. Projection space: a three-dimensional space screened out from the three-dimensional spaces and used for robot projection.
3. Projection plane: a plane determined within the projection space and used for robot projection, for example a wall surface, floor or ceiling in the projection space.
4. Projection area: the region on the projection plane that serves as the projection screen.
5. Projection posture: the posture the robot adopts for projection; the state of the robot can be detected by sensors to adjust and control its posture. The sensors may include position, posture, pressure and acceleration sensors, among others.
6. 3D camera: from the data acquired by the 3D camera, the distance from each point in the 2D image to the camera can be determined; combining the coordinates of each point in the 2D image with its corresponding distance yields the three-dimensional spatial coordinates of that point. The 3D camera can be used for face recognition, gesture recognition, human skeleton recognition, three-dimensional measurement, environment perception, three-dimensional map reconstruction, and the like. Herein, the robot is configured with a camera, and the camera includes a 3D camera.
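As an illustration of the back-projection described in item 6, the following sketch (not part of the patent) converts a pixel coordinate and its measured depth into a 3D point under a simple pinhole-camera assumption; the intrinsics fx, fy, cx, cy are hypothetical values.

```python
# Illustrative sketch: back-project a 2D pixel plus the depth reported by a 3D
# camera into a camera-frame 3D point, assuming a pinhole model with
# hypothetical intrinsics fx, fy, cx, cy.

def pixel_to_3d(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> tuple[float, float, float]:
    """Return the (x, y, z) camera-frame coordinates of pixel (u, v) at the given depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a pixel near the image centre, 2.5 m away.
print(pixel_to_3d(320, 240, 2.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```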
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a flowchart of a projection method according to an embodiment of the present application.
Referring to fig. 1, the projection method is applied to a robot configured with a camera and a projector. The projection method may include the steps of:
S101, performing mapping and recognition of the surrounding environment to identify a plurality of three-dimensional spaces.
In some embodiments, the robot may sense the surrounding environment through a lidar or a camera (e.g., a 3D camera) and build a map of the surrounding environment using simultaneous localization and mapping (SLAM) techniques. The robot starts moving from an unknown position in an unknown environment, localizes itself against the map as it moves, and builds the map incrementally on the basis of this self-localization, thereby achieving autonomous localization and navigation.
For example, when the robot is in a house, it performs mapping and recognition of the indoor environment and can identify the various three-dimensional spaces in the house, such as the living room, bedrooms and study.
S102, marking coordinates of each stereoscopic space in the map.
In this embodiment, the robot may mark coordinates of each stereoscopic space on the map in the process of map recognition.
S103, calculating the area of each three-dimensional space, and calibrating the capacity of each three-dimensional space according to the area of each three-dimensional space.
Here, the capacity refers to the number of viewers the stereoscopic space can accommodate.
In this embodiment, the robot may calculate the area of each stereoscopic space according to the result of mapping and recognition, and then establish the correspondence between the area of each stereoscopic space and the number of viewers it can accommodate. For example, in some embodiments, the area of a stereoscopic space and the number of viewers it can accommodate satisfy the following formula:
N m² ≤ S < (N+1) m²
where S is the area of the three-dimensional space and N is the number of viewers the space can accommodate, N being a positive integer. For example, when the area S of the stereoscopic space satisfies 8 m² ≤ S < 9 m², the number of viewers the space can accommodate is 8. As another example, when S satisfies 12 m² ≤ S < 13 m², the number of viewers is 12. As a further example, when S satisfies 20 m² ≤ S < 21 m², the number of viewers is 20.
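The capacity rule above amounts to taking the integer part of the floor area in square metres. A minimal sketch, assuming exactly the relation N ≤ S < N + 1:

```python
# Minimal sketch of the capacity rule N <= S < (N + 1) m^2 described above,
# i.e. the capacity is the floor of the floor area in square metres.
import math

def capacity_from_area(area_m2: float) -> int:
    """Number of viewers a space can hold, per N <= S < N + 1."""
    return max(0, math.floor(area_m2))

for s in (8.4, 12.9, 20.0):
    print(f"area {s} m^2 -> capacity {capacity_from_area(s)} viewers")
```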
S104, acquiring a projection instruction.
The projection instruction is used for informing the robot to start a projection mode so as to find a projection space.
In some embodiments, the robot may acquire the projection instruction by recognizing the user's voice/text input, touch operation or gesture, and may also receive the projection instruction from a terminal application.
S105, determining whether the projection instruction includes viewer count information.
In step S105, if the projection instruction includes viewer count information, step S106 is performed. If not, step S107 is performed.
S106, determining a projection space according to the viewer count information and the capacity of each stereoscopic space.
It will be understood that, when step S105 determines that the projection instruction includes viewer count information, step S106 acquires the viewer count information from the projection instruction and determines the projection space according to the acquired viewer count information and the capacity of each stereoscopic space.
For example, when a projection instruction received via voice/text input includes viewer count information, such as the instruction "help me find a projection space for 8 people", the viewer count information, i.e., "8 people", can be extracted from the instruction by keyword or semantic analysis.
If the robot's touch module provides a control for triggering the projection instruction, then when the user triggers the control by a touch operation, the control prompts the user to enter the viewer count; after the user enters it, the robot can directly acquire the viewer count information.
The robot's camera can also acquire the viewer count information by recognizing the user's gestures. For example, the user may gesture "projection" and "10"; the robot can recognize the projection instruction "find a projection space for 10 people" from the user's gestures through a fuzzy recognition algorithm, and then extract the viewer count information, i.e., "10 people", from the instruction by keyword or semantic analysis.
If an application program for controlling the robot is installed on a terminal, the user may enter the projection instruction "find a projection space for 20 people" in the application. When the robot receives the projection instruction from the application, the viewer count information, i.e., "20 people", can be extracted from it by keyword or semantic analysis.
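As a rough illustration of extracting the viewer count by keyword analysis, the sketch below applies a simple regular expression to English text. The patent does not fix an algorithm, and real voice/text input would require proper speech and language processing; the pattern here is an assumption for illustration only.

```python
# Illustrative sketch only: extracting a viewer count from a projection
# instruction with a simple regular expression.
import re
from typing import Optional

def extract_viewer_count(instruction: str) -> Optional[int]:
    """Return the viewer count in the instruction, or None if absent."""
    match = re.search(r"(\d+)\s*(?:people|persons|viewers)", instruction, re.IGNORECASE)
    return int(match.group(1)) if match else None

print(extract_viewer_count("help me find a projection space of 8 people"))  # 8
print(extract_viewer_count("Trojan, please project"))                       # None
```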
It will be appreciated that after step S106 is performed, step S108 is performed.
S107, determining the three-dimensional space where the robot is currently located as a projection space.
For example, when the voice/text projection instruction received by the robot is "Trojan, please project", no viewer count information is found in the instruction, so it can be determined that the instruction does not include viewer count information. The three-dimensional space where the robot is currently located can therefore be determined directly as the projection space.
And S108, navigating to the projection space according to the coordinates of the stereoscopic space in the map.
In this embodiment, after the robot determines which stereoscopic space to use as the projection space, it can navigate to the position of that space (i.e., the projection space) according to its coordinates.
S109, carrying out environment recognition on the projection space to determine a projection area.
In this embodiment, when the robot determines that the projection space is unoccupied, the camera may be used to perform environmental recognition on the projection space to find and determine a suitable projection area.
S110, adjusting projection parameters according to the projection area to determine the projection posture.
The projection parameters at least comprise projection height, projection distance and projection angle.
In some embodiments, after the robot determines the projection area, the projection parameters may be adjusted according to the features of the identified object. For example, when the identified object is a sofa, the robot may simulate a viewer sitting at the middle of the front of the sofa, adjust the projection distance with reference to that viewer's viewing distance, and adjust the projection height with reference to that viewer's viewing height, until a projection posture suitable for viewing is found.
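A hypothetical sketch of deriving a projection pose from a simulated viewer at the middle of the sofa front; the eye-height and offset values are illustrative assumptions, not values given in the patent.

```python
# Hypothetical sketch of the idea in S110: place a virtual viewer at the middle
# of the sofa front and derive projection distance and height from that viewing
# position. The ratios and offsets are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProjectionPose:
    distance_m: float   # projector-to-wall distance
    height_m: float     # centre height of the projected picture
    angle_deg: float    # pitch of the projector

def pose_from_sofa(sofa_to_wall_m: float, eye_height_m: float = 1.1) -> ProjectionPose:
    # Assume the picture centre sits slightly above seated eye height and the
    # projector throws from roughly the viewer's position.
    return ProjectionPose(distance_m=sofa_to_wall_m,
                          height_m=eye_height_m + 0.3,
                          angle_deg=0.0)

print(pose_from_sofa(3.2))
```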
S111, acquiring environmental parameters.
The environmental parameters may include, among others, brightness and noise. The robot can test the light intensity of the current environment through the light sensor to acquire the brightness value. The robot may test the noise of the current environment by turning on the microphone to obtain a noise value.
S112, determining whether the environmental parameters are smaller than the preset thresholds. If the environmental parameters are smaller than the preset thresholds, step S113 is performed. If not, the process returns to step S110.
The preset threshold is determined according to the attribute of the projector. For example, the brightness threshold is the maximum light intensity supported by the projector and the noise threshold is the maximum noise supported by the projector.
When the environmental parameters are smaller than the preset thresholds, the current environment supports the robot's projection operation.
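A minimal sketch of the threshold check in step S112; the projector-dependent limits used here are placeholder values, not figures from the patent.

```python
# Sketch of the S112 check: projection proceeds only when both measured values
# are below projector-dependent limits. The threshold numbers are placeholders.
def environment_ok(brightness_lux: float, noise_db: float,
                   max_lux: float = 300.0, max_db: float = 70.0) -> bool:
    """True when the current environment supports projection."""
    return brightness_lux < max_lux and noise_db < max_db

print(environment_ok(150.0, 45.0))   # True  -> complete projection (S113)
print(environment_ok(500.0, 45.0))   # False -> readjust projection pose (S110)
```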
S113, completing projection operation according to the projection posture.
Wherein the projection operation may include turning on the projector and starting to project the content.
S114, in response to an operation instruction of the user, adjusting at least one of the projection posture, projection brightness and volume.
In some embodiments, the user may trigger the operation instruction when the robot completes the projection operation. The robot can acquire the operation instruction by recognizing voice/text input, touch operation or gesture action of a user, and can also receive the operation instruction from the terminal application program.
For example, the user may click on the head of the robot with a finger to turn down the projected brightness. For example, the projected brightness decreases by 10% per click of the head of the robot by the user. The user can press the head of the robot with the palm to increase the projection brightness. For example, the projection brightness is increased by 10% every time the user presses the head of the robot. The user can click on the tail of the robot with a finger to turn down the volume. For example, the volume is reduced by 10% each time the user clicks the tail of the robot. The user can press the tail of the robot with the palm to turn up the volume. For example, the volume increases by 10% each time the user presses the tail of the robot. The user can slide left/right on the head of the robot with a finger to control the robot to move left/right, thereby adjusting the position of the robot.
The user can trigger an operation instruction through voice, for example, the user can control the robot to search for the projection plane again through voice 'help me find place to project again'. The user can control the robot to adjust the projection brightness by voice "turn down brightness" or "turn up brightness". The user can control the robot to adjust the volume by voice "turn up the volume" or "turn down the volume".
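The touch gestures above can be summarised as a small dispatch table. The sketch below is illustrative: the 10% step follows the examples in the text, while the state object and gesture names are assumptions.

```python
# Illustrative mapping of the touch gestures above to parameter changes.
class ProjectionState:
    def __init__(self, brightness: float = 1.0, volume: float = 0.5):
        self.brightness = brightness
        self.volume = volume

    def apply(self, gesture: str) -> None:
        step = 0.10
        if gesture == "click_head":      # finger click on the head -> brightness down
            self.brightness = max(0.0, self.brightness - step)
        elif gesture == "press_head":    # palm press on the head -> brightness up
            self.brightness = min(1.0, self.brightness + step)
        elif gesture == "click_tail":    # finger click on the tail -> volume down
            self.volume = max(0.0, self.volume - step)
        elif gesture == "press_tail":    # palm press on the tail -> volume up
            self.volume = min(1.0, self.volume + step)

state = ProjectionState()
state.apply("click_head")
print(state.brightness)  # 0.9
```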
Referring to fig. 1 and fig. 2 together, after step S106 is performed, the projection method may further include the following steps:
S201, determining whether the projection space is occupied.
In step S201, if the projection space is occupied, step S202 is performed. If not, steps S108 to S114 in fig. 1 are performed in sequence.
In some embodiments, when the robot reaches the location of the projection space, it may be checked or determined by the camera whether there is a person inside the projection space. When a person is present inside the projection space, the robot determines that the projection space is occupied. Otherwise, the robot determines that the projection space is unoccupied.
In other embodiments, if the projection space is a conference room, the robot may query or determine whether the projection space is occupied by accessing a conference room reservation system.
S202, determining whether there is another three-dimensional space that meets the people-count condition.
In step S202, if it is determined that there is another three-dimensional space meeting the people-count condition, step S203 is performed. If not, step S204 is performed.
S203, determining the projection space according to the distance from the other three-dimensional space to the robot's current position, and navigating to the projection space according to the coordinates of that space in the map.
For example, the robot may query the distances from the current position of the robot to other three-dimensional spaces, and determine one three-dimensional space closest to the current position of the robot as the projection space.
In other embodiments, the robot may also determine the projection space from a history of other stereo spaces. For example, the robot may inquire about the number of times it has used other stereoscopic spaces, determine one of the stereoscopic spaces having the largest number of uses as a projection space, and update the history again after the stereoscopic space is used this time.
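Both selection strategies described here, nearest space by distance or most frequently used space in the history, can be sketched as follows; the data structures are assumptions for illustration.

```python
# Sketch of choosing an alternative projection space either by distance to the
# robot or by how often a space has been used before. Data structures assumed.
from typing import Dict, Tuple

def pick_by_distance(candidates: Dict[str, Tuple[float, float]],
                     robot_xy: Tuple[float, float]) -> str:
    """Return the candidate space closest to the robot's current position."""
    return min(candidates,
               key=lambda name: (candidates[name][0] - robot_xy[0]) ** 2 +
                                (candidates[name][1] - robot_xy[1]) ** 2)

def pick_by_history(usage_counts: Dict[str, int]) -> str:
    """Return the candidate space with the highest usage count."""
    return max(usage_counts, key=usage_counts.get)

print(pick_by_distance({"room2": (4.0, 1.0), "room3": (9.0, 6.0)}, (3.0, 0.0)))  # room2
print(pick_by_history({"room2": 5, "room3": 2}))                                 # room2
```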
It is understood that the specific embodiment of step S203 is substantially the same as that of step S108, and will not be described herein.
It will be appreciated that steps S109 to S114 in fig. 1 are sequentially performed after step S203 is performed.
S204, stopping the projection task and feeding the result back to the user.
For example, when the robot determines that there is no other three-dimensional space meeting the people-count condition, it stops the projection task and prompts the user by voice: "No other projection space suitable for 8 people was found."
Referring to fig. 1 and fig. 3 together, fig. 3 is a schematic flow chart illustrating a sub-process of step S106 in fig. 1. As shown in fig. 3, step S106 may include the following sub-steps:
S301, in response to the projection instruction, querying for stereoscopic spaces that meet the people-count condition.
The people-count condition means that the number of viewers a three-dimensional space can accommodate is greater than or equal to the requested viewer count.
For example, when a stereoscopic space can accommodate 8 viewers and the requested viewer count is 5, the space meets the people-count condition. When a stereoscopic space can accommodate 8 viewers but the viewer count obtained from the projection instruction is 10, the space does not meet the condition.
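A one-line filter captures the people-count condition; the capacity table below is a made-up example.

```python
# Sketch of the people-count condition in S301: a space qualifies when its
# capacity is at least the requested viewer count.
def spaces_meeting_count(capacities: dict[str, int], viewers: int) -> list[str]:
    return [name for name, cap in capacities.items() if cap >= viewers]

print(spaces_meeting_count({"living room": 8, "study": 4, "meeting room 2": 12}, 5))
# ['living room', 'meeting room 2']
```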
S302, determining a projection space according to the query result.
In some embodiments, when the robot finds no space meeting the people-count condition, it may send a prompt to inform the user that no such space currently exists. For example, the robot may prompt the user by voice: "No suitable projection space for 8 people was found."
It will be appreciated that in step S201 in fig. 2, when the robot determines that the projection space is occupied, it may determine whether there is another stereoscopic space meeting the people-count condition according to the query result of step S302.
Referring to fig. 3 and fig. 4 together, fig. 4 is a schematic flow chart illustrating a sub-process of step S302 in fig. 3. As shown in fig. 4, when all the stereoscopic spaces meeting the condition of the number of people are queried, step S302 may include the following sub-steps:
S401, acquiring the coordinates in the map of all three-dimensional spaces meeting the people-count condition and the current position coordinates of the robot.
In this embodiment, the robot may update its current position coordinates periodically or in real time during mapping and recognition.
S402, determining the distance from each three-dimensional space meeting the people-count condition to the robot's current position according to the coordinates of each such space and the robot's current position coordinates.
In this embodiment, the robot calculates the distance between the two points (i.e., the three-dimensional space and the robot) from their coordinates on the map using plane geometry.
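The plane-geometry calculation in step S402 reduces to the Euclidean distance between two map coordinates, as in this brief sketch.

```python
# Sketch of the plane-geometry distance in S402: Euclidean distance between the
# space's map coordinates and the robot's current position.
import math

def map_distance(space_xy: tuple[float, float], robot_xy: tuple[float, float]) -> float:
    return math.hypot(space_xy[0] - robot_xy[0], space_xy[1] - robot_xy[1])

print(map_distance((6.0, 8.0), (3.0, 4.0)))  # 5.0
```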
S403, determining the projection space according to the distances or the histories of all three-dimensional spaces meeting the people-count condition.
In some embodiments, the robot determines the projection space based on the distance from each stereoscopic space to its current position. For example, the robot may select the three-dimensional space closest to its current position as the projection space.
In other embodiments, the robot determines the projection space from the history of the stereoscopic spaces. For example, the robot may select one of the three-dimensional spaces in the history as this projection space. Here, the history refers to the record of a stereoscopic space having been used as a projection space. The history may be stored in the robot's internal memory or in an external memory the robot can access.
Referring to fig. 1 and fig. 5 together, fig. 5 is a schematic flow chart illustrating a sub-process of step S109 in fig. 1. As shown in fig. 5, step S109 may include the following sub-steps:
s501, carrying out environment recognition on the projection space to determine the projection direction.
In this embodiment, the robot performs environment recognition on the projection space and can recognize a plane available for projection, the projection direction, and obstacles in the projection direction. Here, an obstacle is an object between the robot and the plane used for projection, such as a desk, chair or sofa.
For example, if the robot recognizes a wall surface available for projection, the direction from the robot to that wall is the projection direction, and the robot may also recognize a sofa or seat in that direction.
S502, determining whether a projection plane larger than a preset size exists in the projection direction.
In step S502, if it is determined that there is a projection plane larger than the preset size in the projection direction, step S503 is performed. If not, step S504 is performed.
Here, the preset size is determined according to the attributes of the projector. For example, the robot determines whether there is a projection plane larger than 120 × 70 square centimeters (cm²) in the projection direction.
S503, determining a projection area on a projection plane.
The size of the projection area is a multiple of the preset size. For example, when the preset dimension is a centimeters (cm) long and b centimeters wide, the projection area is n×a centimeters long and n×b centimeters wide, where n is greater than or equal to 1.
In the present embodiment, when the robot determines that there is a projection plane larger than the preset size in the projection direction, it delimits an area not smaller than the preset size on that plane to serve as the projection area.
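A sketch of sizing the projection area as n times the preset minimum while staying within the available plane; the 120 cm × 70 cm default follows the earlier example, and the fitting rule is an assumption.

```python
# Sketch of sizing the projection area as n times the preset minimum (a x b cm),
# bounded by the available plane, as described for S503.
def projection_area(plane_w_cm: float, plane_h_cm: float,
                    a_cm: float = 120.0, b_cm: float = 70.0) -> tuple[float, float]:
    """Largest n*a x n*b rectangle (n >= 1) that fits on the plane."""
    n = min(plane_w_cm / a_cm, plane_h_cm / b_cm)
    if n < 1:
        raise ValueError("plane smaller than the preset projection size")
    return (n * a_cm, n * b_cm)

print(projection_area(300.0, 200.0))  # (300.0, 175.0)
```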
S504, adjusting the rotation angle of the robot to adjust the projection direction.
In this embodiment, when the robot determines that there is no projection plane larger than the preset size in the projection direction, it may control its body to rotate so as to point the projector in another direction. Alternatively, the robot's body may remain stationary while the projector is controlled to rotate toward another direction.
In some embodiments, the robot can find all projection planes meeting the size requirement in the projection space by adjusting the rotation angle. Wherein, meeting the size requirement means that there is a projection plane larger than a preset size in the projection direction.
It will be appreciated that adjusting the rotation angle may include rotating clockwise or counterclockwise in the horizontal and/or vertical direction.
For example, if there is no projection plane meeting the size requirement in the robot's current view, the robot may rotate 90 degrees clockwise and recognize the environment again, repeating until a projection plane meeting the size requirement is found or it has rotated a full 360 degrees. If the robot has rotated 360 degrees and still finds no projection plane meeting the size requirement, it may prompt the user by voice: "No suitable projection area." Here, rotating 360 degrees may mean rotating 360 degrees in both the horizontal and vertical directions: the robot may rotate 360 degrees horizontally first and then 360 degrees vertically, or rotate 360 degrees vertically first and then 360 degrees horizontally.
After acquiring all projection planes meeting the size requirement, the robot may randomly select one of them or prompt the user to select one. For example, the robot may prompt the user by voice: "Please select the projection plane."
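The rotate-and-recheck search can be sketched as a simple loop; the recognition routine is represented by a placeholder callable, and the 90-degree step follows the example above.

```python
# Sketch of the rotation search described above: rotate in fixed steps and
# re-check the scene until a plane meeting the size requirement is found or a
# full 360-degree sweep completes. `plane_found_at` stands in for the robot's
# real environment-recognition routine.
from typing import Callable, Optional

def sweep_for_plane(plane_found_at: Callable[[float], bool],
                    step_deg: float = 90.0) -> Optional[float]:
    angle = 0.0
    while angle < 360.0:
        if plane_found_at(angle):
            return angle
        angle += step_deg
    return None  # prompt the user: "no suitable projection area"

print(sweep_for_plane(lambda a: a >= 180.0))  # 180.0
```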
Referring to fig. 5 and fig. 6 together, fig. 6 is a schematic flow chart illustrating a sub-process of step S501 in fig. 5. As shown in fig. 6, step S501 may include the following sub-steps:
S601, when it is recognized that a preset identification object exists in the projection space, acquiring the features corresponding to the identification object.
In some embodiments, the robot counts, by querying the history, the obstacles that have appeared in projection spaces it has used, and marks those whose occurrence count exceeds a preset threshold as identification objects. In other words, an identification object is an obstacle that frequently appears in projection spaces.
The robot may record each identification object and its corresponding features. For example, the features of a sofa or seat include a seat cushion and a backrest, and the side of the backrest facing the seat cushion is the front of the sofa or seat.
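A sketch of how identification objects might be derived from the usage history, as described for step S601; the history format and threshold are assumptions.

```python
# Sketch of S601's preparation step: count obstacles seen in previously used
# projection spaces and mark those whose occurrence count exceeds a threshold
# as "identification objects". The history format is assumed.
from collections import Counter

def identification_objects(history: list[list[str]], threshold: int = 2) -> set[str]:
    counts = Counter(obj for session in history for obj in session)
    return {obj for obj, n in counts.items() if n > threshold}

history = [["sofa", "table"], ["sofa", "chair"], ["sofa", "table"], ["table"]]
print(identification_objects(history))  # {'sofa', 'table'} (set order may vary)
```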
S602, determining the projection direction according to the characteristics corresponding to the identification object.
For example, if the identification object is a sofa and several walls are available as projection planes, the robot may take the direction from the front of the sofa toward one of the walls as the projection direction.
Referring to fig. 1 and fig. 7 together, fig. 7 is a schematic flow chart illustrating a sub-process of step S112 in fig. 1. As shown in fig. 7, after the robot determines the projection pose, step S112 may include the following sub-steps:
S701, determining whether the brightness is less than the preset brightness threshold. If the brightness is less than the preset brightness threshold, step S702 is performed. If not, the process returns to step S110 in fig. 1.
S702, determining whether the noise is less than the preset noise threshold. If the noise is less than the preset noise threshold, step S113 in fig. 1 is performed. If not, the process returns to step S110 in fig. 1.
In other embodiments, the robot may first determine whether the noise is below the preset noise threshold and then determine whether the brightness is below the preset brightness threshold. When both the brightness and the noise are below their thresholds, the robot completes the projection operation according to the projection posture. Otherwise, the robot readjusts the projection parameters to adjust the projection posture.
The projection method in the embodiment of the present application is described below with reference to one of the application scenarios.
For example, referring to fig. 8, fig. 8 is a schematic view of a scene in which the projection method is applied to an office area in the embodiment of the present application. In fig. 8, the solid arrow line indicates the movement locus of the robot 100, and the broken arrow line indicates the projected line of the robot 100. The robot 100 is configured with a camera (not shown) and a projector (not shown).
As shown in fig. 8, the robot 100 is in an office area containing a plurality of conference rooms (e.g., conference rooms 1 to 4). First, the robot 100 performs mapping and recognition of the office area through its camera to identify the conference rooms, and marks the coordinates of each conference room in the map. Then, the robot determines the viewer capacity of each conference room according to its area; for example, if the area S of a conference room satisfies 8 m² ≤ S < 9 m², its viewer capacity is determined to be 8. When the robot 100 receives the voice instruction "help me find a projection space for 8 people", it searches for a conference room that can accommodate no fewer than 8 viewers according to the viewer count information in the instruction. When the robot finds such a conference room (e.g., conference room 2), it navigates there according to the room's coordinates. Inside conference room 2, the robot uses the camera to perform environment recognition, finds a suitable projection plane, such as the wall surface 200 in conference room 2, and determines a projection area on that plane. Taking a region of the wall surface 200 as the projection area, the robot adjusts the projection parameters of the projector to determine the projection posture and completes the projection operation according to that posture. For example, the robot adjusts the projector's projection height, projection distance and projection angle so that the projector projects onto the wall surface 200 and produces a clear, stable picture.
It can be understood that the projection method provided in this embodiment identifies a plurality of stereoscopic spaces by mapping and recognizing the surrounding environment, acquires the coordinates of those spaces, determines a projection space from among them according to the projection instruction, and navigates to the projection space according to the coordinates. It then performs environment recognition on the projection space to determine a projection area available for projection, and adjusts the projection parameters according to the projection area to determine the projection posture. Finally, the projection operation is completed according to the projection posture. In this way, a suitable projection plane can be found quickly and accurately and its quality can be ensured, thereby guaranteeing the projection effect.
Fig. 9 is a schematic configuration diagram of a robot 100 according to an embodiment of the present application.
Referring to fig. 9, the robot 100 includes a processor 110 and a memory 120. The memory 120 stores a computer program or code, and the processor 110 may call the computer program or code stored in the memory 120 to perform: performing mapping and recognition of the surrounding environment to identify a plurality of stereoscopic spaces, and marking the coordinates of each stereoscopic space in the map; calculating the area of each stereoscopic space and calibrating the capacity of each stereoscopic space according to its area, wherein the capacity refers to the number of viewers the stereoscopic space can accommodate; acquiring a projection instruction and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining a projection space according to the viewer count information and the capacity of each stereoscopic space; if the projection instruction does not include viewer count information, determining the stereoscopic space where the robot is currently located as the projection space; navigating to the projection space according to the coordinates of the stereoscopic space in the map; performing environment recognition on the projection space to determine a projection area; adjusting projection parameters according to the projection area to determine a projection posture; and completing the projection operation according to the projection posture.
It can be appreciated that the robot 100 can implement all the method steps of the above method embodiments, and the same method steps and advantages will not be described herein.
The configuration illustrated in the embodiment of the present application does not constitute a specific limitation on the robot. In other embodiments of the present application, the robot may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components may be provided.
For example, referring to fig. 10, fig. 10 is a schematic hardware structure of a multi-legged robot 300 according to one embodiment of the present application. In the embodiment shown in fig. 10, the multi-legged robot 300 includes a mechanical unit 301, a communication unit 302, a sensing unit 303, an interface unit 304, a storage unit 305, a display unit 306, an input unit 307, a control module 308, and a power supply 309. The various components of the multi-legged robot 300 can be connected in any manner, including wired or wireless connections, and the like.
It will be appreciated that the specific structure of the multi-legged robot 300 shown in fig. 10 is not limiting of the multi-legged robot 300, and that the multi-legged robot 300 may include more or less components than illustrated, and that certain components do not necessarily belong to the essential structure of the multi-legged robot 300, may be omitted entirely within the scope of not changing the essence of the application, or may be combined with certain components, as desired.
The various components of the multi-legged robot 300 are described in detail below in conjunction with fig. 10:
the mechanical unit 301 is the hardware of the multi-legged robot 300. As shown in fig. 10, the mechanical unit 301 may include a drive plate 3011, a motor 3012 and a mechanical structure 3013. As shown in fig. 11, fig. 11 is an external view of the multi-legged robot 300. The mechanical structure 3013 may include a fuselage body 3014, extendable legs 3015 and feet 3016; in other embodiments, the mechanical structure 3013 may also include an extendable robotic arm (not shown), a rotatable head structure 3017, a swingable tail structure 3018, a carrying structure 3019, a saddle structure 3020, a camera structure 3021, and the like. It should be noted that each component module of the mechanical unit 301 may be one or more in number, set as required; for example, there may be 4 legs 3015, each leg 3015 may be provided with 3 motors 3012, and the corresponding number of motors 3012 is then 12.
The communication unit 302 may be used for receiving and transmitting signals, or may be used for communicating with a network and other devices, for example, receiving command information sent by the remote controller or other multi-legged robot 300 to move in a specific direction at a specific speed value according to a specific gait, and then transmitting the command information to the control module 308 for processing. The communication unit 302 includes, for example, a WiFi module, a 4G module, a 5G module, a bluetooth module, an infrared module, and the like.
The sensing unit 303 is configured to acquire information data about the surrounding environment of the multi-legged robot 300 and parameter data of the components within the multi-legged robot 300, and send them to the control module 308. The sensing unit 303 includes various sensors, such as sensors for acquiring information about the surroundings: lidar (for long-range object detection, distance determination and/or speed determination), millimeter-wave radar (for short-range object detection, distance determination and/or speed determination), cameras, infrared cameras, and global navigation satellite systems (GNSS, Global Navigation Satellite System); and sensors for monitoring components within the multi-legged robot 300: an inertial measurement unit (IMU, Inertial Measurement Unit) (for measuring velocity, acceleration and angular velocity values), plantar sensors (for monitoring the position of the plantar force point, plantar posture, and the magnitude and direction of the touchdown force), and temperature sensors (for detecting component temperatures). Other sensors that may also be provided for the multi-legged robot 300, such as load sensors, touch sensors, motor angle sensors and torque sensors, are not described in detail here.
The interface unit 304 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the multi-legged robot 300, or may be used to output (e.g., data information, power, etc.) to an external device. The interface unit 304 may include a power port, a data port (e.g., a USB port), a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, and the like.
The storage unit 305 is used to store software programs and various data. The storage unit 305 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system program, a motion control program, application programs (such as a text editor), and the like; the data storage area may store data generated by the multi-legged robot 300 during use (such as the various sensing data acquired by the sensing unit 303 and log file data), and the like. In addition, the storage unit 305 may include high-speed random access memory, and may also include non-volatile memory, such as disk memory, flash memory, or other non-volatile solid-state memory.
The display unit 306 is used to display information input by a user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 307 may be used to receive input numeric or character information. In particular, the input unit 307 may include a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations of a user (e.g., operations of the user on the touch panel 3071 or in the vicinity of the touch panel 3071 using a palm, a finger, or a suitable accessory), and drive the corresponding connection device according to a preset program. The touch panel 3071 may include two parts, a touch detection device 3073 and a touch controller 3074. Wherein, the touch detection device 3073 detects the touch orientation of the user, and detects a signal caused by the touch operation, and transmits the signal to the touch controller 3074; touch controller 3074 receives touch information from touch sensing device 3073 and converts it to touch point coordinates, which are then sent to control module 308, and can receive commands from control module 308 and execute them. The input unit 307 may include other input devices 3072 in addition to the touch panel 3071. In particular, other input devices 3072 may include, but are not limited to, one or more of a remote operated handle, etc., and are not limited herein in particular.
Further, the touch panel 3071 may overlay the display panel 3061, and when the touch panel 3071 detects a touch operation thereon or thereabout, the touch operation is transferred to the control module 308 to determine a type of touch event, and then the control module 308 provides a corresponding visual output on the display panel 3061 according to the type of touch event. Although in fig. 10, the touch panel 3071 and the display panel 3061 are implemented as two separate components to implement the input and output functions, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions, which is not limited herein.
The control module 308 is a control center of the multi-legged robot 300, connects the respective components of the entire multi-legged robot 300 using various interfaces and lines, and performs overall control of the multi-legged robot 300 by running or executing a software program stored in the storage unit 305, and calling data stored in the storage unit 305.
The power supply 309 is used to power the various components, and the power supply 309 may include a battery and a power control board for controlling battery charging, discharging, and power consumption management functions. In the embodiment shown in fig. 10, the power supply 309 is electrically connected to the control module 308, and in other embodiments, the power supply 309 may be electrically connected to the sensing unit 303 (such as a camera, a radar, a speaker, etc.), and the motor 3012, respectively. It should be noted that each component may be connected to a different power source 309 or may be powered by the same power source 309.
On the basis of the above embodiments, specifically, in some embodiments, the terminal device may be in communication connection with the multi-legged robot 300, when the terminal device communicates with the multi-legged robot 300, instruction information may be sent to the multi-legged robot 300 through the terminal device, the multi-legged robot 300 may receive the instruction information through the communication unit 302, and the instruction information may be transmitted to the control module 308 when the instruction information is received, so that the control module 308 may process to obtain the target speed value according to the instruction information. Terminal devices include, but are not limited to: a mobile phone, a tablet personal computer, a server, a personal computer, a wearable intelligent device and other electrical equipment with an image shooting function.
The instruction information may be determined according to preset conditions. In one embodiment, the multi-legged robot 300 may include a sensing unit 303, and the sensing unit 303 may generate instruction information according to the current environment in which the multi-legged robot 300 is located. The control module 308 can determine whether the current speed value of the multi-legged robot 300 satisfies the corresponding preset condition according to the instruction information. If so, the current speed value and current gait movement of the multi-legged robot 300 are maintained; if not, the target speed value and the corresponding target gait are determined according to the corresponding preset conditions, so that the multi-legged robot 300 can be controlled to move at the target speed value and the corresponding target gait. The environmental sensor may include a temperature sensor, a barometric pressure sensor, a visual sensor, an acoustic sensor. The instruction information may include temperature information, air pressure information, image information, sound information. The communication between the environmental sensor and the control module 308 may be wired or wireless. Means of wireless communication include, but are not limited to: wireless networks, mobile communication networks (3G, 4G, 5G, etc.), bluetooth, infrared.
It can be appreciated that the multi-legged robot 300 can implement all the method steps of the above method embodiments, and the same method steps and advantages will not be repeated here.
The embodiments of the present application have been described in detail above with reference to the accompanying drawings, but the present application is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present application.

Claims (10)

1. A robot projection method, the method comprising:
performing mapping and recognition of the surrounding environment to identify a plurality of stereoscopic spaces, and marking the coordinates of each stereoscopic space in a map;
calculating the area of each stereoscopic space, and calibrating the capacity of each stereoscopic space according to the area of each stereoscopic space, wherein the capacity refers to the number of viewers the stereoscopic space can accommodate;
acquiring a projection instruction and judging whether the projection instruction includes viewer count information; if the projection instruction includes viewer count information, determining a projection space according to the viewer count information and the capacity of each stereoscopic space; if the projection instruction does not include viewer count information, determining the stereoscopic space where the robot is currently located as the projection space;
navigating to the projection space according to the coordinates of the stereoscopic space in the map;
performing environment recognition on the projection space to determine a projection area;
adjusting projection parameters according to the projection area to determine a projection posture, wherein the projection parameters comprise a projection height, a projection distance and a projection angle; and
completing the projection operation according to the projection posture.
2. The robot projection method of claim 1, wherein the determining a projection space according to the viewer count information and the capacity of each of the stereoscopic spaces comprises:
in response to the projection instruction, querying for a plurality of stereoscopic spaces meeting a people-count condition, wherein the people-count condition means that the number of viewers a stereoscopic space can accommodate is greater than or equal to the viewer count; and
determining the projection space according to the query result.
3. The robot projection method of claim 2, wherein the determining the projection space according to the query result comprises:
when all the stereoscopic spaces meeting the people-count condition have been queried, acquiring the coordinates in the map of the stereoscopic spaces meeting the people-count condition and the current position coordinates of the robot;
determining the distance from each stereoscopic space meeting the people-count condition to the current position of the robot according to the coordinates of that space and the current position coordinates of the robot; and
determining the projection space according to the distances or the histories of all the stereoscopic spaces meeting the people-count condition.
4. The robot projection method of claim 2, wherein after the projection space is determined according to the viewer count information and the capacity of each of the stereoscopic spaces, the method further comprises:
determining whether the projection space is occupied;
and when the projection space is unoccupied, navigating to the projection space according to the coordinates of the stereoscopic space in the map.
5. The robot projection method of claim 4, wherein after the projection space is determined according to the number of viewers and the capacity of each of the stereoscopic spaces, the method further comprises:
when the projection space is occupied, determining whether another stereoscopic space meeting the people-number condition exists;
if another stereoscopic space meeting the people-number condition exists, determining the projection space according to the distance from each such stereoscopic space to the current position of the robot, and navigating to the projection space according to the coordinates of that stereoscopic space in the map; and
if no other stereoscopic space meeting the people-number condition exists, stopping the projection task and feeding the result back to the user.
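
The occupancy fallback of claims 4 and 5 can be sketched as follows; is_occupied and notify_user are hypothetical callbacks standing in for the robot's sensing and feedback interfaces, not functions disclosed by the patent.

    import math

    def pick_unoccupied_space(candidates, robot_xy, is_occupied, notify_user):
        # Try the qualifying spaces from nearest to farthest.
        by_distance = sorted(candidates, key=lambda s: math.hypot(
            s.map_coords[0] - robot_xy[0], s.map_coords[1] - robot_xy[1]))
        for space in by_distance:
            if not is_occupied(space):
                return space                   # navigate to this space
        notify_user("No unoccupied space can seat the requested number of viewers.")
        return None                            # stop the projection task
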
6. The robot projection method of claim 1, wherein the performing environment recognition on the projection space to determine a projection area comprises:
performing environment recognition on the projection space to determine a projection direction;
determining whether a projection plane larger than a preset size exists in the projection direction;
when a projection plane larger than the preset size exists in the projection direction, determining the projection area on the projection plane; and
when no projection plane larger than the preset size exists in the projection direction, adjusting the rotation angle of the robot to adjust the projection direction.
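
The rotate-and-scan behaviour of claim 6 might look like the sketch below; find_plane and rotate are hypothetical perception and motion callbacks, and the preset size and rotation step are assumed values chosen only for illustration.

    MIN_PLANE_WIDTH_M = 1.0     # assumed preset size
    MIN_PLANE_HEIGHT_M = 0.6
    ROTATION_STEP_DEG = 30

    def search_projection_plane(rotate, find_plane, max_turns=12):
        """Rotate the robot until a sufficiently large projection plane is seen."""
        for _ in range(max_turns):
            plane = find_plane()   # (width, height) of the largest flat surface seen, or None
            if plane is not None and plane[0] >= MIN_PLANE_WIDTH_M and plane[1] >= MIN_PLANE_HEIGHT_M:
                return plane                   # the projection area is determined on this plane
            rotate(ROTATION_STEP_DEG)          # adjust the projection direction and look again
        return None
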
7. The robot projection method of claim 6, wherein the performing environment recognition on the projection space to determine a projection direction comprises:
when a preset identification object is recognized in the projection space, acquiring features corresponding to the identification object; and
determining the projection direction according to the features corresponding to the identification object.
8. The robot projection method of claim 1, wherein after the adjusting projection parameters according to the projection area to determine a projection posture, the method further comprises:
acquiring environmental parameters, wherein the environmental parameters comprise brightness and noise;
determining whether the brightness is less than a preset brightness threshold;
determining whether the noise is less than a preset noise threshold; and
when the brightness is less than the brightness threshold and the noise is less than the noise threshold, completing the projection operation according to the projection posture.
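
The environmental gate of claim 8 reduces to a simple threshold check, sketched here with assumed threshold values and hypothetical sensor-read callbacks.

    BRIGHTNESS_THRESHOLD_LUX = 100.0   # assumed threshold values
    NOISE_THRESHOLD_DB = 50.0

    def environment_ready(read_brightness, read_noise) -> bool:
        """Project only when the room is both dim enough and quiet enough."""
        return (read_brightness() < BRIGHTNESS_THRESHOLD_LUX
                and read_noise() < NOISE_THRESHOLD_DB)
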
9. The robot projection method of claim 1, wherein after the projection operation is completed according to the projection posture, the method further comprises:
in response to an operation instruction of a user, adjusting at least one of the projection posture, the projection brightness and the volume.
10. A robot, characterized in that the robot comprises a processor and a memory, wherein
the memory is configured to store a computer program or code which, when executed by the processor, implements the robot projection method according to any one of claims 1 to 9.
CN202210225542.3A 2022-03-09 2022-03-09 Robot projection method and robot Active CN114603557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210225542.3A CN114603557B (en) 2022-03-09 2022-03-09 Robot projection method and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210225542.3A CN114603557B (en) 2022-03-09 2022-03-09 Robot projection method and robot

Publications (2)

Publication Number Publication Date
CN114603557A CN114603557A (en) 2022-06-10
CN114603557B CN114603557B (en) 2024-03-12

Family

ID=81861373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210225542.3A Active CN114603557B (en) 2022-03-09 2022-03-09 Robot projection method and robot

Country Status (1)

Country Link
CN (1) CN114603557B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1413324A (en) * 1972-04-20 1975-11-12 Captain Int Ind Ltd Apparatus and methods for monitoring the availability status of guest rooms in hotels and the like
JP2002099045A (en) * 2000-09-26 2002-04-05 Minolta Co Ltd Display device and method
JP2008009136A (en) * 2006-06-29 2008-01-17 Ricoh Co Ltd Image projection device
KR20090000637A (en) * 2007-03-13 2009-01-08 주식회사 유진로봇 Mobile intelligent robot having function of contents provision and location guidance
CN104915903A (en) * 2015-05-29 2015-09-16 深圳走天下科技有限公司 Intelligent automatic room distribution device and method
KR20180003269A (en) * 2016-06-30 2018-01-09 엘지전자 주식회사 Beam projector and operating method thereof
CN109996050A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Project the control method and control device of robot
CN210955065U (en) * 2019-11-18 2020-07-07 南京菲尔德物联网有限公司 Intelligent hotel box recommendation device
CN111476839A (en) * 2020-03-06 2020-07-31 珠海格力电器股份有限公司 Method, device and equipment for determining projection area and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107211104A (en) * 2015-02-03 2017-09-26 索尼公司 Information processor, information processing method and program
WO2020256188A1 (en) * 2019-06-20 2020-12-24 엘지전자 주식회사 Image projection method and robot implementing same

Also Published As

Publication number Publication date
CN114603557A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
US11126257B2 (en) System and method for detecting human gaze and gesture in unconstrained environments
US9552056B1 (en) Gesture enabled telepresence robot and system
EP3342324B1 (en) Cleaning robot with a controller for controlling a quiet mode
US11407116B2 (en) Robot and operation method therefor
US10038893B2 (en) Context-based depth sensor control
US9014848B2 (en) Mobile robot system
KR20190088122A (en) Mobile home robot and controlling method of the mobile home robot
CN108234918B (en) Exploration and communication architecture method and system of indoor unmanned aerial vehicle with privacy awareness
KR20180039437A (en) Cleaning robot for airport and method thereof
KR20180038879A (en) Robot for airport and method thereof
US9477302B2 (en) System and method for programing devices within world space volumes
JP6788845B2 (en) Remote communication methods, remote communication systems and autonomous mobile devices
US11613354B2 (en) Method and device for controlling flight, control terminal, flight system and processor
US10889001B2 (en) Service provision system
CN114603557B (en) Robot projection method and robot
US20200033874A1 (en) Systems and methods for remote visual inspection of a closed space
US10620717B2 (en) Position-determining input device
US20230131217A1 (en) Methods of adjusting a position of images, video, and/or text on a display screen of a mobile robot
CN115731349A (en) Method and device for displaying house type graph, electronic equipment and storage medium
US20150153715A1 (en) Rapidly programmable locations in space
CN115686233A (en) Interaction method, device and interaction system for active pen and display equipment
KR20200076158A (en) Electronic device and method for collaborating another electronic device
US20230321834A1 (en) Remote robot system and method of controlling remote robot system
CN114745509B (en) Image acquisition method, device, foot robot and storage medium
CN117589153B (en) Map updating method and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant