CN113625700B - Self-walking robot control method, device, self-walking robot and storage medium - Google Patents

Self-walking robot control method, device, self-walking robot and storage medium

Info

Publication number
CN113625700B
CN113625700B (application CN202010382048.9A)
Authority
CN
China
Prior art keywords
area
explored
obstacle
walking
self
Prior art date
Legal status
Active
Application number
CN202010382048.9A
Other languages
Chinese (zh)
Other versions
CN113625700A (en)
Inventor
王磊 (Wang Lei)
吴震 (Wu Zhen)
谢濠键 (Xie Haojian)
Current Assignee
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Stone Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Stone Innovation Technology Co ltd filed Critical Beijing Stone Innovation Technology Co ltd
Priority to CN202010382048.9A
Publication of CN113625700A
Application granted
Publication of CN113625700B

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: Control of position or course using optical position detecting means
    • G05D 1/0246: Control of position or course using a video camera in combination with image processing means


Abstract

Provided are a self-walking robot control method and device, a self-walking robot, and a storage medium. The method comprises the following steps: selecting a position as the center point of an area to be explored, at least part of which has not yet been explored while the self-walking robot executes its task; moving to the center point to explore for surrounding obstacles, recording the positions of any detected obstacles, and marking the area to be explored as an explored area; and repeating the above steps until obstacle exploration is completed for all areas involved in the task. According to the invention, the self-walking robot is less likely to collide with obstacles.

Description

Self-walking robot control method, device, self-walking robot and storage medium
Technical Field
The invention belongs to the field of intelligent robots, and particularly relates to a self-walking robot control method and device, a self-walking robot and a storage medium.
Background
With the improvement of living standards and the development of technology, self-walking robots are widely used. A self-walking robot can automatically clean an area by sweeping, mopping, vacuuming, and similar operations.
During cleaning, the self-walking robot detects obstacles that may be encountered on its current working path in real time and executes corresponding obstacle avoidance operations. However, in some situations, such as corners, the position of an obstacle is not easily discovered in time. Moreover, an existing self-walking robot usually detects obstacles while it is already cleaning, so it may collide with an obstacle before detecting it, resulting in a poor user experience. In addition, existing self-walking robots tend to leave patches uncleaned, causing a missed-area phenomenon.
Disclosure of Invention
In order to solve the problem in the prior art that a self-walking robot easily collides with obstacles, the invention provides a self-walking robot control method, a self-walking robot control device, a self-walking robot, and a computer-readable storage medium.
According to a first aspect of the present invention, there is provided a self-walking robot control method, the method comprising:
selecting a position as the center point of an area to be explored, at least part of which has not yet been explored while the self-walking robot executes its task;
moving to the center point to explore for surrounding obstacles, and recording the positions of any detected obstacles to complete exploration of the area to be explored;
and repeating all the above steps until all areas involved in the task have been explored.
According to the method of the present invention, further, the moving to the center point to perform surrounding obstacle exploration comprises:
moving to the center point and rotating in place to acquire image information of the area to be explored, and judging from the image information whether an obstacle exists in the area to be explored.
According to the method of the present invention, further, the moving to the center point and rotating in place to acquire image information of the area to be explored comprises:
moving to the center point and rotating continuously in place for at least one full turn to acquire image information of the area to be explored; or
moving to the center point and stopping once after each rotation by a predetermined angle to acquire image information of the area to be explored.
According to the method of the present invention, further, the predetermined angle is equal to or smaller than the field angle of the device used to acquire the image information.
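As an illustrative sketch (not part of the patent text), the constraint that the predetermined stopping angle must not exceed the camera's field angle guarantees that consecutive images overlap, so one full turn leaves no angular gap; the function name and the example angles below are assumptions:

```python
import math

def rotation_stops(field_of_view_deg: float, predetermined_angle_deg: float) -> int:
    """Number of stops needed for 360-degree coverage when pausing after every
    rotation by `predetermined_angle_deg`; the angle must not exceed the field
    of view, otherwise gaps between consecutive images would go unobserved."""
    if predetermined_angle_deg > field_of_view_deg:
        raise ValueError("predetermined angle must not exceed the field of view")
    return math.ceil(360.0 / predetermined_angle_deg)

# e.g. a hypothetical 72-degree camera stopping every 60 degrees: 6 stops per turn
print(rotation_stops(72.0, 60.0))
```

A smaller predetermined angle increases the overlap between successive images (more stops) at the cost of a slower scan.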
According to the method of the invention, further, the method comprises the following steps:
after moving to the center point to explore for surrounding obstacles, starting to walk within the explored area and traversing it, executing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal; or
after obstacle exploration is completed for all areas involved in the current task, starting to walk and traversing all areas involved in the current task, executing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal; or
if the area involved in the task comprises two or more partitions, starting to walk and traversing the current partition after exploration of that partition is completed; after the exploration and walking of the current partition are completed, starting the exploration and walking of the next partition, until all areas involved in the task have been traversed, executing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal.
According to the method of the invention, further, the method comprises the following steps:
in the process of executing the task, marking the area in which the self-walking robot has already walked as a walked area;
the walking within the explored area specifically comprises: the walking action traverses at least the part of the area to be explored outside the walked area.
According to the method of the present invention, further, the selecting a position as the center point of the area to be explored specifically comprises:
selecting, as the area to be explored, a preset area whose geometric center lies on the boundary of the explored area and which overlaps the explored area with the smallest overlapping area, and taking the geometric center of that area as the center point.
According to the method of the present invention, further, the selecting a position as the center point of the area to be explored specifically comprises:
when the task begins, arbitrarily selecting a point as the center point of the area to be explored, and constructing a preset area centered on that point as the area to be explored.
According to the method of the present invention, further, the area to be explored includes any one of the following shapes: circular, rectangular or polygonal.
According to the method of the invention, further, the method comprises the following steps:
after judging from the image information that an obstacle exists in the area to be explored, identifying the type of the obstacle;
and, when walking within the explored area, identifying and confirming the type of the obstacle again once within a certain distance of the detected obstacle.
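A minimal sketch of this two-stage identification, with a hypothetical `classify_near` callback standing in for whatever close-range recognition model the robot actually uses, and an assumed confirmation distance of 0.5 m:

```python
import math

CONFIRM_DISTANCE = 0.5  # metres; hypothetical re-identification range

def confirm_obstacle_types(obstacles, robot_path, classify_near):
    """Re-identify each obstacle's type once the robot's walking path passes
    within CONFIRM_DISTANCE of its recorded position. `obstacles` is a list of
    dicts with 'pos' (x, y) and 'type' keys; the confirmed close-range type
    overwrites the tentative long-range one."""
    for obs in obstacles:
        ox, oy = obs["pos"]
        for (x, y) in robot_path:
            if math.hypot(x - ox, y - oy) <= CONFIRM_DISTANCE:
                obs["type"] = classify_near(obs["pos"])  # close-range re-check
                break
    return obstacles

obstacles = [{"pos": (1.0, 1.0), "type": "unknown"}]
path = [(0.0, 0.0), (0.8, 0.9)]  # second point is about 0.22 m from the obstacle
print(confirm_obstacle_types(obstacles, path, lambda p: "shoe")[0]["type"])  # shoe
```

The long-range classification taken during the rotating scan is kept only as a provisional label; the short-range pass is what is finally recorded.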
According to a second aspect of the present invention, there is provided a self-walking robot control device comprising:
a center point selection unit for selecting a position as the center point of an area to be explored, at least part of which has not yet been explored while the self-walking robot executes its task;
an obstacle exploration recording unit for moving to the center point to explore for surrounding obstacles, recording the positions of any detected obstacles, and completing exploration of the current area to be explored;
and a repeated execution unit for repeatedly executing all the above steps until all areas involved in the task have been explored.
According to the device of the present invention, further, the moving to the center point to perform surrounding obstacle exploration comprises:
moving to the center point and rotating in place to acquire image information of the area to be explored, and judging from the image information whether an obstacle exists in the area to be explored.
According to the device of the present invention, further, the moving to the center point and rotating in place to acquire image information of the area to be explored comprises:
moving to the center point and rotating continuously in place for at least one full turn to acquire image information of the area to be explored; or
moving to the center point and stopping once after each rotation by a predetermined angle to acquire image information of the area to be explored.
According to the apparatus of the present invention, further, the predetermined angle is equal to or smaller than the field angle of the device used to acquire the image information.
According to the device of the invention, further, after moving to the center point to explore for surrounding obstacles, the robot walks within the explored area, traversing it; or, after obstacle exploration is completed for all areas involved in the current task, the robot starts to walk and traverses all areas involved in the current task; or, if the area involved in the task comprises two or more partitions, the robot starts to walk and traverses the current partition after exploration of that partition is completed, and after the exploration and walking of the current partition are completed, starts the exploration and walking of the next partition, until all areas involved in the current task have been traversed; and the device further comprises an obstacle avoidance operation execution unit for executing an obstacle avoidance operation whenever an obstacle is detected again during the walking traversal.
The device according to the invention further comprises:
a walked area marking unit for marking the area in which the self-walking robot has already walked during the task as a walked area;
the walking within the explored area specifically comprises: the walking action traverses at least the part of the area to be explored outside the walked area.
According to the device of the present invention, further, the selecting a position as the center point of the area to be explored specifically comprises:
selecting, as the area to be explored, a preset area whose geometric center lies on the boundary of the explored area and which overlaps the explored area with the smallest overlapping area, and taking the geometric center of that area as the center point.
According to the device of the present invention, further, the selecting a position as the center point of the area to be explored specifically comprises:
when the task begins, arbitrarily selecting a point as the center point of the area to be explored, and constructing a preset area centered on that point as the area to be explored.
According to the device of the invention, further, the area to be explored comprises any one of the following shapes: circular, rectangular or polygonal.
The device according to the invention further comprises: a type identification unit for identifying the type of an obstacle after judging from the image information that the obstacle exists in the area to be explored, and for identifying and confirming the type of the obstacle again once within a certain distance of the detected obstacle while walking within the explored area.
According to a third aspect of the present invention there is provided a self-walking robot comprising a processor and a memory coupled to the processor, the memory storing instructions loadable by the processor and performing the method as described above.
According to a fourth aspect of the present invention there is provided a computer readable storage medium having stored thereon instructions loadable by a processor and performing the above method according to the present invention.
Advantageous effects
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance, and the positions of existing obstacles are recorded, so that the self-walking robot can execute obstacle avoidance operations when encountering obstacles during cleaning, thereby avoiding collisions and improving the user experience. In addition, according to the present invention, obstacles outside the area to be explored can also be identified; although such an obstacle is farther from the self-walking robot and the recognition rate is lower, it can be confirmed multiple times from the center point of the next area to be explored once that area is determined, thereby improving the obstacle recognition rate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present disclosure;
FIG. 2 is a perspective view of the configuration of the automatic cleaning apparatus provided in an embodiment of the present disclosure;
FIG. 3 is a top view of a robotic cleaning device provided in an embodiment of the present disclosure;
FIG. 4 is a bottom view of the robotic cleaning device provided in an embodiment of the present disclosure;
Fig. 5 is a schematic flow chart of a control method of an automatic cleaning device according to an embodiment of the disclosure;
Fig. 6a and fig. 6b are schematic diagrams of a method for determining a center of a region according to an embodiment of the disclosure;
fig. 7 is a schematic structural view of a control device for an automatic cleaning apparatus according to an embodiment of the present disclosure;
fig. 8 is an electronic structure schematic diagram of a robot according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present disclosure to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may also be referred to as a second element, and similarly a second element may also be referred to as a first element, without departing from the scope of the embodiments of the present disclosure.
Embodiments of the present disclosure provide a possible application scenario involving self-walking robots such as floor-sweeping robots, mopping robots, vacuum cleaners, weeding robots, and the like. In this embodiment, as shown in fig. 1, a household sweeping robot 100 is taken as an example. During operation, the sweeping robot cleans according to a preset route or an automatically planned route on the robot's built-in map, detecting in real time obstacles that may be encountered on the current working path and executing corresponding obstacle avoidance operations. In order to find obstacles in a cleaning area in time, the robot explores a small part of the cleaning area at a time, detects any obstacles there, cleans that part, and then repeats the same steps for another part, until all areas to be cleaned are finished. Whenever an obstacle is detected in the current cleaning area, it is marked, and obstacle avoidance is performed when that area is cleaned later. In this embodiment, the robot may be provided with a touch-sensitive display, or be controlled by a mobile terminal, to receive operation instructions input by the user.
The automatic cleaning device may be provided with various sensors, such as a bumper, cliff sensors, ultrasonic sensors, infrared sensors, a magnetometer, an accelerometer, a gyroscope, and an odometer (the specific structure of each sensor is not described in detail; any of the above sensors may be used in the automatic cleaning device). The robot may further be provided with a wireless communication module, such as a Wi-Fi or Bluetooth module, for connecting to an intelligent terminal or server and receiving operation instructions transmitted from the intelligent terminal or server.
As shown in fig. 2, the sweeping robot 100 may travel on the ground by various combinations of movements relative to three mutually perpendicular axes defined by the main body 110: front-rear axis X, lateral axis Y and central vertical axis Z. The forward driving direction along the front-rear axis X is denoted as "forward direction", and the backward driving direction along the front-rear axis X is denoted as "backward direction". The direction of the transverse axis Y is substantially the direction extending between the right and left wheels of the robot along the axis defined by the center point of the drive wheel module 141.
The robot cleaner 100 may rotate about the Y axis: it is "pitched up" when its forward portion is tilted upward, and "pitched down" when its rearward portion is tilted upward. In addition, the robot 100 may rotate about the Z axis: viewed along the forward direction of the robot cleaner 100, it turns right when tilted to the right of the X axis and turns left when tilted to the left of the X axis.
As shown in fig. 3, the robot cleaner 100 includes a machine body 110, a sensing system 120, a control system, a driving system 140, a cleaning system, an energy system, and a man-machine interaction system 180.
The machine body 110 includes a forward portion 111 and a rearward portion 112, and has an approximately circular shape (circular at both front and rear), although it may have other shapes, including but not limited to an approximate D-shape with a rounded front and squared rear, or a rectangular or square shape.
As shown in fig. 3, the sensing system 120 includes a position determining device 121 on the machine body 110, a collision sensor and a proximity sensor provided on a bumper 122 at the forward portion 111 of the machine body 110, cliff sensors provided at the lower portion of the machine body, and sensing devices such as a magnetometer, an accelerometer, a gyroscope (Gyro), and an odometer (ODO) provided inside the machine body, which provide various position and movement state information of the machine to the control system 130. The position determining device 121 includes, but is not limited to, a camera or a laser distance sensor (LDS).
As shown in fig. 3, the forward portion 111 of the machine body 110 may carry a bumper 122. As the driving wheel module 141 propels the robot across the floor during cleaning, the bumper 122 detects one or more events on the travel path of the robot 100, such as obstacles or walls, via a sensor system disposed on it, for example an infrared sensor, and the robot 100 may respond to such events, for example by moving away from the obstacle, by controlling the driving wheel module 141.
The control system 130 is disposed on a circuit board in the machine body 110 and includes a non-transitory memory, such as a hard disk, flash memory, or random-access memory, and a computing processor, such as a central processing unit or an application processor. Using a positioning algorithm such as simultaneous localization and mapping (SLAM), the application processor draws a live map of the robot's environment from the obstacle information fed back by the laser distance sensor. Combining this with the distance and speed information fed back by the sensors on the bumper 122 and by the cliff sensors, magnetometer, accelerometer, gyroscope, and odometer, it comprehensively judges the sweeper's current working state and pose, such as crossing a threshold, climbing onto a carpet, being at a cliff edge, being stuck above or below, having a full dust box, or being picked up, and then issues a specific next-step action strategy for each situation, so that the robot's work better meets the owner's requirements and provides a better user experience.
As shown in fig. 4, the drive system 140 may maneuver the robot 100 to travel across the ground based on drive commands carrying distance and angle information (e.g., x, y, and θ components). The drive system 140 comprises a driving wheel module 141 that can control both the left and right wheels; preferably, the driving wheel module 141 comprises a left driving wheel module and a right driving wheel module, so as to control the movement of the machine more precisely. The left and right driving wheel modules are opposed along a transverse axis defined by the main body 110. In order for the robot to move more stably or with greater motion capability on the ground, it may include one or more driven wheels 142, including but not limited to universal wheels. Each driving wheel module comprises a travelling wheel, a driving motor, and a control circuit for the driving motor, and may also be connected to a circuit for measuring driving current and to the odometer. The driving wheel module 141 may be detachably coupled to the main body 110 to facilitate disassembly and maintenance. Each driving wheel may have a biased drop-down suspension system, movably secured, e.g., rotatably attached, to the robot body 110, which receives a spring bias that pushes the wheel downward and away from the robot body 110. The spring bias allows the driving wheel to maintain contact and traction with the floor with a certain ground force, while the cleaning elements of the sweeping robot 100 also contact the floor with a certain pressure.
The cleaning system may be a dry cleaning system and/or a wet cleaning system. In a dry cleaning system, the main cleaning function comes from the sweeping system 151, composed of a roller brush, a dust box, a fan, an air outlet, and the connecting parts between the four. The roller brush, which has a certain interference with the ground, sweeps the garbage on the floor and rolls it to the front of the dust suction opening between the roller brush and the dust box, where it is drawn into the dust box by the airflow that the fan generates through the dust box. The dry cleaning system may also include a side brush 152 with a rotating shaft angled relative to the floor, for moving debris into the roller brush area of the cleaning system.
The energy system includes a rechargeable battery, such as a nickel-metal-hydride or lithium battery. The rechargeable battery can be connected to a charging control circuit, a battery pack charging temperature detection circuit, and a battery under-voltage monitoring circuit, all of which are connected to the single-chip control circuit. The host charges by connecting to the charging pile through charging electrodes arranged on the side or bottom of the body. If dust adheres to an exposed charging electrode, the cumulative effect of charge during charging can melt and deform the plastic around the electrode, or even deform the electrode itself, so that normal charging can no longer continue.
The man-machine interaction system 180 comprises keys on the host panel for the user to select functions; it may also comprise a display screen and/or indicator lights and/or a speaker, which show the user the machine's current state or function selection; and it may further comprise a mobile phone client program. For a path-navigation type automatic cleaning device, the mobile phone client can show the user a map of the environment in which the device is located, as well as the machine's position, providing richer and more user-friendly function items.
Fig. 5 is a schematic diagram of one embodiment of the method according to the present invention. Referring to fig. 5, the self-walking robot control method according to this embodiment includes the following steps:
S502, selecting a position as the center point of an area to be explored, at least part of which has not yet been explored while the self-walking robot executes its task;
S504, moving to the center point to explore for surrounding obstacles, and recording the positions of any detected obstacles to complete exploration of the current area to be explored;
S506, repeatedly executing all the above steps until all areas involved in the task have been explored.
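The loop S502 to S506 can be sketched as follows. This is an illustrative outline only, not the patented implementation; the helpers `pick_center`, `move_to`, and `scan_for_obstacles` are hypothetical stand-ins for the robot's actual planning, navigation, and perception routines:

```python
def explore_task(task_areas, pick_center, move_to, scan_for_obstacles):
    """S502-S506: repeatedly pick a center point for an unexplored area,
    move there, scan for surrounding obstacles, and mark the area as
    explored, until every area involved in the task has been explored."""
    explored = set()    # explored areas (S506 termination condition)
    obstacle_map = []   # recorded obstacle positions (S504)
    while explored != set(task_areas):
        area, center = pick_center(task_areas, explored)  # S502
        move_to(center)                                   # S504: go there...
        obstacle_map.extend(scan_for_obstacles(center))   # ...record obstacles
        explored.add(area)                                # mark area explored
    return obstacle_map

# Toy run: two areas, with one obstacle found at the first center point
found = explore_task(
    ["A", "B"],
    pick_center=lambda areas, done: next(
        (a, (len(done), 0)) for a in areas if a not in done),
    move_to=lambda c: None,
    scan_for_obstacles=lambda c: [(3, 4)] if c == (0, 0) else [],
)
# found == [(3, 4)]
```

The returned `obstacle_map` is what later enables obstacle avoidance during the walking traversal, since each obstacle's position was recorded before the robot ever sweeps past it.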
In one or more specific embodiments, a preset area whose geometric center lies on the boundary of the explored area, and which overlaps the explored area with the smallest overlapping area, is selected as the area to be explored, and its geometric center is taken as the center point. The preset area may be determined by means of a predetermined diameter or radius; the principle for setting it is that it should not exceed the detection range of the self-walking robot's built-in image acquisition device. According to this embodiment, since the center point lies on the boundary of the explored area, the preset area serving as the area to be explored necessarily overlaps the explored area, and the explored area guarantees that the center point on its boundary has no unidentified obstacles nearby, so the self-walking robot will not bump into obstacles while moving to the center point. In addition, no patch is left unswept during cleaning, so the missed-area phenomenon is avoided.
In other embodiments, when the task begins, a point is arbitrarily selected as the center point of the area to be explored, and a preset area centered on that point is constructed as the area to be explored.
In one or more specific embodiments, the preset area may have any of the following shapes: circular, rectangular, or polygonal, preferably circular. The center of the region is the geometric center of the preset area; for example, the center of a circular area is the center of its circle, and the intersection point of the diagonals of a rectangle or polygon is its geometric center.
In one or more embodiments, after moving to the center point to explore for surrounding obstacles, the robot may start walking within the explored area, traversing it; or, after obstacle exploration is completed for all areas involved in the current task, the robot may start walking and traverse all areas involved in the current task; or, if the area involved in the task comprises two or more partitions, the robot may traverse the current partition after exploration of that partition is completed, and after the exploration and walking of the current partition are completed, start the exploration and walking of the next partition, until all areas involved in the current task have been traversed. An obstacle avoidance operation is executed whenever an obstacle is detected again during the walking traversal.
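The third, partition-wise strategy alternates exploration and traversal per partition. A hedged sketch, assuming hypothetical `explore` and `traverse` callbacks for the robot's per-partition routines:

```python
def clean_by_partitions(partitions, explore, traverse):
    """Partition-wise strategy: for each partition, first complete obstacle
    exploration, then walk/traverse it, before moving to the next partition."""
    log = []
    for part in partitions:
        explore(part)                  # obstacle exploration of this partition
        log.append(("explored", part))
        traverse(part)                 # then the walking traversal of the same one
        log.append(("traversed", part))
    return log

order = clean_by_partitions(["kitchen", "hall"], lambda p: None, lambda p: None)
```

Compared with exploring all areas up front, this interleaving lets the robot start cleaning sooner while still guaranteeing each partition is fully explored before it is traversed.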
Fig. 1 is a schematic diagram illustrating the step of determining the area to be explored in the method of the present invention, taking a circle as an example. As shown in fig. 1, 200 denotes an explored area (dark gray portion) and 201 denotes an unexplored area (blank portion). According to this embodiment, the explored area 200 is first recorded on the map; then a position on the boundary between the explored area 200 and the unexplored area 201 is chosen as the center point, and a preset radius is used to determine the circular area to be explored 202. The area to be explored 202 overlaps the explored area 200.
In one or more specific embodiments, the region to be explored may be the region that overlaps the explored region with the smallest overlapping area. In this way, the overlap between the area to be explored and the explored area is kept as small as possible, reducing the time spent on repeated sweeping and improving cleaning efficiency.
For the cleaning robot, a method of determining the center point is shown in fig. 6. Comparing fig. 6a and fig. 6b, the candidate center points A and B both lie on the boundary between the explored area and the unexplored area, but for a circular preset area, the area to be explored in fig. 6b clearly overlaps the explored area with the smaller overlapping area. By moving the candidate center point along the boundary between the explored and unexplored regions in this way, the point on the boundary whose preset area yields the smallest overlapping area is found and taken as the center point.
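On an occupancy-grid map, the boundary search just described can be sketched as follows. This is a minimal brute-force illustration assuming a boolean grid of explored cells and a circular preset area; the function name and grid representation are assumptions, not from the patent:

```python
import numpy as np

def pick_center(explored, radius):
    """Return the frontier cell (an explored cell adjacent to an
    unexplored one) whose circular area-to-explore overlaps the
    explored region by the fewest cells. `explored` is a 2-D boolean
    occupancy grid; ties keep the first candidate found."""
    h, w = explored.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best, best_overlap = None, None
    for y in range(h):
        for x in range(w):
            if not explored[y, x]:
                continue
            # frontier test: at least one 4-neighbour is unexplored
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if not any(0 <= ny < h and 0 <= nx < w and not explored[ny, nx]
                       for ny, nx in nbrs):
                continue
            # count explored cells inside the candidate circular area
            disk = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
            overlap = int(np.logical_and(disk, explored).sum())
            if best_overlap is None or overlap < best_overlap:
                best, best_overlap = (y, x), overlap
    return best
```

A real implementation would restrict the search to boundary cells and use incremental overlap updates rather than rescanning the whole grid per candidate; the exhaustive loop here only makes the selection criterion explicit.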
As will be appreciated with reference to figs. 6a and 6b, in one or more embodiments, moving to the center point to explore the surrounding obstacles may include: moving to the center point and rotating in place to obtain image information within the region to be explored, and judging from the image information whether an obstacle exists in the region to be explored. The image information is acquired by an image acquisition device, which may be one or more optical, electrical or acoustic image acquisition devices; examples include optical cameras and laser, infrared or ultrasonic detectors or sensors. In a specific embodiment, an optical camera is used as the optical, electrical or acoustic image acquisition device. In the case of multiple optical, electrical or acoustic image acquisition devices, they may be disposed within the self-walking robot uniformly or symmetrically around a center point. One or more image acquisition devices may be built into the self-walking robot.
According to the present invention, the rotation may be continuous, or the robot may pause once every predetermined angle of rotation. In a more specific embodiment, the rotation may be at least one full revolution, i.e. 360 degrees. In another more specific embodiment, the rotation may be 720 degrees. In one or more other embodiments, the rotation of the self-walking robot at the center point may also be less than one revolution, as long as the detection angles of its one or more optical, electrical or acoustic image acquisition devices together cover at least 360 degrees. For example, a self-walking robot with a single built-in optical camera having a 120-degree field of view need only rotate 240 degrees to achieve a 360-degree detection angle.
In one or more further embodiments, the self-walking robot need not rotate at the center point at all, so long as the combined detection angle of the one or more optical, electrical or acoustic image acquisition devices is at least 360 degrees. For example, when the self-walking robot is provided with three optical cameras each having a 120-degree field of view arranged symmetrically around its center (the three fields of view together covering exactly 360 degrees), the self-walking robot does not need to rotate.
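The rotation requirement in the two examples above can be expressed as a small formula. The sketch below assumes identical cameras evenly spaced around the robot, so that each camera only needs to sweep the angular gap to its neighbour; the patent itself gives only the 1-camera and 3-camera cases, and the general formula is an illustrative assumption:

```python
def required_rotation_deg(num_cameras, fov_deg):
    """Minimum in-place rotation (degrees) for full 360-degree detection
    coverage, assuming `num_cameras` identical cameras with field of view
    `fov_deg` spaced evenly around the robot. Each camera must sweep the
    gap between adjacent cameras: 360/num_cameras minus its own FOV."""
    gap = 360.0 / num_cameras - fov_deg
    return max(0.0, gap)
```

This reproduces both examples in the text: one 120-degree camera must rotate 240 degrees, while three symmetric 120-degree cameras require no rotation.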
In one or more embodiments, each time the self-walking robot rotates through a predetermined angle, it pauses for a predetermined time so that the one or more optical, electrical or acoustic image acquisition devices can perform detection. The predetermined angle may be, for example, 15 degrees, 30 degrees, 45 degrees, 90 degrees or 120 degrees. The predetermined time may be, for example, several seconds, tens of seconds or several minutes. The detection at each pause position, for example photographing, may be performed once or multiple times. This further ensures that obstacles present around the self-walking robot are detected.
In one or more specific embodiments, the one or more optical, electrical or acoustic image acquisition devices may be one or more optical cameras, and the predetermined angle is less than or equal to a field angle of the one or more optical cameras. Specifically, the predetermined angle may be, for example, 90 degrees, and the camera view angle may be, for example, 120 degrees.
In one or more specific embodiments, the walking traverses the area to be explored excluding the already-walked area. In this way, the cleaning time can be further reduced and the cleaning efficiency improved.
In one or more embodiments, the following steps are performed repeatedly: selecting a location as the center point of the area to be explored; moving to the center point to explore the surrounding obstacles; recording the positions of the detected obstacles; and marking the area to be explored as an explored area, until obstacle exploration has been completed for all areas involved in the task.
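The repeated steps above can be sketched as a top-level loop. The three callables are assumed interfaces standing in for the robot's subsystems; their names are illustrative, not from the patent:

```python
def explore_task_area(select_center, explore_at, all_explored):
    """Top-level loop of the method: repeatedly pick a center point,
    move there and scan for obstacles, and mark the surrounding area
    as explored, until every area involved in the task is explored.
    `explore_at` is assumed to return the obstacle positions it found
    and to mark its area explored as a side effect."""
    obstacles = []
    while not all_explored():
        center = select_center()
        obstacles.extend(explore_at(center))
    return obstacles
```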
According to an embodiment of the invention, on a first cleaning run the self-walking robot has no map stored in advance; on a non-first cleaning run it has previously stored the old map, so that during the non-first cleaning the old map is updated according to the newly formed map.
In one or more specific embodiments, the method further comprises: after it is determined from the image information that an obstacle exists in the area to be explored, performing type identification on the obstacle; and, when walking in the explored area, identifying and confirming the type of the obstacle again within a certain distance of the detected obstacle.
As an example, after it is determined from the image information that an obstacle exists in the area to be explored, a primary type recognition is performed on the obstacle as follows. The robot acquires image information in real time through the image acquisition device while advancing; when the image information satisfies a preset model condition, the presence of an obstacle at the current position is confirmed and category identification is performed according to the preset model. Specifically, when an obstacle image is present, the robot compares it with the image models it has been trained on and classifies the identified obstacle according to the comparison result; for example, when an image of a shoe is captured, the image is matched against the several models stored by the robot, and if the proportion matching the shoe model is highest, the obstacle is classified as a shoe. If recognition of the obstacle image is ambiguous, the obstacle may be recognized from several angles. For example, if the object ahead is recognized as possibly a coil of wires with 80% probability, possibly excrement with 70% probability and possibly a ball with 75% probability, the three probabilities are too close to allow classification, so the robot may acquire the image information again from another angle and perform a second recognition, repeating until one class stands out by a large probability margin, for example a coil at 80% probability with all other classes at 50% or below, whereupon the obstacle is classified as a coil.
During the cleaning process, the type of the obstacle is confirmed a second time. Specifically, when the robot walks in the explored area and comes within a certain, usually short, distance of a detected obstacle (for example 20 cm), it identifies the obstacle's type again: it acquires the obstacle image information from this closer position while advancing, compares the obstacle image with its trained image models, and classifies the identified obstacle according to the comparison result, in the same way as the shoe example above. After this second confirmation, the obstacle is finally identified as a certain type of obstacle.
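The ambiguity test driving the re-recognition can be sketched as a confidence-margin check. The 0.3 margin below is an illustrative assumption chosen to reproduce the patent's numeric example (80% against all others at 50% or below is decisive; 80% against 75% is not); the function name is also assumed:

```python
def classify_with_margin(probs, margin=0.3):
    """Accept the top class only when it beats the runner-up by at
    least `margin`; otherwise return None, signalling that the robot
    should re-acquire the image from another angle and recognize again.
    `probs` maps class names to recognition probabilities."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None  # ambiguous: retry from a different viewing angle
```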
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance, and the position of any obstacle found is recorded, so that the self-walking robot can perform an obstacle avoidance operation when it encounters the obstacle during cleaning, avoiding collisions with the obstacle and improving the user experience. In addition, according to the present invention, an obstacle outside the current area to be explored can also be identified; although such an obstacle is farther from the self-walking robot and the recognition rate is therefore lower, it can be confirmed again at the center point of the next area to be explored once that area is determined, thereby improving the obstacle recognition rate.
As shown in fig. 7, according to a second aspect of the present invention, there is provided a self-walking robot control device comprising:
A center point selection unit 702, configured to select a location as a center point of the area to be explored; at least part of the areas to be explored are areas which are not explored yet when the self-walking robot executes the task;
An obstacle search recording unit 704, configured to move to the center point to search for obstacles around, thereby recording the position of the detected obstacle and marking the area to be searched as a searched area;
And the repeated execution unit 706 is configured to repeatedly execute all the steps until the obstacle search is completed for all the areas related to the current task.
In one or more specific embodiments, the apparatus can further include: and the obstacle avoidance operation execution unit is used for executing obstacle avoidance operation when the obstacle is detected again in the walking traversal process.
In one or more specific embodiments, the apparatus can further include: and the walking area marking unit is used for marking the area which the self-walking robot has walked in the process of executing the task as a walking area.
In one or more embodiments, the apparatus may further include: and the type recognition unit is used for recognizing the type of the obstacle after judging that the obstacle exists in the area to be explored through the image information, and recognizing and confirming the type of the obstacle again within a certain distance range from the detected obstacle when walking in the explored area.
The self-walking robot control device according to the present invention is used for implementing the self-walking robot control method described in the above embodiments, and the same technical features have the same technical effects, and are not described herein again.
According to the invention, whether an obstacle exists in the preset area to be explored is determined in advance, and the position of any obstacle found is recorded, so that the self-walking robot can perform an obstacle avoidance operation when it encounters the obstacle during cleaning, avoiding collisions with the obstacle and improving the user experience. In addition, according to the present invention, an obstacle outside the current area to be explored can also be identified; although such an obstacle is farther from the self-walking robot and the recognition rate is therefore lower, it can be confirmed again at the center point of the next area to be explored once that area is determined, thereby improving the obstacle recognition rate.
The presently disclosed embodiments provide a non-transitory computer readable storage medium storing computer program instructions which, when invoked and executed by a processor, implement the method steps of any of the above.
An embodiment of the disclosure provides a robot comprising a processor and a memory storing computer program instructions executable by the processor, the processor implementing the method steps of any of the previous embodiments when the computer program instructions are executed.
As shown in fig. 8, the robot may include a processing device (e.g., a central processor, a graphic processor, etc.) 801, which may perform various appropriate actions and processes according to programs stored in a Read Only Memory (ROM) 802 or programs loaded from a storage device 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic robot 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, a hard disk; communication means 809. The communication means 809 may allow the electronic robot to communicate wirelessly or by wire with other robots to exchange data. While fig. 8 shows an electronic robot with various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a robot software program. For example, embodiments of the present disclosure include a robot software program product comprising a computer program embodied on a readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the robot; or may exist alone without being assembled into the robot.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Finally, it should be noted that: the above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (20)

1. A method of controlling a self-walking robot, the method comprising:
Selecting a position as a central point of the area to be explored; at least part of the areas to be explored are areas which are not explored yet when the self-walking robot executes the task;
moving to the central point to search for surrounding obstacles, thereby recording the positions of the detected obstacles and marking the area to be searched as a searched area;
repeating all the steps until the obstacle exploration is completed for all the areas related to the task;
The selecting a position as a center point of the area to be explored specifically comprises: selecting, on the boundary of the explored area, a preset area that overlaps the explored area with the smallest overlapping area as the area to be explored, and taking the geometric center of the area to be explored as the center point.
2. The method of claim 1, wherein the moving to the center point for obstacle exploration around comprises: and moving to the central point and performing self-rotation to obtain image information in the region to be explored, and judging whether the region to be explored has an obstacle or not according to the image information.
3. The method of claim 2, wherein the moving to the center point and the self-rotating to obtain image information within the region to be explored comprises:
moving to the central point and continuously performing self-rotation for at least one circle to acquire image information in the region to be explored; or alternatively
And moving to the central point and stopping once every preset angle of rotation of the central point to acquire the image information in the area to be explored.
4. A method according to claim 3, wherein the predetermined angle is equal to or less than the angle of view of the means for acquiring the image information.
5. The method as recited in claim 1, further comprising:
after moving to the central point to explore the obstacle around, starting walking in the explored area, traversing the explored area, and executing obstacle avoidance operation when the obstacle is detected again in the walking traversing process; or alternatively
After the obstacle exploration is completed on all areas related to the current task, starting to walk, traversing all areas related to the current task, and executing obstacle avoidance operation when the obstacle is detected again in the walking traversing process; or alternatively
If the area related to the task comprises 2 or more partitions, after exploration of each partition is completed, walking starts and the current partition is traversed; after the exploration and walking of the current partition are completed, the exploration and walking of the next partition are started until all the areas related to the task are traversed, and an obstacle avoidance operation is performed when an obstacle is detected again during the walking traversal.
6. The method as recited in claim 5, further comprising:
in the process of executing the task, marking the area where the self-walking robot has walked as a walked area;
The walking in the explored area specifically comprises the following steps: the walking action traverses at least the area except the walking area in the area to be explored.
7. The method according to claim 1, wherein selecting a location as a center point of the area to be explored, specifically comprises:
when a task starts to be executed, arbitrarily selecting a point as the center point of the area to be explored, and constructing a preset area around that point as the area to be explored.
8. The method of claim 1, wherein the region to be explored comprises any one of the following shapes: circular, rectangular or polygonal.
9. A method according to claim 2 or 3, further comprising:
After judging that the obstacle exists in the area to be explored through the image information, carrying out type identification on the obstacle;
And when walking in the explored area, identifying and confirming the type of the obstacle again within a certain distance range from the detected obstacle.
10. A self-walking robot control device, comprising:
A central point selection unit for selecting a position as a central point of the area to be explored; at least part of the areas to be explored are areas which are not explored yet when the self-walking robot executes the task;
An obstacle exploration recording unit for moving to the center point to explore the surrounding obstacle, thereby recording the position of the detected obstacle and marking the area to be explored as an explored area;
The repeated execution unit is used for repeatedly executing all the steps until the obstacle exploration is completed on all the areas related to the task;
The selecting a position as a center point of the area to be explored specifically comprises: selecting, on the boundary of the explored area, a preset area that overlaps the explored area with the smallest overlapping area as the area to be explored, and taking the geometric center of the area to be explored as the center point.
11. The apparatus of claim 10, wherein the moving to the center point for obstacle exploration around comprises: and moving to the central point and performing self-rotation to obtain image information in the region to be explored, and judging whether the region to be explored has an obstacle or not according to the image information.
12. The apparatus of claim 11, wherein the moving to the center point and the self-rotating to obtain image information within the region to be explored comprises:
moving to the central point and continuously performing self-rotation for at least one circle to acquire image information in the region to be explored; or alternatively
And moving to the central point and stopping once every preset angle of rotation of the central point to acquire the image information in the area to be explored.
13. The apparatus of claim 12, wherein the predetermined angle is equal to or less than a field angle of a device for acquiring the image information.
14. The apparatus of claim 10, wherein after moving to the center point to explore obstacles around, walking starts within the explored area and the explored area is traversed; or after the obstacle exploration is completed for all the areas related to the current task, walking starts and all the areas related to the current task are traversed; or if the area related to the task comprises 2 or more partitions, after exploration of each partition is completed, walking starts and the current partition is traversed; after the exploration and walking of the current partition are completed, the exploration and walking of the next partition are started until all the areas related to the current task are traversed;
And the apparatus further comprises: and the obstacle avoidance operation execution unit is used for executing obstacle avoidance operation when the obstacle is detected again in the walking traversal process.
15. The apparatus as recited in claim 14, further comprising:
The walking area marking unit is used for marking the area which the self-walking robot has walked in the process of executing the task as a walking area;
The walking in the explored area specifically comprises the following steps: the walking action traverses at least the area except the walking area in the area to be explored.
16. The apparatus of claim 10, wherein the selecting a location as a center point of the area to be explored specifically comprises:
selecting, on the boundary of the explored area, a preset area that overlaps the explored area with the smallest overlapping area as the area to be explored, and taking the geometric center of the area to be explored as the center point.
17. The apparatus of claim 10, wherein the region to be explored comprises any one of the following shapes: circular, rectangular or polygonal.
18. The apparatus according to claim 11 or 12, further comprising:
And the type recognition unit is used for recognizing the type of the obstacle after judging that the obstacle exists in the area to be explored through the image information, and recognizing and confirming the type of the obstacle again within a certain distance range from the detected obstacle when walking in the explored area.
19. A self-walking robot comprising a processor and a memory connected to the processor, the memory storing instructions loadable by the processor and performing the method of any of claims 1-9.
20. A computer readable storage medium having instructions stored thereon, wherein the instructions are loadable by a processor and perform the method of any of claims 1-9.
CN202010382048.9A 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium Active CN113625700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010382048.9A CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010382048.9A CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Publications (2)

Publication Number Publication Date
CN113625700A CN113625700A (en) 2021-11-09
CN113625700B true CN113625700B (en) 2024-07-02

Family

ID=78377163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010382048.9A Active CN113625700B (en) 2020-05-08 2020-05-08 Self-walking robot control method, device, self-walking robot and storage medium

Country Status (1)

Country Link
CN (1) CN113625700B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355934A (en) * 2021-12-31 2022-04-15 南京苏美达智能技术有限公司 Obstacle avoidance method and automatic walking equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855411A (en) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A kind of robot and its method that map is built with depth camera and obstacle avoidance system
WO2018214825A1 (en) * 2017-05-26 2018-11-29 杭州海康机器人技术有限公司 Method and device for assessing probability of presence of obstacle in unknown position

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104865965B (en) * 2015-05-20 2017-12-26 深圳市锐曼智能装备有限公司 The avoidance obstacle method and system that robot depth camera is combined with ultrasonic wave
CN105467992B (en) * 2015-11-20 2019-11-19 纳恩博(北京)科技有限公司 The determination method and apparatus in mobile electronic device path
CN110488809A (en) * 2019-07-19 2019-11-22 上海景吾智能科技有限公司 A kind of indoor mobile robot independently builds drawing method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106855411A (en) * 2017-01-10 2017-06-16 深圳市极思维智能科技有限公司 A kind of robot and its method that map is built with depth camera and obstacle avoidance system
WO2018214825A1 (en) * 2017-05-26 2018-11-29 杭州海康机器人技术有限公司 Method and device for assessing probability of presence of obstacle in unknown position

Also Published As

Publication number Publication date
CN113625700A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN114521836B (en) Automatic cleaning equipment
US20230225576A1 (en) Obstacle avoidance method and apparatus for self-walking robot, robot, and storage medium
CN110623606B (en) Cleaning robot and control method thereof
CN109947109B (en) Robot working area map construction method and device, robot and medium
EP4137905A1 (en) Robot obstacle avoidance method, device, and storage medium
CN114468898B (en) Robot voice control method, device, robot and medium
CN111990930B (en) Distance measuring method, distance measuring device, robot and storage medium
CN110136704B (en) Robot voice control method and device, robot and medium
CN109920424A (en) Robot voice control method and device, robot and medium
CN114504276A (en) Autonomous mobile robot and pile searching method and control device thereof
WO2023130704A1 (en) Robot mapping method and device, robot, and storage medium
CN114601399B (en) Control method and device of cleaning equipment, cleaning equipment and storage medium
CN114010102B (en) Cleaning robot
CN113625700B (en) Self-walking robot control method, device, self-walking robot and storage medium
KR20110085499A (en) Robot cleaner and controlling method thereof
CN112022026A (en) Self-propelled robot and obstacle detection method
CN217792839U (en) Automatic cleaning equipment
EP4332501A1 (en) Distance measurement method and apparatus, and robot and storage medium
CN114879691A (en) Control method for self-propelled robot, storage medium, and self-propelled robot
CN114601373B (en) Control method and device of cleaning robot, cleaning robot and storage medium
CN217982190U (en) Self-walking equipment
CN213216762U (en) Self-walking robot
CN116149307A (en) Self-walking equipment and obstacle avoidance method thereof
CN116942017A (en) Automatic cleaning device, control method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220418

Address after: 102200 No. 8008, floor 8, building 16, yard 37, Chaoqian Road, Changping Park, Zhongguancun Science and Technology Park, Changping District, Beijing

Applicant after: Beijing Stone Innovation Technology Co.,Ltd.

Address before: No. 6016, 6017 and 6018, Block C, No. 8 Heiquan Road, Haidian District, Beijing 100085

Applicant before: Beijing Roborock Technology Co.,Ltd.

GR01 Patent grant