CN111897336B - Robot edge behavior ending judging method, chip and robot - Google Patents

Robot edge behavior ending judging method, chip and robot

Info

Publication number
CN111897336B
Authority
CN
China
Prior art keywords
robot
edge
axis
preset
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010764289.XA
Other languages
Chinese (zh)
Other versions
CN111897336A (en
Inventor
赖峥嵘
唐伟华
Current Assignee
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202010764289.XA priority Critical patent/CN111897336B/en
Publication of CN111897336A publication Critical patent/CN111897336A/en
Application granted granted Critical
Publication of CN111897336B publication Critical patent/CN111897336B/en


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/0202Control of position or course in two dimensions specially adapted to aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a method for judging the end of a robot's edge-following behavior, together with a chip and a robot. The method controls the robot to walk along the working area from a preset edge starting point, and determines that the edge-following behavior is finished only when three conditions hold at once: the position of the robot relative to the preset edge starting point satisfies a position-posture coincidence condition; the length of the edgewise walking path satisfies a path repetition condition relative to the perimeter of an edge coverage rectangle framed in real time; and the area occupied by the edgewise walking path satisfies an area coverage condition relative to the area of that rectangle. Compared with the single judgment condition used in the prior art, this combination improves the accuracy of judging that wall-following has ended and helps avoid subsequent missed or repeated cleaning.

Description

Robot edge behavior ending judging method, chip and robot
Technical Field
The invention belongs to the technical field of complete edgewise (wall-following) walking by robots, and in particular relates to a method for judging the end of a robot's edge-following behavior, a chip, and a robot.
Background
A sweeping robot often cleans the corner areas of a room by traveling along its walls. Because wall layouts and furniture arrangements differ from home to home, the cleaning environment can be complex. At the same time, sensor error accumulates as the robot walks along the wall, so the position tracked in real time on the map drifts ever further from the robot's actual position. As a result, judging whether the robot has completed one full edge-following lap, or several full laps, becomes inaccurate.
Disclosure of Invention
To solve the problem of inaccurately judging one or more complete edge-following laps, the invention discloses the following technical scheme:
A method for judging the end of a robot's edge-following behavior includes: controlling the robot to walk along the working area from a preset edge starting point, and determining that the edge-following behavior is finished when, at the same time, the position of the robot relative to the preset edge starting point satisfies the position-posture coincidence condition, the length of the edgewise walking path satisfies the path repetition condition relative to the perimeter of the edge coverage rectangle framed in real time, and the area occupied by the edgewise walking path satisfies the area coverage condition relative to the area of that rectangle. Compared with the prior art, this scheme combines the robot's motion pose with its edge coverage to judge that wall-following has ended. It overcomes misjudgment of one or more complete laps around the global working area in scenes with complex obstacle and wall structures, improves the accuracy of the end-of-wall-following judgment relative to a single condition, improves user experience, and helps avoid subsequent missed or repeated cleaning.
Further, the position-posture coincidence condition is judged as follows: the difference between the rotation angle recorded at the robot's current position and the rotation angle recorded at the preset edge starting point must be greater than or equal to a preset positive integer multiple of 360 degrees, and the straight-line distance between the current position and the preset edge starting point must be less than or equal to a preset coincidence error distance. The positional relationship thus comprises both the angular relationship of the current position relative to the preset edge starting point and the straight-line distance between them; the preset positive integer is the number of laps the robot is to walk along the edge of the working area. This scheme uses the changes in the robot's edgewise position and angular posture as the judgment condition for finishing one or more laps, tolerates positioning errors caused by map drift in the direction of returning to the original starting point, and prevents the robot from walking along the edge endlessly.
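The position-posture coincidence condition described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the function name, the 0.2 m drift tolerance, and the coordinate representation are assumptions.

```python
import math

def pose_coincides(cur_angle_deg, start_angle_deg, cur_pos, start_pos,
                   laps=1, max_drift=0.2):
    """Position-posture coincidence check (sketch).

    cur_angle_deg / start_angle_deg: cumulative rotation recorded at the
    current position and at the preset edge starting point.
    max_drift: preset coincidence error distance (assumed value, metres).
    """
    # The cumulative rotation difference must reach laps * 360 degrees.
    turned_enough = (cur_angle_deg - start_angle_deg) >= laps * 360.0
    # The straight-line distance back to the preset edge starting point
    # must lie within the map-drift tolerance.
    dist = math.hypot(cur_pos[0] - start_pos[0], cur_pos[1] - start_pos[1])
    return turned_enough and dist <= max_drift
```

For example, a robot that has rotated 365 degrees and sits 0.14 m from the start satisfies the condition with the assumed 0.2 m tolerance, while one that has only rotated 300 degrees does not.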
Further, the judging method comprises the following specific steps:
Step 1: judge whether the difference between the rotation angle recorded at the robot's current position and the rotation angle recorded at the preset edge starting point is greater than or equal to the preset positive integer multiple of 360 degrees; if so, go to step 2, otherwise determine that the edge-following behavior is not finished.
Step 2: judge whether the straight-line distance between the current position and the preset edge starting point is less than or equal to the preset coincidence error distance; if so, go to step 3, otherwise determine that the edge-following behavior is not finished.
Step 3: judge whether the length of the edgewise walking path satisfies the path repetition condition relative to the perimeter of the real-time framed edge coverage rectangle; if so, go to step 4, otherwise determine that the edge-following behavior is not finished.
Step 4: judge whether the area enclosed by the edgewise walking path satisfies the area coverage condition relative to the area of the real-time framed edge coverage rectangle; if so, determine that the edge-following behavior is finished, otherwise determine that it is not.
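The four-step sequence can be sketched as a single early-exit function. This is an assumed formulation for illustration: the parameter names and the 0.2 m drift tolerance are not from the patent, which leaves the concrete thresholds to the implementation.

```python
def edge_behavior_finished(angle_delta_deg, dist_to_start, path_length,
                           path_area, rect_perimeter, rect_area,
                           laps=1, max_drift=0.2):
    """Return True only if all four sequential checks pass (sketch)."""
    # Step 1: cumulative rotation difference must reach laps * 360 degrees.
    if angle_delta_deg < laps * 360.0:
        return False
    # Step 2: straight-line distance back to the preset edge starting
    # point must be within the preset coincidence error distance.
    if dist_to_start > max_drift:
        return False
    # Step 3: path length must reach laps times the perimeter of the
    # real-time framed edge coverage rectangle.
    if path_length < laps * rect_perimeter:
        return False
    # Step 4: the area enclosed by the path (repeated cells counted once)
    # must not exceed the rectangle's area.
    if path_area > rect_area:
        return False
    return True
```

The early exits mirror the flowchart of fig. 1: failing any step routes to "not finished" without evaluating the later, more expensive checks.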
Compared with the prior art, this scheme first checks whether the robot's heading matches the original edge-following direction at the preset edge starting point, i.e. whether the robot shows a trend of returning to the starting point, and then checks whether the current position coincides with the starting point within the error allowed for map drift, thereby confirming the return to the preset edge starting point. It next compares the length of the edgewise walking path with the perimeter of the real-time framed edge coverage rectangle to judge whether the path has traversed the corners of the region's outline and to what degree it has been repeated; on that basis, it compares the area enclosed by the path with the area of the rectangle to judge the completeness of the region covered. In summary, the scheme combines the edge-following heading, position, walking distance, and coverage area in sequence to evaluate the completeness and coverage of the robot's edgewise walking from multiple angles, so that the end of the edge-following behavior is judged accurately and misjudgment of one or more complete laps caused by the indoor layout is overcome.
Further, the method for framing the edge coverage rectangle in real time is as follows: while the robot walks along the edge, record and update in real time the extreme coordinate points it has traversed in each global coordinate-axis direction of the grid map, where an extreme coordinate point is the grid coordinate traversed so far that is farthest from the preset edge starting point along the corresponding axis; then, through each recorded extreme point, draw a straight line perpendicular to the corresponding global coordinate axis, so that these lines intersect to frame the edge coverage rectangle enclosing the robot's current edgewise path. The robot builds the grid map in real time while walking along the edge of the working area. This ensures that the edge coverage rectangle is effectively framed before the path repetition condition is evaluated.
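The real-time framing amounts to maintaining an axis-aligned bounding box of the cells traversed. A minimal sketch follows; the class name and interface are assumptions, not the patent's.

```python
class EdgeCoverRect:
    """Axis-aligned rectangle framing the edgewise path in real time (sketch)."""

    def __init__(self, start):
        # Initially the rectangle degenerates to the preset edge starting point.
        self.min_x = self.max_x = start[0]
        self.min_y = self.max_y = start[1]

    def update(self, cell):
        """Record the farthest grid coordinates traversed on each axis."""
        x, y = cell
        self.min_x = min(self.min_x, x)
        self.max_x = max(self.max_x, x)
        self.min_y = min(self.min_y, y)
        self.max_y = max(self.max_y, y)

    @property
    def length(self):
        # Extent along the X axis: max abscissa minus min abscissa.
        return self.max_x - self.min_x

    @property
    def width(self):
        # Extent along the Y axis: max ordinate minus min ordinate.
        return self.max_y - self.min_y

    @property
    def perimeter(self):
        return 2 * (self.length + self.width)

    @property
    def area(self):
        return self.length * self.width
```

Calling `update` on every traversed cell keeps the rectangle current, so its perimeter and area are always available for the later conditions.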
Further, the extreme coordinate points comprise the grid point with the maximum ordinate traversed in the positive Y direction, the grid point with the minimum ordinate traversed in the negative Y direction, the grid point with the maximum abscissa traversed in the positive X direction, and the grid point with the minimum abscissa traversed in the negative X direction. A first boundary line perpendicular to the Y axis is drawn through the maximum-ordinate grid point, a second boundary line perpendicular to the Y axis through the minimum-ordinate grid point, a third boundary line perpendicular to the X axis through the maximum-abscissa grid point, and a fourth boundary line perpendicular to the X axis through the minimum-abscissa grid point; the four boundary lines intersect to define the edge coverage rectangle enclosing the robot's current edgewise path. The difference between the maximum and minimum ordinates is the width of the rectangle, and the difference between the maximum and minimum abscissas is its length.
The edge coverage rectangle framed in this way fully reflects how the robot's edgewise walking covers the working area, and using the extreme coordinate points on each global axis also simplifies the computation of its perimeter and area.
Further, the path repetition condition is judged as follows: judge whether the length of the edgewise walking path is greater than or equal to the preset positive integer multiple of the perimeter of the real-time framed edge coverage rectangle; if so, the path repetition condition is satisfied, otherwise the edge-following behavior is determined not to be finished. The length of the edgewise walking path is the distance the robot has walked along the edge from the preset edge starting point. By comparing the path length with the rectangle's perimeter, this scheme judges whether the path has traversed the corners of the region's outline and how much of it has been repeated, and fully accounts for how the perimeter of the indoor environment (particularly walls protruding toward the middle of a room and isolated obstacles) affects the judgment of a complete edge-following lap. The judging method therefore suits typical household layouts and effectively avoids missed areas during edgewise cleaning.
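The path repetition condition can be sketched from recorded waypoints; a real robot would use odometer mileage instead. The function names and the waypoint representation are assumptions for illustration.

```python
import math

def edgewise_path_length(waypoints):
    """Accumulated length of the edgewise path from the preset start (sketch)."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

def path_repetition_met(waypoints, rect_perimeter, laps=1):
    # A full lap's path is always at least as long as the framed
    # rectangle's perimeter, since detours around obstacles only
    # lengthen the path, never shorten it.
    return edgewise_path_length(waypoints) >= laps * rect_perimeter
```

For instance, a closed 3-by-4 rectangular lap has path length 14, exactly the perimeter of the rectangle it frames, so one lap satisfies the condition while a partial lap does not.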
Further, the area coverage condition is judged as follows: judge whether the area occupied by the edgewise walking path is less than or equal to the area of the real-time framed edge coverage rectangle; if so, the area coverage condition is satisfied, otherwise the edge-following behavior is determined not to be finished. A region traversed repeatedly is counted only once toward the area. From the perspective of coverage, this scheme fully accounts for how working areas of various shapes and scales affect the judgment of one or more complete laps and of the end of the edge-following behavior, and effectively avoids repeated cleaning during edgewise cleaning.
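One way to honor the "repeats counted once" rule is to store covered grid cells in a set, whose size is then the de-duplicated area. This representation is an assumption; the patent only specifies that repeatedly traversed regions are not counted again.

```python
def area_coverage_met(covered_cells, rect_area):
    """Area coverage condition (sketch).

    covered_cells: set of grid cells occupied by the edgewise path.
    Using a set means a cell traversed repeatedly is counted only once,
    as the judging method requires.
    """
    return len(covered_cells) <= rect_area
```

The condition fails exactly when the de-duplicated covered area overflows the framed rectangle, the symptom of a robot that has wandered beyond a single complete lap.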
Further, the area occupied by the edgewise walking path is expressed as the total number of grid cells the path occupies, counted along the Y axis or the X axis of the grid map. Computing the enclosed map area column by column (or row by row) in this way fully reflects how the robot's edgewise walking covers the working area.
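The grid-count formulation can be sketched as a per-column scan over the boundary cells: each X column contributes the span between the lowest and highest boundary cells it contains. The function name and this particular scan are illustrative assumptions.

```python
def enclosed_area_cells(boundary_cells):
    """Count grid cells enclosed by the edgewise path, column by column (sketch).

    boundary_cells: iterable of (x, y) grid cells on the path. For each
    X column, the span from the lowest to the highest boundary cell is
    counted, approximating the patent's sum of grids along one axis.
    """
    cols = {}
    for x, y in boundary_cells:
        lo, hi = cols.get(x, (y, y))
        cols[x] = (min(lo, y), max(hi, y))
    # Each column contributes hi - lo + 1 cells (inclusive span).
    return sum(hi - lo + 1 for lo, hi in cols.values())
```

For a closed lap around a 3-by-3 block, every column spans three cells, giving an enclosed area of nine cells.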
A chip with built-in judgment and recognition firmware is used to control a robot to execute the foregoing judging method. It improves the accuracy of the robot's end-of-wall-following judgment (relative to judging by a single condition) while improving user experience and cleaning efficiency.
A robot is equipped with a wheel encoder (code wheel) for measuring the length of its edgewise walking path and a gyroscope for measuring the rotation accumulated while walking along the edge, with the foregoing chip built in. This prevents the robot from missing areas because it failed to judge a complete wall-following lap, and thereby improves its cleaning efficiency.
Drawings
Fig. 1 is a flowchart of the method for judging the end of the robot's edge-following behavior according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a working scenario in which the robot walks a complete lap along the edge of the working area and returns to the preset edge starting point M, according to an embodiment of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention in detail with reference to the accompanying drawings.
In an embodiment of the invention, the robot searches for a wall obstacle from an arbitrary position in the room. When a wall is detected, the robot's heading is adjusted to be parallel to it; the position where the wall was detected is recorded as the preset edge starting point, and the robot's coordinates and heading angle at that moment (the angle of its heading relative to the X-axis or Y-axis direction) are recorded in the map being built in real time. The robot then walks along the edge from the preset edge starting point while keeping its heading parallel to the direction in which the wall extends.
To solve the problem of inaccurately judging one or more complete edge-following laps, the embodiment of the invention discloses a method for judging the end of the robot's edge-following behavior, whose basic reasoning is: control the robot to walk along the working area from a preset edge starting point; when the position of the robot relative to the preset edge starting point satisfies the position-posture coincidence condition, the length of the edgewise walking path satisfies the path repetition condition relative to the perimeter of the edge coverage rectangle framed in real time, and the area occupied by the edgewise walking path satisfies the area coverage condition relative to the area of that rectangle, determine that the robot's edge-following behavior is finished. The real-time framed edge coverage rectangle is the rectangle the robot frames and updates in real time while walking along the wall. In this embodiment, only when the position-posture coincidence condition, the path repetition condition, and the area coverage condition are all satisfied simultaneously is the edge-following behavior judged to have ended, meaning the robot has completely walked one or more laps along the wall of the current working area.
Compared with the prior art, the embodiment judges that the robot has finished wall-following by combining its motion pose with its edge coverage. Specifically, it computes the distance from the robot's current position to the recorded starting position and combines it with the gyroscope angle (together forming the position-posture coincidence condition) and with the area actually swept along the edge (the area coverage condition) to judge whether the robot has completely circled the wall. This overcomes misjudgment of one or more complete laps around the global working area in scenes with complex obstacle and wall structures, improves the accuracy of the end-of-wall-following judgment relative to a single condition, improves user experience, avoids the missed cleaning caused by an incomplete lap, and improves cleaning efficiency.
Preferably, the position-posture coincidence condition is judged as follows: the difference between the rotation angle recorded at the robot's current position and the rotation angle recorded at the preset edge starting point must be greater than or equal to the preset positive integer multiple of 360 degrees, and the straight-line distance between the current position and the preset edge starting point must be less than or equal to the preset coincidence error distance. The positional relationship comprises the angular rotation relationship of the current position relative to the preset edge starting point (i.e. the rotation the robot has accumulated since the preset edge starting point) and the straight-line distance between them. In this embodiment, the changes in the robot's edgewise position and angular posture serve as the judgment condition for finishing one or more laps; positioning errors caused by map drift are tolerated in the direction of returning to the original starting point, preventing the robot from walking along the edge endlessly.
In embodiments of the invention, the error of the visual sensor used for positioning accumulates, so a map drift error may appear in the global map coordinate system shown in fig. 2 and the robot's current position may drift on the map. As long as the length of the line segment connecting the preset edge starting point M and the robot's current position is within a certain map drift error range, the two positions can be considered coincident. Meanwhile, because the robot's gyroscope also exhibits some zero drift, when the difference between the rotation angle recorded at the current position and the rotation angle recorded at the preset edge starting point reaches the preset positive integer multiple of 360 degrees within a certain error band, the robot's current heading is confirmed to match the edge-following direction at the starting point; generally the conclusion can be drawn when the recorded difference is just equal to the preset positive integer multiple of 360 degrees. The preset positive integer is the number of laps the robot is required to walk along the edge, preferably 1 or 2.
As an embodiment, the specific steps of the method for judging the end of the robot's edge-following behavior, as shown in fig. 1, are as follows:
Step S101: control the robot to walk along the edge of the working area from the preset edge starting point, then go to step S102. As shown in fig. 2, the robot walks along the wall from the preset edge starting point M in the direction indicated by arrow P, recording its rotation angle and travelled mileage in real time.
Step S102: judge whether the difference between the rotation angle recorded at the robot's current position and the rotation angle recorded at the preset edge starting point is greater than or equal to the preset positive integer multiple of 360 degrees; if so, go to step S103, otherwise go to step S107 and determine that the edge-following behavior is not finished. The preset positive integer is the number of laps the robot walks along the edge of the working area, preferably 1. When the angle difference is an integer multiple of 360 degrees, the robot's heading can be shown to match the edge-following direction at the preset edge starting point, but judging a complete lap around the outline of the working area from the motion angle alone invites misjudgment. For example, before returning to the preset edge starting point, the robot may turn in place or become trapped in a local area, such as under a dining table (where dirt accumulates), performing repeated edgewise cleaning there; the angle it rotates can then reach or even exceed 360 degrees, so it is necessary to go to step S103 and continue the judgment.
Step S103: judge whether the straight-line distance between the robot's current position and the preset edge starting point is less than or equal to the preset coincidence error distance; if so, go to step S104, otherwise go to step S107 and determine that the edge-following behavior is not finished. Although this step accounts for map position drift, misjudgment is still possible. When the robot has walked almost a full lap along the wall and detects an isolated obstacle in the middle of the room, such as one located at position D in fig. 2, it walks along segments DE, EF and FG to bypass it. During this detour, the difference between the rotation angle recorded at position F and the one recorded at the preset edge starting point M may reach 360 degrees, and the straight-line distance between position F and M may fall within the preset coincidence error distance. Within the map drift error range it is then easy to misjudge that the robot has returned to M, even though at position F it has not yet walked a complete lap along the wall; hence it is necessary to go to step S104 and continue the judgment.
Step S104: judge whether the length of the edgewise walking path satisfies the path repetition condition relative to the perimeter of the real-time framed edge coverage rectangle; if so, go to step S105, otherwise go to step S107 and determine that the edge-following behavior is not finished. Specifically, the path repetition condition is judged by checking whether the path length is greater than or equal to the preset positive integer multiple of the rectangle's perimeter; if so, the condition is satisfied and the method proceeds to step S105, otherwise the edge-following behavior is determined not to be finished. The length of the edgewise walking path is the distance the robot has walked along the edge from the preset edge starting point M. In this embodiment the preset positive integer multiple is set to 2, i.e. the robot is configured to walk 2 laps along the edge of the working area.
When the preset positive integer is 1, the condition for one lap along the edge of the working area is judged, i.e. whether the path length is greater than or equal to the perimeter of the real-time framed edge coverage rectangle. The robot must bypass detected obstacles while walking along the edge: for example, after finishing walls AH, HC and CD, it walks along the obstacle edges DE, EF and FG of fig. 2 in sequence and then follows wall GB. When it completes a lap of the working area and returns to the preset edge starting point M, the path length exceeds the actual wall-contour perimeter of the working area by the combined length of the obstacle edge segments DE, EF and FG, and is therefore longer than the perimeter of the edge coverage rectangle O1O2O3B framed around the actual wall contour. When the robot walks two laps, the scenario repeats, and the path length becomes twice the actual wall-contour perimeter plus twice the combined length of segments DE, EF and FG, thus exceeding twice the perimeter of the rectangle O1O2O3B.
However, if the robot is blocked by an obstacle and turns in place or slips so that it cannot continue along the wall, it may satisfy the conditions of steps S102 to S104 without actually having walked one or two laps along the wall back to the preset edge starting point M, and the area occupied by its edgewise path may exceed the area of the actual working area. It is therefore necessary to go to step S105 and continue to judge whether the edge-following behavior is finished. By comparing the edgewise path length with the perimeter of the real-time framed edge coverage rectangle, the method judges whether the path traverses the corners of the region's outline and how much of it is repeated, fully accounting for the influence of walls protruding toward the middle of the room and of isolated obstacles on the judgment of one or two complete laps. The judging method therefore suits typical household layouts and effectively avoids missed areas during edgewise cleaning.
As one embodiment, the method for framing the edge coverage rectangle in real time includes: while the robot walks along the edge, record and update in real time the extreme coordinate points traversed in each global coordinate-axis direction of the grid map, each being the grid coordinate traversed so far that is farthest from the preset edge starting point along the corresponding axis; then draw, through each recorded extreme point, a straight line perpendicular to the corresponding global coordinate axis, so that these lines intersect to frame the edge coverage rectangle. The robot builds the grid map in real time while walking along the edge of the working area. This ensures that the rectangle is effectively framed before the path repetition condition is evaluated.
Specifically, the maximum coordinate points include the grid point of the maximum ordinate traversed in the positive Y-axis direction, the grid point of the minimum ordinate traversed in the negative Y-axis direction, the grid point of the maximum abscissa traversed in the positive X-axis direction, and the grid point of the minimum abscissa traversed in the negative X-axis direction; the global coordinate axes of the grid map are the Y-axis and the X-axis. A first boundary line perpendicular to the positive Y-axis direction is drawn through the grid point of the maximum ordinate; a second boundary line perpendicular to the negative Y-axis direction is drawn through the grid point of the minimum ordinate; a third boundary line perpendicular to the positive X-axis direction is drawn through the grid point of the maximum abscissa; and a fourth boundary line perpendicular to the negative X-axis direction is drawn through the grid point of the minimum abscissa, so that the first, second, third and fourth boundary lines intersect to frame one edge coverage rectangle covering the robot's current edgewise path. The difference between the maximum ordinate in the positive Y-axis direction and the minimum ordinate in the negative Y-axis direction is the width of the edge coverage rectangle, and the difference between the maximum abscissa in the positive X-axis direction and the minimum abscissa in the negative X-axis direction is its length. The edge coverage rectangle framed in this way fully reflects how the robot's edgewise walking covers the working area, and using the extreme coordinate points on each global coordinate axis also simplifies the calculation of the rectangle's perimeter and area.
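The real-time framing described above can be sketched as follows. This is an illustrative reconstruction, not the patent's actual firmware; the class and member names (`EdgeCoverageRect`, `update`) are invented for illustration.

```python
class EdgeCoverageRect:
    """Axis-aligned rectangle framing the robot's edgewise path so far.

    Tracks the extreme grid coordinates traversed in each global axis
    direction, as described in the embodiment above.
    """

    def __init__(self, start):
        # Initialise all four extremes at the preset edgewise starting point.
        x, y = start
        self.min_x = self.max_x = x
        self.min_y = self.max_y = y

    def update(self, point):
        """Record a newly traversed grid point, widening the frame if needed."""
        x, y = point
        self.max_y = max(self.max_y, y)  # grid point farthest in +Y
        self.min_y = min(self.min_y, y)  # grid point farthest in -Y
        self.max_x = max(self.max_x, x)  # grid point farthest in +X
        self.min_x = min(self.min_x, x)  # grid point farthest in -X

    @property
    def length(self):
        # Difference between the maximum and minimum abscissa (X extent).
        return self.max_x - self.min_x

    @property
    def width(self):
        # Difference between the maximum and minimum ordinate (Y extent).
        return self.max_y - self.min_y

    def perimeter(self):
        return 2 * (self.length + self.width)

    def area(self):
        return self.length * self.width
```

Because only four extreme values are stored, each update is O(1), which matches the patent's remark that the extreme coordinate points simplify the perimeter and area calculations.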
As shown in fig. 2, the robot starts from the preset edgewise starting point M and walks along the wall in the arrow direction P. On reaching position D it detects an obstacle and begins edgewise behavior along edge DE of the obstacle; by the time it has walked around the obstacle to position B, the maximum coordinate points it has recorded are A, H, B and C. Here A and H are both grid points of the maximum ordinate traversed in the positive Y-axis direction, i.e. the grid coordinate points farthest from M in the positive Y-axis direction; B is both the grid point of the minimum ordinate traversed in the negative Y-axis direction and the grid point of the maximum abscissa traversed in the positive X-axis direction, i.e. the grid coordinate point farthest from M in both the negative Y-axis and positive X-axis directions; and C is the grid point of the minimum abscissa traversed in the negative X-axis direction, i.e. the grid coordinate point farthest from M in the negative X-axis direction.
Then a first boundary line AH perpendicular to the positive Y-axis direction is drawn through the grid point A (or H) of the maximum ordinate; a second boundary line BO3 perpendicular to the negative Y-axis direction is drawn through the grid point B of the minimum ordinate; a third boundary line BO1 perpendicular to the positive X-axis direction is drawn, also through B, as the grid point of the maximum abscissa; and a fourth boundary line O2O3 perpendicular to the negative X-axis direction is drawn through the grid point of the minimum abscissa. These four boundary lines, each perpendicular to a global coordinate axis, intersect to frame one edge coverage rectangle BO1O2O3 covering the robot's current edgewise path, which comprises the segments MA, AH, HC, CD, DE, EF, FG and GB. The difference between the ordinate of A and the ordinate of B is the width O1B of the edge coverage rectangle, and the difference between the abscissa of B and the abscissa of O3 is its length BO3.
In this embodiment, suppose the robot slips, or is trapped by isolated obstacles and left stranded and rotating, inside the quadrilateral region HABC. When it then walks edgewise to position F, the difference between the rotation angle recorded at F and that recorded at the preset edgewise starting point M may already reach 360 degrees, and the straight-line distance between F and M may already be smaller than or equal to the preset coincidence error distance. Because of the slipping or stranded rotation, the length of the robot's edgewise walking path can exceed 1 or 2 times the perimeter of the real-time framed edge coverage rectangle even though the robot has not completed a full circuit along the wall; meanwhile the area occupied by the edgewise walking path keeps growing, so the area of the edge coverage rectangle framed by the first, second, third and fourth boundary lines also grows. In particular, even when the robot has walked edgewise to position B but still has wall left to follow in the working area, the path length may already exceed 1 or 2 times the perimeter of the rectangle BO1O2O3. The conditions of steps S102 to S104 alone are therefore not sufficient, and it is necessary to proceed to step S105.
Step S105: judge whether the area occupied by the robot's edgewise walking path satisfies the area coverage condition relative to the area of the real-time framed edge coverage rectangle. If so, proceed to step S106 and determine that the robot's edgewise behavior is finished: the robot has walked completely along the wall of the working area for one or two circuits. Otherwise, proceed to step S107 and determine that the edgewise behavior is not finished and the wall-following is incomplete. Specifically, the method for judging that the area coverage condition is satisfied is: judge whether the area occupied by the robot's edgewise walking path is smaller than or equal to the area of the real-time framed edge coverage rectangle; if so, determine that the area coverage condition is satisfied, otherwise determine that the robot's edgewise behavior is not finished. Thus, on the basis that the judgment conditions of steps S102 to S104 are met, if, when the robot walks edgewise back to (or close to) the starting point M, the area occupied by its current walking path is smaller than or equal to the area of the real-time framed edge coverage rectangle, the edgewise behavior is determined to be ended and the robot has walked completely along the wall for one or two circuits.
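Putting the three checks together, the decision chain of steps S102 through S105 might look like the following sketch. The function signature, parameter names, and default thresholds are assumptions made for illustration, not values taken from the patent.

```python
def edgewise_finished(rotation_deg, distance_to_start, path_length,
                      path_area, rect_perimeter, rect_area,
                      laps=1, coincide_dist=0.2):
    """Return True when the robot's edgewise behavior is judged finished.

    rotation_deg      -- accumulated rotation since the edgewise start (degrees)
    distance_to_start -- straight-line distance to the preset starting point
    path_length       -- odometer length of the edgewise walking path
    path_area         -- grid area occupied by the edgewise walking path
    rect_perimeter    -- perimeter of the real-time framed coverage rectangle
    rect_area         -- area of that rectangle
    laps              -- preset positive integer: intended circuits (1 or 2)
    coincide_dist     -- preset coincidence error distance (map-drift allowance)
    """
    # Position-posture coincidence condition (steps S102/S103):
    # enough accumulated rotation, and back near the starting point.
    if rotation_deg < 360 * laps or distance_to_start > coincide_dist:
        return False
    # Path repetition condition (step S104):
    # the path must be at least `laps` times the rectangle's perimeter.
    if path_length < laps * rect_perimeter:
        return False
    # Area coverage condition (step S105):
    # the path must not occupy more area than the framed rectangle.
    return path_area <= rect_area
```

All three checks must pass in sequence, mirroring the patent's point that the angle, position, and path-length conditions alone can be spuriously satisfied by slipping, and only the final area check rules that case out.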
It should be noted that the robot does not add the area of repeatedly traversed regions to the currently calculated area of its edgewise walking path while walking edgewise, detouring around obstacles, or slipping within the quadrilateral region HABC. Therefore, in the scenario where the robot slips inside HABC, as long as its slipping distance is smaller than the length of the edge coverage rectangle BO1O2O3, the area occupied by the edgewise walking path remains smaller than or equal to the area of the real-time framed rectangle BO1O2O3, regardless of whether the robot has followed the wall of the working area for one or several circuits. From the perspective of the area covered by the robot's walking, this embodiment fully considers how working areas of various scales and shapes affect the judgment that the robot has completed a full circuit and ended its edgewise behavior, effectively avoiding repeated cleaning during edgewise sweeping.
The area occupied by the robot's edgewise walking path is expressed as the sum of the number of grids that the path occupies, counted along the Y-axis or X-axis direction of the grid map. This technical scheme calculates the map area enclosed by the edgewise walking path from the vertical or horizontal coordinate axis direction, and fully reflects how the robot's edgewise walking covers the working area.
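One reading of this grid-counting scheme is a column scan: for every X column the path touches, count the grids spanned between the lowest and highest Y the path reaches in that column, then sum the columns. The sketch below follows that interpretation; the function name and data layout are invented for illustration.

```python
from collections import defaultdict

def enclosed_area(path_cells):
    """Approximate the grid area enclosed by an edgewise path.

    For each X column the path visits, the grids between the minimum
    and maximum Y reached in that column are counted (a scan along the
    Y axis per column), and the per-column counts are summed.
    """
    # Track the (min_y, max_y) span of the path in each X column.
    spans = defaultdict(lambda: (float("inf"), float("-inf")))
    for x, y in path_cells:
        lo, hi = spans[x]
        spans[x] = (min(lo, y), max(hi, y))
    # Each column contributes hi - lo + 1 grids.
    return sum(hi - lo + 1 for lo, hi in spans.values())
```

Note that each column's span is counted once, however many times the path crosses it, which matches the statement above that repeatedly traversed regions are not added to the calculated area again.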
Compared with the prior art, the present application first considers whether the robot's current edgewise direction is the same as its initial edgewise direction at the preset edgewise starting point, i.e. whether the robot is tending to return to that starting point, and whether the robot's current position coincides with the preset edgewise starting point within the error range allowed by map drift, thereby judging that the robot has returned to the starting point. It then judges, from the difference between the length of the edgewise walking path and the perimeter of the real-time framed edge coverage rectangle, whether the edgewise path has traversed the corners of the area outline and to what degree it has been traversed repeatedly; on that basis, it judges the completeness of the area covered by the edgewise walking from the difference between the area enclosed by the edgewise path and the area of the real-time framed edge coverage rectangle. In summary, the technical scheme combines, in sequence, the robot's edgewise direction angle, edgewise position, edgewise walking distance and edgewise coverage area to evaluate the completeness and coverage of the edgewise walking from multiple angles, so that the end of the edgewise behavior is judged accurately, and the misjudgment that the indoor layout may cause regarding one or more complete circuits along the working area is avoided.
In the foregoing embodiments the robot performs edgewise cleaning along the walls of a room; for convenience of description and understanding, the actual wall surface or wall-leaning obstacle is treated in fig. 2 as the top-down projection line of the wall and the wall-leaning obstacle onto the room floor. When an obstacle is in close contact with the wall, the cleaning robot treats the surfaces of that obstacle not in contact with the wall as the wall itself while travelling. The judging method disclosed in this embodiment can also be applied to a multi-room situation: because the walls of multiple rooms are continuous when the doors are open, the robot travels along the walls of the rooms while the framed edge coverage rectangle keeps growing, reciprocating periodically until it eventually traverses all the rooms.
A chip is internally provided with judging and identifying program firmware for controlling a robot to execute the above judging method. This improves the accuracy with which the robot judges that wall-following has ended (relative to judging by a single condition), and improves user experience and cleaning efficiency.
A robot is equipped with a code wheel for collecting the length of the robot's edgewise walking path and with a gyroscope for collecting the rotation angle generated during edgewise walking, and the above chip is built into the robot. This prevents the robot from missing areas because it fails to judge that it has followed the wall for a complete circuit, thereby improving the robot's cleaning efficiency.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will appreciate that modifications may be made to the specific embodiments, or equivalents substituted for some of the technical features, without departing from the spirit of the invention; all such changes are intended to fall within the scope of the invention as claimed.

Claims (7)

1. A method for judging the end of a robot's edgewise behavior, characterized by comprising the following steps:
Controlling the robot to walk along the working area from a preset edge starting point, when the position relation of the current position of the robot relative to the preset edge starting point meets the position posture superposition condition, if the length of the robot edge walking path meets the path repetition condition relative to the perimeter of the edge coverage rectangle framed in real time, and when the area occupied by the robot edge walking path meets the area coverage condition relative to the area of the edge coverage rectangle framed in real time, determining that the robot edge action is finished;
The method for framing the edge coverage rectangle in real time comprises the following steps:
In the process of robot edgewise walking, recording and updating the maximum coordinate points traversed by the robot edgewise walking in each global coordinate axis direction of the grid map in real time, wherein the maximum coordinate points are the grid coordinate points traversed by the robot edgewise walking in real time furthest from the preset edgewise starting point in the corresponding global coordinate axis direction;
Then, respectively making a straight line perpendicular to the corresponding global coordinate axis through the recorded maximum coordinate points, so that the straight lines intersect to frame the edge coverage rectangle for covering the current edge path of the robot;
wherein, the robot builds a grid map in real time in the process of walking along the edge in the working area;
the maximum coordinate point comprises a grid point of a maximum ordinate traversed in the positive direction of the Y axis, a grid point of a minimum ordinate traversed in the negative direction of the Y axis, a grid point of a maximum abscissa traversed in the positive direction of the X axis and a grid point of a minimum abscissa traversed in the negative direction of the X axis;
Making a first boundary line perpendicular to the positive direction of the Y axis through the grid point of the maximum ordinate traversed in the positive direction of the Y axis, making a second boundary line perpendicular to the negative direction of the Y axis through the grid point of the minimum ordinate traversed in the negative direction of the Y axis, making a third boundary line perpendicular to the positive direction of the X axis through the grid point of the maximum abscissa traversed in the positive direction of the X axis, and making a fourth boundary line perpendicular to the negative direction of the X axis through the grid point of the minimum abscissa traversed in the negative direction of the X axis, so that the first boundary line, the second boundary line, the third boundary line and the fourth boundary line intersect to define one edge coverage rectangle for covering the current edge path of the robot;
the difference between the maximum ordinate in the positive direction of the Y axis and the minimum ordinate in the negative direction of the Y axis is the width of the edge coverage rectangle, and the difference between the maximum abscissa in the positive direction of the X axis and the minimum abscissa in the negative direction of the X axis is the length of the edge coverage rectangle.
2. The judging method according to claim 1, wherein the specific judging method for satisfying the position-posture coincidence condition comprises:
judging that the difference between the rotation angle recorded at the robot's current position and the rotation angle recorded at the preset edgewise starting point is larger than or equal to a preset positive integer multiple of 360 degrees, and judging that the straight-line distance between the robot's current position and the preset edgewise starting point is smaller than or equal to a preset coincidence error distance;
The position relation comprises an angle relation of the current position of the robot relative to the preset edge starting point and a linear distance between the current position of the robot relative to the preset edge starting point;
The preset positive integer is the number of turns of the robot walking along the edge in the working area.
3. The method according to claim 1, wherein the method for determining that the path repetition condition is satisfied specifically includes:
Judging whether the length of the robot edgewise walking path is larger than or equal to the preset positive integer times of the perimeter of the real-time framed edgewise covering rectangle, if so, determining that the length of the robot edgewise walking path meets the path repetition condition, otherwise, determining that the robot edgewise behavior is not finished;
the length of the robot edgewise walking path is the distance the robot walks along from the preset edgewise starting point.
4. The method according to claim 3, wherein the method for determining that the area coverage condition is satisfied specifically comprises:
judging whether the area occupied by the robot's edgewise walking path is smaller than or equal to the area of the real-time framed edge coverage rectangle; if so, determining that the area occupied by the robot's edgewise walking path satisfies the area coverage condition, otherwise determining that the robot's edgewise behavior is not finished; wherein the robot does not count the area of repeatedly traversed regions.
5. The judging method according to claim 4, wherein the area occupied by the robot's edgewise walking path is expressed as the sum of the number of grids occupied by the robot's edgewise walking, counted along the Y-axis or X-axis direction of the grid map.
6. A chip with built-in judgment and identification program firmware, wherein the judgment and identification program firmware is used for controlling a robot to execute the judgment method of any one of claims 1 to 5.
7. A robot equipped with a code wheel for collecting the length of the robot's edgewise walking path and with a gyroscope for collecting the rotation angle generated during edgewise walking, wherein the chip of claim 6 is arranged in the robot.
CN202010764289.XA 2020-08-02 2020-08-02 Robot edge behavior ending judging method, chip and robot Active CN111897336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010764289.XA CN111897336B (en) 2020-08-02 2020-08-02 Robot edge behavior ending judging method, chip and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010764289.XA CN111897336B (en) 2020-08-02 2020-08-02 Robot edge behavior ending judging method, chip and robot

Publications (2)

Publication Number Publication Date
CN111897336A CN111897336A (en) 2020-11-06
CN111897336B true CN111897336B (en) 2024-06-18

Family

ID=73183893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010764289.XA Active CN111897336B (en) 2020-08-02 2020-08-02 Robot edge behavior ending judging method, chip and robot

Country Status (1)

Country Link
CN (1) CN111897336B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114557641B (en) * 2022-02-25 2023-07-25 杭州萤石软件有限公司 Robot cleaning control method, robot, and machine-readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106142104A (en) * 2015-04-10 2016-11-23 科沃斯机器人股份有限公司 Self-movement robot and control method thereof
CN107368079A (en) * 2017-08-31 2017-11-21 珠海市微半导体有限公司 Robot cleans the planing method and chip in path
CN108508891A (en) * 2018-03-19 2018-09-07 珠海市微半导体有限公司 A kind of method of robot reorientation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008003979A (en) * 2006-06-26 2008-01-10 Matsushita Electric Ind Co Ltd Self-propelled equipment and program therefor
CN107703930B (en) * 2017-10-11 2019-11-05 珠海市一微半导体有限公司 The continuous of robot sweeps control method
CN108919814A (en) * 2018-08-15 2018-11-30 杭州慧慧科技有限公司 Grass trimmer working region generation method, apparatus and system
CN108196555B (en) * 2018-03-09 2019-11-05 珠海市一微半导体有限公司 The control method that autonomous mobile robot is walked along side
CN109029243B (en) * 2018-07-04 2021-02-26 南京理工大学 Improved agricultural machinery working area measuring terminal and method
CN109528090A (en) * 2018-11-24 2019-03-29 珠海市微半导体有限公司 The area coverage method and chip and clean robot of a kind of robot
CN109828575A (en) * 2019-02-22 2019-05-31 山东省计算中心(国家超级计算济南中心) A kind of paths planning method effectively improving agricultural machinery working efficiency
CN110543174A (en) * 2019-09-10 2019-12-06 速感科技(北京)有限公司 Method for establishing passable area graph, method for processing passable area graph, device and movable equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106142104A (en) * 2015-04-10 2016-11-23 科沃斯机器人股份有限公司 Self-movement robot and control method thereof
CN107368079A (en) * 2017-08-31 2017-11-21 珠海市微半导体有限公司 Robot cleans the planing method and chip in path
CN108508891A (en) * 2018-03-19 2018-09-07 珠海市微半导体有限公司 A kind of method of robot reorientation

Also Published As

Publication number Publication date
CN111897336A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
EP3764186B1 (en) Method for controlling autonomous mobile robot to travel along edge
CN108507578B (en) Navigation method of robot
CN112137529B (en) Cleaning control method based on dense obstacles
WO2021248846A1 (en) Robot edge treading areal sweep planning method, chip, and robot
US11175670B2 (en) Robot-assisted processing of a surface using a robot
WO2021135645A1 (en) Map updating method and device
CN111596662B (en) Method for judging one circle along global working area, chip and visual robot
CN109240312B (en) Cleaning control method and chip of robot and cleaning robot
CN111857127B (en) Clean partition planning method for robot walking along edge, chip and robot
Rekleitis et al. Multi-robot collaboration for robust exploration
CN107368079A (en) Robot cleans the planing method and chip in path
CN109997089A (en) Floor treatment machine and floor treatment method
CN111580525A (en) Judgment method for returning to starting point in edgewise walking, chip and visual robot
WO2014135008A1 (en) Edge-to-center cleaning method used by robot cleaner
CN112799398A (en) Cleaning path planning method based on path finding cost, chip and cleaning robot
CN113110497B (en) Edge obstacle detouring path selection method based on navigation path, chip and robot
CN113190010B (en) Edge obstacle detouring path planning method, chip and robot
CN107678429B (en) Robot control method and chip
CN109683619A (en) A kind of robot path planning method and system based on graph pararneterization
CN113475977B (en) Robot path planning method and device and robot
CN112764418B (en) Cleaning entrance position determining method based on path searching cost, chip and robot
CN111552290B (en) Method for robot to find straight line along wall and cleaning method
CN111897336B (en) Robot edge behavior ending judging method, chip and robot
CN112180924A (en) Movement control method for navigating to dense obstacles
WO2023231757A1 (en) Map area contour-based setting method and robot edge end control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: 519000 room 105-514, No. 6, Baohua Road, Hengqin new area, Zhuhai City, Guangdong Province (centralized office area)

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant