CN112438658A - Cleaning area dividing method for cleaning robot and cleaning robot - Google Patents

Cleaning area dividing method for cleaning robot and cleaning robot

Info

Publication number
CN112438658A
Authority
CN
China
Prior art keywords
grid
cleaning
line segment
cleaning robot
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910807893.3A
Other languages
Chinese (zh)
Inventor
周娴玮
吴俍儒
郑卓斌
王立磊
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bona Robot Co ltd
South China Normal University
Original Assignee
Guangdong Bona Robot Co ltd
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bona Robot Co ltd, South China Normal University filed Critical Guangdong Bona Robot Co ltd
Priority to CN201910807893.3A priority Critical patent/CN112438658A/en
Publication of CN112438658A publication Critical patent/CN112438658A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a cleaning area dividing method and a cleaning robot. The method detects a wall, judges its positional relation to the global map, and, taking the wall as a reference, divides cleaning sub-areas on the basis of the global map. This improves the flexibility of the cleaning robot: for example, the sub-area cleaned in one pass is set larger in an open living room and smaller in a space-limited area such as a bedroom.

Description

Cleaning area dividing method for cleaning robot and cleaning robot
Technical Field
The invention relates to a cleaning area dividing method of a cleaning robot and the cleaning robot, and belongs to the field of robot application.
Background
Cleaning robots are used ever more widely in daily life: a cleaning robot moves in a limited space and cleans the floor, replacing tedious manual sweeping and mopping. However, requirements in different application scenarios keep rising, especially for operating accuracy and efficiency, which poses great challenges for the map building and motion control of cleaning robots. Most existing cleaning robots build a map with multiple sensors such as infrared, ultrasonic, laser radar and a camera; to improve cleaning efficiency, they adopt an area division or rasterization method to clean independent small areas one after another, for example, the map rasterization method disclosed in application 201811542832.0.
An existing map area dividing method of this kind divides the map into a number of blocks according to a fixed rule or specification on the basis of the global map, for example into interconnected square areas of 2 m × 2 m, or into interconnected sub-areas determined by some other rule, and then finishes cleaning the set areas one by one. Such a scheme subdivides the map so that the cleaning robot can complete its cleaning activities in sequence, but it lacks the ability to divide areas flexibly according to the environment map of the actual scene.
Disclosure of Invention
The invention provides a cleaning area dividing method and a cleaning robot. The method detects a wall, judges its positional relation to the global map, and, taking the wall as a reference, divides cleaning areas on the basis of the global map. This improves the flexibility of the cleaning robot: for example, the sub-area cleaned in one pass is set larger in a relatively open living room and smaller in a space-limited area such as a bedroom.
The technical scheme of the invention is a cleaning area dividing method of a cleaning robot, comprising the following steps: S1: acquiring a grid map containing obstacle distribution information;
S2: acquiring wall information in the motion space;
S3: dividing a cleaning area in the grid map by taking the obstacle grid corresponding to the wall as an edge.
Preferably, acquiring the wall information in the motion space comprises:
S21: acquiring a three-dimensional point cloud map of the motion space, the three-dimensional point cloud map being a data set consisting of a number of point cloud data;
S22: fitting a plurality of 3D line segments from the three-dimensional point cloud map;
S23: determining, among the 3D line segments, a positioning line segment corresponding to the wall, and determining a first obstacle grid corresponding to the wall in the grid map.
Preferably, acquiring the wall information in the motion space comprises: determining an acquisition range according to the measuring range of the cleaning robot's three-dimensional point cloud acquisition device; searching for the longest 3D line segment within the acquisition range; and mapping the longest 3D line segment onto the grid map, the mapped line segment being determined as the positioning line segment when it coincides with the first obstacle grid.
Preferably, within the acquisition range, the longest point cloud line segment in each direction is calculated as a candidate line segment, and the longest of the candidate line segments is selected as the longest 3D line segment.
Preferably, the positioning line segment corresponds to the intersection line of the ceiling and the wall.
Preferably, the method further comprises: S31: determining a first edge segment from the first continuous obstacle grid lying on the same straight line as the first obstacle grid, and determining a second edge segment from a second continuous obstacle grid perpendicular to the first edge segment and adjacent to the first continuous obstacle grid or the first obstacle grid;
S32: determining a rectangular area with the first edge segment and the second edge segment as adjacent sides;
S33: dividing the overlap of the rectangular area and the non-obstacle grid area of the grid map into a cleaning area.
Preferably, the length of the first edge segment is the maximum span of the first continuous obstacle grid, and the length of the second edge segment is the maximum span of the second continuous obstacle grid; or,
the length of the first edge segment is less than the maximum span of the first continuous obstacle grid, and the length of the second edge segment is less than the maximum span of the second continuous obstacle grid; or,
the length of the first edge segment is less than the maximum span of the first continuous obstacle grid, or the length of the second edge segment is less than the maximum span of the second continuous obstacle grid.
Preferably, the cleaning areas determined by the cleaning robot at different times during the cleaning activity do not overlap each other.
Preferably, the area dividing method further comprises a correction method: determining the grid coordinates, and the coordinate difference (ΔX, ΔY), of the terminal position points at the two ends recorded for the first continuous obstacle grid or the second continuous obstacle grid; determining the side length M of a grid cell; and determining the grid coordinate difference R = (ΔX/M, ΔY/M) corresponding to the coordinate difference, the newly determined grid coordinates being obtained by subtracting R from the grid coordinates.
The invention also provides a cleaning robot comprising a camera module for collecting images of the motion space and a control module for data processing and motion control; the cleaning robot performs the above area dividing method according to instructions of the control module.
Preferably, the cleaning robot cleans the divided cleaning areas in a zigzag pattern, and performs error correction after traveling in the same direction for more than a set distance or a set time.
The technical effect of the invention is as follows: the cleaning robot detects images of the running space, determines the position of the wall in the global map through the mapping relation between the feature point cloud distribution map and the grid map, and, taking the detected wall as a reference, re-divides the cleaning area on the basis of the global map for its cleaning activity. This improves the flexibility and adaptability of the cleaning robot and replaces the traditional fixed grid-by-grid cleaning mode.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; other embodiments that a person skilled in the art can obtain from these drawings without creative effort also fall within the technical solutions of the present invention.
FIG. 1 is a flow chart of the area dividing method of the present invention;
FIG. 2 is a flow chart of the method of acquiring a wall;
FIG. 3 is a flow chart of the area dividing method;
FIG. 4 is a schematic view of a first case of the embodiment;
FIG. 5 is a schematic view of a second case of the embodiment;
FIG. 6 is a schematic view of a third case of the embodiment;
FIG. 7 is a schematic view of the cleaning robot detecting the motion space;
FIG. 8 is a schematic view of judging a wall through the grid map.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the protection scope of the present invention.
FIGS. 1 to 3 show flowcharts of the method of the present invention and of its embodiments.
Step one: construct a grid map according to the recorded environmental obstacle distribution information.
In the invention, the global map is constructed by simultaneous localization and mapping (SLAM): a SLAM device acquires sensing data of the environment, including image data, point cloud data and inertial navigation data; a global map of the environment is built from the image data and the inertial navigation data; and information such as the space to be cleaned and the positions of obstacles is displayed in the global map.
The grid map is formed by rasterizing the global map and comprises obstacle grids corresponding to obstacles and blank grids corresponding to blank areas. The specific rasterization method is to divide the global map at equal intervals into interconnected cells, such as square cells of 1 m × 1 m, and to map the obstacle information or passability information of the global map onto the related cells, so that the global map is composed of obstacle grids and blank grids; in the map, the obstacle grids and the blank grids indicate respectively that the cleaning robot is not allowed, or is allowed, to pass. Specifically, the map is first rasterized, that is, divided into a number of square cells to form an initial grid map; all cells containing obstacles are defined as obstacle grids (obstacle areas), and the initial position of the cleaning robot in the initial grid map is defined. While walking, the cleaning robot records the traveled distance and direction through its code disc and updates its current position in the initial grid map in real time; while the driving unit drives the walking unit, the detection unit detects whether obstacles exist in the walking area, and if not, a blank grid is defined. The rasterized map may be established by the automatic operation of the cleaning robot, or by analysis of the global map; the final result in either case is rasterized environment information.
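By way of illustration only, the rasterization step described above can be sketched as follows in Python; the obstacle-point list, the 1 m cell size and the map bounds are assumptions for the example, not values fixed by the invention.

    import numpy as np

    FREE, OBSTACLE = 0, 1

    def rasterize(obstacle_points, width_m, height_m, cell_m=1.0):
        """Mark every cell that contains at least one obstacle point."""
        rows = int(np.ceil(height_m / cell_m))
        cols = int(np.ceil(width_m / cell_m))
        grid = np.full((rows, cols), FREE, dtype=np.int8)
        for x, y in obstacle_points:
            r, c = int(y // cell_m), int(x // cell_m)
            if 0 <= r < rows and 0 <= c < cols:
                grid[r, c] = OBSTACLE      # cell contains an obstacle
        return grid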
Step two: acquire a three-dimensional point cloud map of the motion space, the three-dimensional point cloud map being a data set consisting of a number of point cloud data.
In the invention, the environment image data is acquired by the camera device photographing the surroundings at the cleaning robot's current position, for example a plurality of images taken at different angles as the cleaning robot rotates through one full turn. Alternatively, a camera arranged on the cleaning robot takes the images while rotating in place about its own center point, specifically rotating horizontally about the vertical axis of the camera coordinate system; the rotation can be realized by the device controlling a driving motor through software.
In the present invention, one specific way of extracting the environment image is to obtain the feature points of the current position from a circumferential environment image taken by the camera. Specifically, a circumferential environment image of the device at the current position is acquired by rotating the camera; the circumferential environment image set refers to the images acquired by the rotatable camera over a full circle at that position. Because the images are acquired at set angle intervals, images at different angles are obtained after the camera rotates through one turn, and all of them are stored as the image set of the device at that position. For example, at a first position the camera rotates by a set angle at a time and collects one environment image of the current angle after each step; the image is stored as Ai, where i is the number of rotations relative to the initial position. The camera continues rotating until it returns to the initial angle, i.e. facing straight ahead of the device's movement, yielding an image set Ai (i: 1-n), where n is the total number of steps in a 360-degree rotation at the set angle. Similarly, at a second position, reached after a set offset time, the camera acquires a circumferential environment image set Bi (i: 1-n) by the same method and rotation angle; i and n have the same meaning as at the first position, but since the two positions differ, the extractable image features differ as well. This embodiment uses a rotating camera; alternatively, a multi-camera arrangement may be used, for example several cameras at different angles around the body of the cleaning robot shooting synchronously.
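A minimal sketch of this circumferential capture; the camera object with rotate_to and grab calls is hypothetical, since no hardware interface is specified in the text.

    def capture_ring(camera, step_deg=30):
        """Rotate one full circle in fixed steps, storing one frame per angle (the set Ai)."""
        images = []
        for i in range(360 // step_deg):
            camera.rotate_to(i * step_deg)   # hypothetical drive-motor call
            images.append(camera.grab())     # hypothetical frame capture
        return images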
In the invention, the feature points of the environment image are extracted from each frame of the grayscale image with the ORB feature extraction algorithm, giving the ORB feature points of each frame of the two-dimensional image. Since the image information collected by existing cameras is generally in color, the collected images are first preprocessed and converted into grayscale images, which simplifies subsequent processing. ORB (Oriented FAST and Rotated BRIEF) is an algorithm for fast feature point extraction and description, proposed by Rublee et al. in 2011; for its implementation see Rublee, E., et al., "ORB: An efficient alternative to SIFT or SURF," IEEE International Conference on Computer Vision, 2011: 2564-2571. The ORB algorithm has two parts, feature point extraction and feature point description: extraction is developed from the FAST (Features from Accelerated Segment Test) algorithm, and description is improved from the BRIEF (Binary Robust Independent Elementary Features) feature description algorithm. The ORB feature combines the FAST detection method with the BRIEF descriptor and improves and optimizes both on the original basis. The image information captured by the cleaning robot while moving is converted into a two-dimensional point cloud map; if a distance measuring device of the cleaning robot, such as a laser radar, is combined to determine the distance of obstacles, a three-dimensional point cloud (data) map can be constructed from the dual information of laser radar and camera; alternatively, the three-dimensional point cloud map can be completed by a binocular camera. This gives higher precision in mapping and positioning, and the positioning is more accurate.
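The text does not name a library, but the grayscale-plus-ORB step can be sketched with OpenCV, for example:

    import cv2

    def orb_keypoints(image_bgr, n_features=500):
        """Convert a color frame to grayscale and extract ORB keypoints and descriptors."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # preprocessing: color to grayscale
        orb = cv2.ORB_create(nfeatures=n_features)          # Oriented FAST and Rotated BRIEF
        return orb.detectAndCompute(gray, None)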
Step three: determine the wall in the motion space according to the three-dimensional point cloud map, and determine the first obstacle grid corresponding to the wall in the grid map.
In the invention, the acquisition range is determined according to the measuring range of the cleaning robot's three-dimensional point cloud acquisition device; the longest positioning line segment is searched for within the acquisition range; the positioning line segment is mapped onto the grid map, and when the mapped line segment coincides with the first obstacle grid, it is determined to be a wall. The underlying principle is that the data of the three-dimensional point cloud map provide depth information of the current motion space, indicating the length of obstacles, changes of distance, and so on. Based on the characteristics of different obstacles, a wall with its long extent appears in the three-dimensional point cloud map as a long boundary or object, whereas an ordinary obstacle is isolated; from this characteristic it can be determined whether the three-dimensional point cloud map contains a wall.
In some embodiments, the wall in the motion space may be determined in the three-dimensional point cloud map as follows: first, determine the search range according to the range of the acquisition device of the three-dimensional point cloud map; then, search for the longest 3D line segment within the search range; finally, determine the longest 3D line segment as a wall in the motion space. That is, within the search range, the longest 3D line segment in the three-dimensional point cloud map is found by traversal, and the corresponding obstacle grid is determined to be a wall. Confirming the longest 3D line segment as the wall comprises mapping the segment into the grid map and judging whether it coincides with the obstacle grids of the grid map: if it coincides, it is judged to be the wall; if not, the method is repeated until the wall is found. It should be noted that mapping means projecting a line in three-dimensional space onto the plane of the grid map, and that a partial overlap with the obstacle grid is not sufficient for the determination; strict coincidence is required.
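A sketch of this wall test under assumed conventions (grid indexed [row][column], obstacle cells marked 1); the sampling-based projection helper is illustrative, not the patent's own method.

    import numpy as np

    def longest_segment(segments_3d):
        """segments_3d: list of ((x1, y1, z1), (x2, y2, z2)) endpoint pairs."""
        return max(segments_3d,
                   key=lambda s: np.linalg.norm(np.subtract(s[1], s[0])))

    def cells_on_segment(p, q, cell_m):
        """Grid cells crossed by the 2D projection of segment p->q (dense sampling)."""
        p2, q2 = np.array(p[:2], float), np.array(q[:2], float)
        n = max(2, int(np.linalg.norm(q2 - p2) / (0.5 * cell_m)))
        return {tuple(((p2 + t * (q2 - p2)) // cell_m).astype(int))
                for t in np.linspace(0.0, 1.0, n)}

    def is_wall(segment, grid, cell_m, OBSTACLE=1):
        """Strict coincidence: every crossed cell must be an obstacle cell."""
        rows, cols = len(grid), len(grid[0])
        for c, r in cells_on_segment(segment[0], segment[1], cell_m):
            if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] != OBSTACLE:
                return False               # partial overlap is not enough
        return True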
In addition, another scheme for determining the wall is to collect the intersection line of the indoor ceiling and the wall. The specific implementation is to extract and analyze a picture taken by the cleaning robot containing the ceiling-wall intersection line: if detection determines the picture frame to be a key frame, the visual simultaneous localization and mapping system projects the point lines in the map onto the picture frame according to the camera pose of the frame to generate a corresponding point cloud, and fits a straight line in the map from the point cloud. For example, if the current frame is determined to be a key frame, the system projects key points of several key frames onto the current key frame, determines the depth of the projected points from the neighborhood information around them, generates a semi-dense depth map, and obtains the point cloud map corresponding to the picture frame. The visual simultaneous localization and mapping system uses the RANSAC (Random Sample Consensus) algorithm to fit a 3D straight line in three-dimensional space from the 3D point cloud of the current picture frame. The line is mapped onto the grid map and judged for coincidence with the obstacle grids: if it coincides, it is judged to be the wall; if not, the method is repeated until the wall is found. For the vSLAM method of visual simultaneous localization and mapping, see the relevant disclosure of specification CN107909612A. It should be noted that the plurality of 3D line segments fitted from the three-dimensional point cloud map are arbitrary line segments within the visual detection range and take different three-dimensional forms depending on the actual detection method.
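The RANSAC fit mentioned above can be sketched as follows; the iteration count and inlier threshold are assumptions for the example.

    import numpy as np

    def ransac_line_3d(points, n_iter=200, inlier_dist=0.05, rng=None):
        """Fit one 3D line to an (N, 3) point cloud; returns (point, unit direction, inlier mask)."""
        rng = rng or np.random.default_rng()
        best_mask, best = None, (None, None)
        for _ in range(n_iter):
            a, b = points[rng.choice(len(points), 2, replace=False)]
            d = b - a
            norm = np.linalg.norm(d)
            if norm < 1e-9:
                continue                    # degenerate sample, skip
            d = d / norm
            diff = points - a
            # perpendicular distance of every point to the candidate line
            dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
            mask = dist < inlier_dist
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best = mask, (a, d)
        return best[0], best[1], best_mask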
As shown in FIGS. 7 and 8, the cleaning robot 101 uses a camera, a laser radar or another sensor to capture information of the motion space; 201 is the intersection line of a wall and the ceiling, from which the point cloud distribution map 202 is constructed. The processing procedure is: determine the search range as the frame-selected area in the figure; compare one by one, by traversal, the different 3D line segments L (including L-1, L-2, L-3 and L-4) fitted within the frame-selected area, and select the longest segment L-4; map the selected L-4 onto the grid map, which is essentially parallel to the running floor, to form L-4', and judge whether L-4' falls entirely into obstacle grids. If so, L-4 lies near the intersection line of ceiling and wall, and the position of the wall, i.e. the positioning line segment, is determined; if not, the wall has not been found and the above steps are repeated.
After the wall is determined by the above method, the first obstacle grid on the corresponding grid map is found. The first obstacle grid is usually several continuous grids; the grids lying on the same continuous straight line as the first obstacle grid are called the first continuous obstacle grid. In this implementation, after the intersection line of wall and ceiling is determined, the position of the obstacle grid in the global map, that is, the position of the cleaning robot in the global map, still needs to be determined. One method is for the inertial navigation system built into the cleaning robot body to determine direction and distance, and then find the corresponding obstacle grid in the global map, completing the establishment of the first continuous obstacle grid. Another method is to process the environment information captured by the camera and determine the position against historical information, for example by capturing a fixed-landmark obstacle such as a sofa or tea table to determine the orientation and then looking up the global map information to find the associated first continuous obstacle grid; in the figure, 301 schematically shows such a fixed-landmark obstacle, e.g. a sofa or tea table.
In addition, while the cleaning robot runs in the motion space, the camera can photograph the motion space in real time; besides generating the point cloud map, the images of a binocular camera can also serve for positioning obstacles and for matching against the environment. The principle of positioning an obstacle is that the binocular camera can acquire its size, shape, height and distance from the cleaning robot by existing positioning methods. To further explain the working principle, the positioning procedure of the binocular camera is described specifically: the two cameras independently acquire, at the same moment and from different angles, two images of the same obstacle for stereo matching. The image processing system processes the two images as follows: binarization, Gaussian blur and Canny contour detection are applied to the original image to find the contour of the obstacle; the obstacle is completely framed by a minimal quadrilateral; the pixel size of the obstacle is obtained from the four vertex coordinates of the quadrilateral; and the width and height of the obstacle are calculated by the triangulation principle, realizing the measurement of the passable space. The pixel coordinates of the object's center point are obtained from the four vertices, the distance between the obstacle's center and the camera is obtained by the ranging principle, and the processed data are used for positioning. The principle of environment matching is that the vision sensor of the sweeper collects pictures of the surroundings from the moment it starts moving, and pictures rich in reference objects usable as features are stored as key frame pictures; gyroscope and odometer information between two key frames adjacent in time is recorded at the same time, to keep the error from growing too large when the interval between collected key frames is long. A bag-of-words model is built to accelerate feature matching: each feature in the key frame pictures is trained into a visual dictionary in one-to-one correspondence, the mapping of the current frame's feature points into the visual dictionary is computed, the key frame picture most similar to the current frame is found with the dictionary, and a visual loop is judged to be formed. Then, from the picture position information between the current frame and the historical frame forming the loop with it, the real position information between the two can be calculated quickly. The sweeper completes calibration automatically when it detects that the current frame and a historical frame form a visual loop; the odometer and gyroscope data of the current frame are updated according to the calculated real position information and the gyroscope information of the historical frame, so that the sweeper corrects its error and achieves accurate positioning.
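The contour pipeline described above (binarization, Gaussian blur, Canny, minimal framing quadrilateral) can be sketched with OpenCV; the thresholds are assumptions for the example.

    import cv2

    def obstacle_box(image_bgr):
        """Return corner points, pixel size and center of the largest detected contour."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        _, binary = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)   # binarization
        edges = cv2.Canny(binary, 50, 150)                             # contour detection
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        rect = cv2.minAreaRect(largest)     # minimal rotated rectangle framing the obstacle
        corners = cv2.boxPoints(rect)       # its four vertex coordinates
        return corners, rect[1], rect[0]    # vertices, (width, height) in pixels, center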
Step four: determine the first edge segment and the second edge segment.
The first edge segment is determined from the first continuous obstacle grid lying on the same straight line as the first obstacle grid. Since obstacle grids are usually marked cells connected to each other, once the first obstacle grid corresponding to the wall is determined, the other obstacle grids on the same straight line can also be determined; when all mutually adjacent obstacle grids on that line have been determined, the first edge segment is computed. On the map, the first edge segment corresponds to the whole detected continuous wall. For the convenience of subsequent area division, the length of the first edge segment is set to the maximum span of the first continuous obstacle grid; the advantage is that the maximum span can be taken as one side length of the one-pass cleaning area. It is understood that a partial length of the obstacle grid may also be selected as the length of the first edge segment; in that case the side length of the defined one-pass cleaning area is less adaptive, and the same wall has to be divided into several cleaning areas cleaned separately.
The second edge segment is determined from a second continuous obstacle grid perpendicular to the first edge segment and adjacent to the first continuous obstacle grid or the first obstacle grid. A wall does not stand alone: there is a second wall connected to it and not collinear with it, which after rasterization appears on the map as a second continuous obstacle grid perpendicular to the first. The second edge segment is determined from the first edge segment: it is determined for the second continuous obstacle grid perpendicular to the first edge segment and adjacent to the first continuous obstacle grid or the first obstacle grid. For the convenience of subsequent area division, the length of the second edge segment is set to the maximum span of the second continuous obstacle grid; the advantage is that the maximum span can be taken as the other side length of the one-pass cleaning area. It is understood that a partial length of the second continuous obstacle grid may also be selected as the length of the second edge segment; in that case the other side length of the defined one-pass cleaning area is less adaptive, and the same wall has to be divided into several cleaning areas cleaned separately.
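A sketch of deriving the two edge segments under the same assumed grid conventions as above: extend along the row and column of obstacle cells through the detected wall cell until the run of obstacles breaks.

    def horizontal_run(grid, r, c, OBSTACLE=1):
        """Maximum span of contiguous obstacle cells in row r through (r, c): first edge segment."""
        left = c
        while left - 1 >= 0 and grid[r][left - 1] == OBSTACLE:
            left -= 1
        right = c
        while right + 1 < len(grid[r]) and grid[r][right + 1] == OBSTACLE:
            right += 1
        return (r, left), (r, right)

    def vertical_run(grid, r, c, OBSTACLE=1):
        """Maximum span of contiguous obstacle cells in column c through (r, c): second edge segment."""
        top = r
        while top - 1 >= 0 and grid[top - 1][c] == OBSTACLE:
            top -= 1
        bottom = r
        while bottom + 1 < len(grid) and grid[bottom + 1][c] == OBSTACLE:
            bottom += 1
        return (top, c), (bottom, c)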
Step five: divide the cleaning area.
A rectangular area is determined with the first edge segment and the second edge segment as two adjacent sides, and the overlap of this rectangular area with the non-obstacle grid area of the grid map is divided off as a cleaning area. With the two mutually perpendicular edge segments of determined length as side lengths, a rectangular area is fixed; the overlap of the rectangular area with the blank grids in the grid map is the planned cleaning area. The significance is that the maximum theoretical operating range is first determined from the two walls; however, conditions in the real space may make parts of that region impassable (obstacle grids), and the space finally allowed for passage is determined by intersecting the rectangular area with the blank grids. In the first case the blank grid area contains the rectangular area; the rectangular area is then the planned one-pass area, and the cleaning robot completes efficient cleaning within it. In the second case the blank grid area only partially overlaps the rectangular area; the overlap is determined as the area allowed to be cleaned, and the cleaning robot completes an efficient one-pass cleaning within it. In the invention the cleaning robot is highly flexible: the one-pass sub-area is set larger in a relatively open living room and smaller in a space-limited area such as a bedroom, and is set automatically according to the wall length of the corresponding environment. This favors adaptive planning of the cleaning area and greatly improves cleaning efficiency compared with the grid-by-grid cleaning of the prior art.
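A sketch of the division step itself, continuing the assumed conventions of the earlier sketches: the rectangle spanned by the two edge segments is intersected with the blank cells, and only the overlap becomes the one-pass cleaning area.

    def cleaning_area(grid, first_seg, second_seg, FREE=0):
        """first_seg and second_seg are ((r1, c1), (r2, c2)) endpoint pairs sharing a corner."""
        rs = [first_seg[0][0], first_seg[1][0], second_seg[0][0], second_seg[1][0]]
        cs = [first_seg[0][1], first_seg[1][1], second_seg[0][1], second_seg[1][1]]
        r0, r1, c0, c1 = min(rs), max(rs), min(cs), max(cs)
        return [(r, c)
                for r in range(r0, r1 + 1)
                for c in range(c0, c1 + 1)
                if grid[r][c] == FREE]      # keep only the passable overlap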
The specific implementation of this embodiment is the method shown in FIGS. 4 to 6; FIGS. 4, 5 and 6 show control methods under different calculation modes in the same map environment, for example a larger one-pass sub-area in a relatively open living room and a smaller one in a space-limited area such as a bedroom. In these figures the global map is rasterized into a grid map comprising wall grids W and blank grids; note that other obstacle grids (not shown), corresponding to low obstacles such as furniture and household articles, may also lie within the blank area. In the different figures, A and B denote the first edge segment and the second edge segment respectively; for clarity they are drawn over the blank grids, but in practice they lie between the wall grids W and the blank grids, and the rectangular area they enclose is used to enclose blank grids. In the traditional cleaning mode, the cleaning robot cleans the blank grids one by one, moving on to the next adjacent grid after the current grid has been cleaned in a zigzag ("bow"-shaped), N-shaped, spot-cleaning or other cleaning mode.
The first case, shown in FIG. 4, is that the rectangular areas fall entirely within the blank grids. When the cleaning robot determines by the above method that the wall grid W corresponding to A-1 is the first obstacle grid, it determines on the basis of the global map that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-1, and rectangular area I is determined. Rectangular area I falls entirely into blank grids, so the overlap is the area for running and cleaning; the cleaning robot completes free continuous cleaning within rectangular area I, improving efficiency over grid-by-grid cleaning. When the cleaning robot determines that the wall grid W corresponding to A-2 is the first obstacle grid, it determines on the basis of the global map that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-2, and rectangular area II is determined.
The second case, shown in FIG. 5, is that the area of the rectangular area is larger than the blank grid area. When the cleaning robot determines by the above method that the wall grid W corresponding to A-3 is the first obstacle grid, it determines on the basis of the global map that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-3, and rectangular area III is determined. Since rectangular area III only partly overlaps the blank grids, and the overlapped obstacle area is impassable, the overlap is the area for running and cleaning, and the cleaning robot completes free continuous cleaning within it.
The third case, shown in FIG. 6, is that the area of the rectangular area is smaller than the blank grid area. When the cleaning robot determines by the above method that the wall grid W corresponding to A-4 is the first obstacle grid, it determines on the basis of the global map that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-4, and rectangular area IV is determined; since rectangular area IV overlaps part of the blank grids, the overlap is the area for running and cleaning, and the cleaning robot completes free continuous cleaning within it. When the cleaning robot determines that the wall grid W corresponding to A-5 is the first obstacle grid, it determines that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-5, and rectangular area V is determined; since rectangular area V overlaps part of the blank grids, the overlap is the area for running and cleaning, and the cleaning robot completes free continuous cleaning within it. When the cleaning robot determines that the wall grid W corresponding to A-6 is the first obstacle grid, it determines that the adjacent perpendicular longer wall grid W is the second continuous obstacle grid, corresponding to B-6; but because rectangular areas IV and V are detected as already cleaned, rectangular area VI is determined at this point so as to avoid repeated cleaning, and since rectangular area VI overlaps part of the blank grids, the overlap is the area for cleaning and the cleaning robot completes free continuous cleaning within it. In this case the cleaning robot can avoid repeated cleaning according to the recorded operation sequence and cleaning path, further improving reliability and flexibility; it is understood that, regardless of the order of operation, the cleaning task is judged complete when the whole global map has been covered.
In the three cases of FIGS. 4 to 6, different judgment results are obtained when different walls are detected, and the cleaning area is determined by the planning method of the present invention. On these different judgment results the actually divided cleaning areas can be configured flexibly according to the situation of the global map, which strengthens the flexibility and adaptability of the cleaning robot and replaces the traditional fixed grid-by-grid cleaning mode, achieving the aim that the one-pass sub-area is set larger in a relatively open living room and smaller in a space-limited area such as a bedroom. In many cases the arrows shown in the figures indicate a rough course of operation and are not limited to a single pass; for example, all cleaning tasks in rectangular area I may be completed by reciprocating cyclically many times parallel to B-1, each pass separated by one body width. In many cases the robot can accelerate to increase the cleaning speed when the running distance is long; in addition, zigzag, N-shaped, spot-cleaning and other cleaning modes can be arranged to clean the divided rectangular areas.
It should be noted that in the above embodiment the rectangular area is determined using edge segments of maximum-span length; in other embodiments the cleaning area may be determined with shorter edge segments, and the planning process is the same as above. Specifically, the length of the first edge segment is less than the maximum span of the first continuous obstacle grid and the length of the second edge segment is less than the maximum span of the second continuous obstacle grid; or, the length of the first edge segment is less than the maximum span of the first continuous obstacle grid, or the length of the second edge segment is less than the maximum span of the second continuous obstacle grid. The cleaning area so determined is still divided on the basis of the global map with the detected wall as reference, but since each divided area is relatively small, the division is performed more times and more cleaning areas are generated.
Among the cleaning areas divided by the area dividing method, several cleaning areas may exist, owing to the complexity of the global map or because the area of a determined region cannot completely cover the blank grids. To ensure that no area is cleaned repeatedly, the control module of the cleaning robot can generate cleaning records, preventing the repeatedly generated cleaning areas from overlapping. As for blank grids that remain uncleaned, some are missed because no wall can be detected, some because the planned area is small; the cleaning robot memorizes the uncleaned areas and cleans them after the overall cleaning is finished, or during operation, thus achieving cleaning of all blank grids of the whole global map.
In the present invention the divided cleaning areas are large, so walking errors, which accumulate as walking distance or time increases, need to be eliminated during operation. A correction method is provided: judge whether the robot's walking time has reached a preset time; if it has, the robot has been walking for a long time, the accumulated walking error is large, and correction is needed, so the robot is determined to meet the preset positioning condition and the subsequent correction operation is performed. The preset time can be set according to specific design requirements, preferably within the range of 10 to 20 minutes; in this embodiment it is set to 15 minutes. If the walking time has not reached the preset time but the walking path has reached a first preset length, the robot's path is long, the accumulated walking error is large, and error correction may be performed. This suits the situation where the robot runs smoothly or slips little along edges: the robot covers a long distance in a short time, yet over that long distance small errors accumulate into a large walking error. Therefore, when the walking path reaches the first preset length, the robot is determined to meet the preset positioning condition and the subsequent correction operation is required. The first preset length can also be set according to specific design requirements, preferably within the range of 30 to 50 meters; in this embodiment it is set to 40 meters. In addition, walking time can be combined with path length to judge whether error correction is needed, determining more accurately whether the accumulated walking error has reached the degree requiring correction. This method judges whether the robot needs error correction from walking time and/or walking distance; the judgment is simple and convenient, the data processing easy, the result accurate, and the method suitable for general use.
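The trigger logic amounts to a simple threshold test; a sketch with the preferred values of this embodiment (15 minutes, 40 meters) as defaults:

    def needs_correction(elapsed_min, distance_m, max_min=15.0, max_dist_m=40.0):
        """True once accumulated walking time or path length warrants error correction."""
        return elapsed_min >= max_min or distance_m >= max_dist_m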
One technical solution the error correction can adopt is the wall-edge reference method. Before the cleaning robot corrects itself, an accurate and reliable reference wall edge is screened out as the basis for effective correction. The method of screening the reference wall edge comprises the following steps. Step one: the cleaning robot rotates in place, and the position coordinates of each detection point are determined from the distance value detected by the distance sensor and the angle value detected by the angle sensor; then proceed to step two. Step two: based on the position coordinates of the detection points, judge whether the slope of the straight line formed by two adjacent detection points is within a preset error range; if so, the edge corresponding to the detection points on a line whose slope is within the preset error range is determined to be a straight edge and step three follows; otherwise, the edge corresponding to the detection points on a line whose slope is not within the preset error range is not a straight edge. Step three: analyze the images shot by the robot's vision sensor during rotation and identify the straight edges in the image, then proceed to step four. Step four: take as the reference wall edge the straight edge, corresponding to detection points on a line with slope within the preset error range, that matches the longest straight edge in the image. The cleaning robot thus screens the detected straight edges as wall edges, and the screened-out straight edge is the reference wall edge.
Another error correction method can be implemented by speed monitoring. First step: the cleaning robot comprises a gyroscope provided with angular velocity sensors for the x, y and z axes and an acceleration sensor, so that the changes of the x-, y- and z-axis angular velocities and of the acceleration of the cleaning robot can be monitored; the x-, y- and z-axis angular velocity changes are the three-axis angular velocity changes, and the acceleration change is the change of linear acceleration along the x, y and z axes. Second step: the gyroscope sends the monitored changes of the three-axis angular velocities and acceleration to the control unit, which judges from them whether the automatic floor cleaning robot has deviated from the set route; preferably, the control unit computes a heading angle from these data and judges deviation from the set route by the change of the heading angle. Third step: the control unit adjusts the deviation of the automatic floor cleaning robot from the set route. For example, comparing the change of the heading angle: if the heading angle increases, the cleaning robot is drifting to the right and the speeds of the left and right driving motors must be adjusted (left driving motor decreased appropriately, right driving motor increased appropriately); if the heading angle decreases, the robot is drifting to the left and the speeds must be adjusted the other way (left driving motor increased appropriately, right driving motor decreased appropriately). This embodiment judges, through the connection between gyroscope and control unit, whether the cleaning robot has deviated from the set route, so as to correct the route deviation in straight-line operation caused by the error between the two groups of driving motors.
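A sketch of this differential correction; the proportional gain and the speed representation are assumptions, since the text only states the direction of each adjustment.

    def correct_heading(base_speed, heading_now, heading_set, gain=0.5):
        """Return (left, right) motor speeds nudged against the heading error."""
        error = heading_now - heading_set   # positive: heading angle increased, drifting right
        return base_speed - gain * error, base_speed + gain * error   # slow left, speed up right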
In addition, since the cleaning robot works on a global map constructed during previous cleaning, local changes may occur under special conditions, for example when a user moves an originally fixed obstacle such as a tea table or sofa, i.e. an original obstacle grid becomes a blank grid and/or a blank grid becomes an obstacle grid; the global map can therefore also be corrected during operation. The map information stored by the robot is corrected based on the difference between the robot's current positioning data and the positioning parameters corresponding to the first/second continuous obstacle grid. Because relocation only corrects the current coordinate position, the originally stored grid map still contains errors, which may even worsen after relocation, so the related map information of the grid map must also be corrected after relocation. The correction may keep the state information of the grid cells unchanged and only correct their grid coordinate values correspondingly; the corrected grid map can then be used for accurate navigation, improving the robot's walking efficiency. Determine the coordinate difference (ΔX, ΔY) of the terminal position points at the two ends recorded for the first/second continuous obstacle grid; determine the side length M of a grid cell; determine the grid coordinate difference R = (ΔX/M, ΔY/M) corresponding to the coordinate difference; and subtract R from the grid coordinates of the grid cells in the grid map as the newly determined grid coordinates, keeping the state information of the grid cells unchanged. For example, if the current coordinate value detected by the robot at one end point is (46, 58) and the recorded coordinate value of the terminal position point at the second end point is (26, 38), the coordinate difference is (46-26, 58-38) = (20, 20). Since the side length of each grid cell is 20, the resulting grid coordinate difference is R = ((46-26)/20, (58-38)/20) = (1, 1). Finally, (1, 1) is subtracted from the grid coordinates of the grid cells in the grid map to give the newly determined grid coordinates; the state information of the grid cells before and after the change of coordinate values remains unchanged. In this embodiment, the information of the grid map can be adjusted quickly and effectively by converting position point coordinate values, so that the latest and most accurate grid map is quickly re-determined and more accurate data are provided for the robot's subsequent navigation.
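A worked sketch of this map correction using the numbers from the text: endpoints (46, 58) and (26, 38) with cell side M = 20 give R = (1, 1), which is subtracted from every grid coordinate while cell states stay unchanged.

    def corrected_coords(grid_coords, detected_xy, recorded_xy, cell_side):
        """Shift all grid coordinates by R; state information is untouched."""
        dx = detected_xy[0] - recorded_xy[0]
        dy = detected_xy[1] - recorded_xy[1]
        rx, ry = dx // cell_side, dy // cell_side    # R = (ΔX/M, ΔY/M)
        return [(gx - rx, gy - ry) for gx, gy in grid_coords]

    # Example from the text: R = ((46 - 26) // 20, (58 - 38) // 20) = (1, 1)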
The embodiments of the present invention disclosed herein are merely examples for the purpose of clearly illustrating the invention and should not be considered as limiting its scope; equivalent changes, modifications and variations that a person skilled in the art may make within the claims are intended to be included within the scope of the invention as defined by the appended claims.

Claims (11)

1. A cleaning area dividing method of a cleaning robot, comprising:
s1: acquiring a grid map containing obstacle distribution information;
s2: acquiring wall information in a motion space;
s3: and dividing a cleaning area by taking the barrier grid of the corresponding wall as a side line in the grid map.
2. The cleaning area dividing method according to claim 1, wherein acquiring the wall information in the motion space comprises:
S21: acquiring a three-dimensional point cloud map of the motion space, the three-dimensional point cloud map being a data set consisting of a number of point cloud data;
S22: fitting a plurality of 3D line segments from the three-dimensional point cloud map;
S23: determining, among the 3D line segments, a positioning line segment corresponding to the wall, and determining a first obstacle grid corresponding to the wall in the grid map.
3. The cleaning area dividing method according to claim 2, wherein acquiring the wall information in the motion space comprises:
determining an acquisition range according to the measuring range of the cleaning robot's three-dimensional point cloud acquisition device;
searching for the longest 3D line segment within the acquisition range;
and mapping the longest 3D line segment onto the grid map, the mapped line segment being determined as the positioning line segment when it coincides with the first obstacle grid.
4. The cleaning area dividing method according to claim 3, characterized in that: within the acquisition range, the longest point cloud line segment in each direction is calculated as a candidate line segment, and the longest of the candidate line segments is selected as the longest 3D line segment.
5. The cleaning area dividing method according to claim 2, characterized in that: the positioning line segment corresponds to the intersection line of the ceiling and the wall.
6. The cleaning area dividing method according to claim 2, further comprising:
s31: determining a first edge segment by using a first continuous barrier grid on the same straight line where the first barrier grid is located, and determining a second edge segment by using a second continuous barrier grid which is perpendicular to the first edge segment and adjacent to the first continuous barrier grid or the first barrier grid;
s32: determining a rectangular area by taking the first edge line segment and the second edge line segment as adjacent edges;
s33: and dividing the overlapped part of the rectangular area and the non-obstacle grid area in the grid map into a cleaning area.
7. The cleaning area dividing method according to claim 6, characterized in that: the length of the first edge segment is the maximum span of the first continuous obstacle grid, and the length of the second edge segment is the maximum span of the second continuous obstacle grid; or,
the length of the first edge segment is less than the maximum span of the first continuous obstacle grid, and the length of the second edge segment is less than the maximum span of the second continuous obstacle grid; or,
the length of the first edge segment is less than the maximum span of the first continuous obstacle grid, or the length of the second edge segment is less than the maximum span of the second continuous obstacle grid.
8. The cleaning area dividing method according to claim 6, characterized in that: the cleaning areas determined by the cleaning robot at different times during the cleaning activity do not overlap each other.
9. The cleaning area dividing method according to claim 6, further comprising a correction method of:
determining the grid coordinates, and the coordinate difference (ΔX, ΔY), of the terminal position points at the two ends recorded for the first continuous obstacle grid or the second continuous obstacle grid;
determining the side length M of a grid cell;
and determining the grid coordinate difference R = (ΔX/M, ΔY/M) corresponding to the coordinate difference, the newly determined grid coordinates being obtained by subtracting R from the grid coordinates.
10. A cleaning robot, comprising a camera module for collecting images of the motion space and a control module for data processing and motion control, characterized in that: the cleaning robot performs the area dividing method according to any one of claims 1 to 9 according to instructions of the control module.
11. The cleaning robot of claim 10, characterized in that: the cleaning robot cleans the divided cleaning areas in a zigzag pattern, and performs error correction after traveling in the same direction for more than a set distance or a set time.
CN201910807893.3A 2019-08-29 2019-08-29 Cleaning area dividing method for cleaning robot and cleaning robot Pending CN112438658A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910807893.3A CN112438658A (en) 2019-08-29 2019-08-29 Cleaning area dividing method for cleaning robot and cleaning robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910807893.3A CN112438658A (en) 2019-08-29 2019-08-29 Cleaning area dividing method for cleaning robot and cleaning robot

Publications (1)

Publication Number Publication Date
CN112438658A 2021-03-05

Family

ID=74741978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910807893.3A Pending CN112438658A (en) 2019-08-29 2019-08-29 Cleaning area dividing method for cleaning robot and cleaning robot

Country Status (1)

Country Link
CN (1) CN112438658A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113768419A (en) * 2021-09-17 2021-12-10 安克创新科技股份有限公司 Method and device for determining sweeping direction of sweeper and sweeper
CN115316887A (en) * 2022-10-17 2022-11-11 杭州华橙软件技术有限公司 Robot control method, robot, and computer-readable storage medium
CN115316887B (en) * 2022-10-17 2023-02-28 杭州华橙软件技术有限公司 Robot control method, robot, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
KR102447461B1 (en) Estimation of dimensions for confined spaces using a multidirectional camera
CN110801180B (en) Operation method and device of cleaning robot
Veľas et al. Calibration of rgb camera with velodyne lidar
Liu et al. Indoor localization and visualization using a human-operated backpack system
JP6202544B2 (en) Robot positioning system
US20170287166A1 (en) Camera calibration method using a calibration target
CN110134117B (en) Mobile robot repositioning method, mobile robot and electronic equipment
CN111182174B (en) Method and device for supplementing light for sweeping robot
CN108481327A (en) A kind of positioning device, localization method and the robot of enhancing vision
Kühner et al. Large-scale volumetric scene reconstruction using lidar
CN112438658A (en) Cleaning area dividing method for cleaning robot and cleaning robot
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
Lui et al. Robust egomotion estimation using ICP in inverse depth coordinates
Koch et al. Wide-area egomotion estimation from known 3d structure
CN111609854A (en) Three-dimensional map construction method based on multiple depth cameras and sweeping robot
Hebert et al. Progress in 3–D Mapping and Localization
Runceanu et al. Indoor point cloud segmentation for automatic object interpretation
Strand et al. Using an attributed 2D-grid for next-best-view planning on 3D environment data for an autonomous robot
Blaer et al. Two stage view planning for large-scale site modeling
Hemmat et al. Improved ICP-based pose estimation by distance-aware 3D mapping
Liu et al. Processed RGB-D slam using open-source software
Ozkan et al. Surface profile-guided scan method for autonomous 3D reconstruction of unknown objects using an industrial robot
Lieret et al. Automated exploration, capture and photogrammetric reconstruction of interiors using an autonomous unmanned aircraft
Triebel et al. First steps towards a robotic system for flexible volumetric mapping of indoor environments
CN112783147A (en) Trajectory planning method and device, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination