CN112015175A - Room partitioning method, system, terminal, and medium for mobile robot - Google Patents

Room partitioning method, system, terminal, and medium for mobile robot

Info

Publication number
CN112015175A
Authority
CN
China
Prior art keywords
room
map
closed
mobile robot
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010808090.2A
Other languages
Chinese (zh)
Inventor
王洋涛
屠成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huaxin Information Technology Co Ltd
Original Assignee
Shenzhen Huaxin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huaxin Information Technology Co Ltd filed Critical Shenzhen Huaxin Information Technology Co Ltd
Priority to CN202010808090.2A
Publication of CN112015175A

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present application provides a room segmentation method, system, terminal, and medium for a mobile robot. It addresses a shortcoming of the prior art: a working mobile robot encounters interference from many protruding wall obstacles, and avoiding them only on the spot causes numerous problems, reducing the robot's motion-planning efficiency and the user experience.

Description

Room partitioning method, system, terminal, and medium for mobile robot
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a room segmentation method, system, terminal, and medium for a mobile robot.
Background
As quality of life improves, robots have come into widespread use. Current robots navigate indoor spaces and move between rooms using ultrasonic and collision sensing: after automatically detecting an obstacle, the robot avoids the collision and turns by itself. However, a working mobile robot encounters interference from many protruding wall obstacles, and avoiding them only on the spot causes numerous problems, reducing the robot's motion-planning efficiency and the user experience.
Summary of the Application
In view of the above-mentioned shortcomings of the prior art, an object of the present application is to provide a room segmentation method, system, terminal, and medium for a mobile robot, to solve the prior-art problems that a working mobile robot encounters many protruding wall obstacles, and that avoiding them only on the spot causes numerous problems, reducing the robot's motion-planning efficiency and the user experience.
To achieve the above and other related objects, the present application provides a room segmentation method for a mobile robot, including: acquiring a map constructed from the current environment; calculating an outer contour of the room for the passable area in the map; dilating the obstacle line segments in the map, and using the dilated obstacle lines to detect boundary line segments that can close the room; superimposing the outer contour of the room with the boundary line segments to obtain one or more closed areas; detecting one or more closed contours from the closed areas; and screening the closed contours to obtain a room segmentation result.
In an embodiment of the present application, the method further includes preprocessing the map constructed from the current environment, where the preprocessing includes denoising and/or boundary smoothing of the map.
In an embodiment of the present application, calculating the outer contour of the room for the passable area in the map includes: computing the outer contour of the room for the passable area in the map with a topology-based boundary computation method.
In an embodiment of the present application, dilating the obstacle line segments in the map and using the dilated obstacle lines to detect boundary line segments that can close the room includes: dilating the obstacle line segments in the map; and, from the dilated obstacle line segments, detecting line segments that fill gaps so as to close the room.
In an embodiment of the present application, screening the closed contours to obtain the room segmentation result includes: removing closed contours that do not conform to the size, hierarchy, shape, or positional relationships of a room, to obtain the room segmentation result.
In an embodiment of the present application, the room segmentation result includes: indoor segmentation contours, labeled room outer contours, and a grid map labeling the different rooms.
To achieve the above and other related objects, the present application provides a room segmentation system for a mobile robot, comprising: an acquisition module, for acquiring a map constructed from the current environment; an outer contour module, for calculating an outer contour of the room for the passable area in the map; a closed line segment detection module, for dilating the obstacle line segments in the map and using the dilated obstacle lines to detect boundary line segments that can close the room; an overlapping module, for superimposing the outer contour of the room with the boundary line segments to obtain one or more closed areas; a closed contour module, for obtaining one or more closed contours from the detected closed areas; and a segmentation result acquisition module, for screening the closed contours to obtain a room segmentation result.
To achieve the above and other related objects, the present application provides a room segmentation terminal for a mobile robot, comprising: one or more interfaces for communicating with external devices; a memory for storing a computer program; and a processor that runs the computer program to perform the room segmentation method for a mobile robot.
To achieve the above and other related objects, the present application provides a computer-readable storage medium storing a computer program which, when executed, implements the room segmentation method for a mobile robot.
As described above, the room segmentation method, system, terminal, and medium for a mobile robot according to the present application have the following beneficial effects: the application builds a room map from sensor data; through image preprocessing, the machine highlights wall obstacles and removes interference; it then uses the wall line segments on the map together with the independent room areas to compute closed room blocks, so that different rooms can be divided along their walls. Rooms are thereby divided more reasonably, and motion-planning efficiency and the user experience are improved.
Drawings
Fig. 1 is a flowchart illustrating a room segmentation method for a mobile robot according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a room segmentation system for a mobile robot according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a room splitting terminal for a mobile robot according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," "over," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
The present application provides a room segmentation method for a mobile robot, to solve the prior-art problems that a working mobile robot encounters interference from many protruding wall obstacles, and that avoiding them only on the spot causes numerous problems, reducing the robot's motion-planning efficiency and the user experience.
The following detailed description of the embodiments of the present application is made with reference to fig. 1, so that those skilled in the art can easily implement them. The present application may be embodied in many different forms and is not limited to the embodiments described herein.
As shown in fig. 1, a flowchart of a room segmentation method for a mobile robot in one embodiment is presented; the method includes the following steps:
step S11: and acquiring a map constructed according to the current environment.
Optionally, a binary grid map constructed as a whole from the current environment is obtained.
Optionally, the binary grid map is converted into a Mat image conforming to the OpenCV standard.
Optionally, the mobile robot constructs the map of the current environment from one or more of odometry, monocular/binocular vision, 3D lidar, 2D lidar, and RGB-D/TOF sensors.
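As an illustration of the conversion described above, an occupancy grid can be mapped to the 8-bit image convention used by the later contour steps (passable space white, obstacles black). The sketch below is not from the patent: it uses NumPy only, and the 0/1 cell convention and function name are assumptions for illustration.

```python
import numpy as np

def grid_to_image(grid):
    """Convert an occupancy grid to an 8-bit binary image.

    grid: 2-D array with 1 for obstacle cells and 0 for free cells.
    Free (passable) space becomes white (255) and obstacles black (0),
    matching the white-area convention used for contour extraction.
    A cv2-based pipeline would wrap this array as an OpenCV Mat.
    """
    grid = np.asarray(grid)
    return np.where(grid == 0, 255, 0).astype(np.uint8)

# A tiny 4x4 grid: a 2x2 free cell surrounded by walls.
grid = [[1, 1, 1, 1],
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1]]
img = grid_to_image(grid)
```

In a real system the grid would come from the SLAM map rather than a literal list.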
Step S12: an outer contour of a room is calculated for a navigable area in the map.
Optionally, the edge of the passable area in the map is computed to obtain the outer contour of the room.
Optionally, calculating the outer contour of the room for the passable area in the map includes: computing the outer contour of the room for the passable area in the map with a topology-based boundary computation method.
Optionally, the passable area appears as a white region in the map; that is, the border of the white region is computed using a topology-based boundary calculation method to obtain the outer contour of the room.
Optionally, the room outer contour is stored.
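A topology-based boundary computation in the OpenCV sense is Suzuki-style contour tracing (cv2.findContours with RETR_EXTERNAL). As a much simpler illustrative stand-in, the sketch below merely marks the boundary cells of the passable area; it does not produce an ordered contour and is an assumption-laden simplification, not the patent's method.

```python
import numpy as np

def outer_boundary(free):
    """Mark boundary cells of the passable area.

    free: boolean 2-D array, True for passable cells. A passable cell
    is a boundary cell if any 4-neighbour is non-passable or outside
    the map. An ordered outer contour would instead come from
    topology-based tracing, e.g. cv2.findContours(RETR_EXTERNAL).
    """
    free = np.asarray(free, dtype=bool)
    padded = np.pad(free, 1, constant_values=False)
    # A cell is interior only if all four neighbours are passable.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return free & ~interior

# A 5x5 fully passable area: its outer contour is the 16-cell ring.
free = np.ones((5, 5), dtype=bool)
ring = outer_boundary(free)
```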
Step S13: and expanding the obstacle line segments in the map, and detecting boundary line segments which can enable the room to be closed by utilizing the expanded obstacle lines.
Optionally, the obstacle line segments in the map are expanded, and the expanded obstacle lines are used to detect boundary line segments meeting the room closing criterion.
Optionally, the expanding the obstacle line segment in the map and detecting a boundary line segment that can close the room by using the expanded obstacle line segment includes: dilating the obstacle line segment in the map; and detecting a line segment for filling a gap to close the room according to the expanded obstacle line segment.
Specifically, the detection success probability of the same wall is improved through line segment expansion, so that the door is completely filled, and a room is closed. And a method capable of filling gaps is adopted in the detection line segments, and the wall on the same side is ensured to be completely detected by controlling parameters.
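The dilation step can be sketched with array shifts: one dilation pass closes a one-cell gap (for instance, a narrow doorway) in a wall line. In practice cv2.dilate with a structuring element would be used; the function below is an illustrative 4-connected equivalent, not the patent's implementation.

```python
import numpy as np

def dilate(mask, iterations=1):
    """4-connected binary dilation via array shifts (equivalent to
    cv2.dilate with a cross-shaped kernel). Each pass grows every
    obstacle cell into its four neighbours, filling small gaps."""
    mask = np.asarray(mask, dtype=bool)
    for _ in range(iterations):
        padded = np.pad(mask, 1, constant_values=False)
        mask = (padded[1:-1, 1:-1] | padded[:-2, 1:-1] | padded[2:, 1:-1] |
                padded[1:-1, :-2] | padded[1:-1, 2:])
    return mask

# A wall line with a one-cell doorway gap at column 3.
wall = np.zeros((1, 7), dtype=bool)
wall[0, :3] = True
wall[0, 4:] = True
closed = dilate(wall)  # the gap at column 3 is filled
```

A subsequent erosion (or contour-level gap filling, as the patent describes) would restore the wall thickness while keeping the doorway closed.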
Step S14: and superposing the outer contour of the room and the boundary line segment to obtain one or more closed areas.
Optionally, the outline of each room is superimposed on the detected boundary line segment, so that the outlines of the rooms are relatively complete, and the integrity of the edge is ensured.
Optionally, the detected boundary line segments correspond to walls and doors in the room, and a complete room inner space is obtained by coinciding with the outer contour of the room, wherein the room inner space is divided into several sub-rooms.
Step S15: and detecting one or more closed contours according to the closed area.
Alternatively, the room profile is detected by enclosed areas, all of which are detected as one independent profile.
Optionally, the manner of obtaining one or more closed contours according to the closed region detection includes: each closed region is detected as a closed contour corresponding thereto. Wherein the closed contour is a contour of a sub-room in the room.
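Steps S14 and S15 together can be sketched as connected-component labeling on the superimposed map: the boundary segments partition the free space into regions, and each region yields one closed contour. The pure-Python sketch below is illustrative only (cv2.connectedComponents or cv2.findContours would be used in practice), and the two-room test map is an assumed example.

```python
from collections import deque

import numpy as np

def label_regions(free):
    """Label 4-connected regions of passable space with BFS flood fill.

    free: boolean 2-D array, True for passable cells after the outer
    contour and closing boundary segments have been superimposed as
    obstacles. Each labeled region corresponds to one closed contour,
    i.e. one candidate sub-room. Returns (labels, region_count).
    """
    free = np.asarray(free, dtype=bool)
    rows, cols = free.shape
    labels = np.zeros((rows, cols), dtype=int)
    count = 0
    for i in range(rows):
        for j in range(cols):
            if free[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and free[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two rooms separated by a wall running down column 3.
free = np.ones((5, 7), dtype=bool)
free[:, 3] = False
labels, count = label_regions(free)  # two regions found
```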
Step S16: and screening the closed contour to obtain a room segmentation result.
Optionally, the method for obtaining the room segmentation result by screening the closed contour includes: and removing the closed contour which does not conform to the size, the hierarchy, the shape and the position relation of the room to obtain a room segmentation result.
For example, if the size of the closed contour is larger than the size of the room, which is an unsatisfactory case, the closed contour is removed.
Optionally, the room segmentation result includes: the method comprises the steps of dividing outlines in rooms, marking outer outlines of the rooms and marking grid point maps of different rooms.
The indoor segmentation contour is obtained from the screened closed contour, the outer contour of the room is obtained through the steps, and different rooms are marked on the map according to the information.
Optionally, the room segmentation result includes a room outline point set, and a grid point map labeling different rooms.
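The screening of step S16 can be sketched as a filter over the labeled regions. The area thresholds below are illustrative assumptions, not values from the patent; a full implementation would also check the hierarchy, shape, and positional criteria mentioned above.

```python
import numpy as np

def filter_rooms(labels, count, min_area=4, max_area=10_000):
    """Keep only regions whose cell count is plausible for a room.

    labels: integer label map (0 = obstacle/background, 1..count = regions).
    The thresholds are hypothetical; real values would depend on map
    resolution and the expected room sizes.
    """
    kept = []
    for k in range(1, count + 1):
        area = int((labels == k).sum())
        if min_area <= area <= max_area:
            kept.append(k)
    return kept

# Region 1 has 4 cells (a plausible room), region 2 only 1 (noise).
labels = np.array([[1, 1, 0, 2],
                   [1, 1, 0, 0]])
kept = filter_rooms(labels, 2)
```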
Optionally, the method further includes preprocessing the map constructed from the current environment, where the preprocessing includes denoising and/or boundary smoothing of the map.
Optionally, the denoising includes clearing scattered black and white speckles in the map to reduce the probability of detecting false contours.
Optionally, the boundary smoothing includes smoothing the boundaries of the map to improve the accuracy of contour detection.
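The speckle-clearing step can be sketched as flipping isolated pixels that disagree with all four of their neighbours. This is a minimal stand-in for the median or morphological filtering (e.g. cv2.medianBlur or open/close operations) an implementation would typically use; the function name and neighbourhood rule are assumptions.

```python
import numpy as np

def remove_speckles(img):
    """Flip isolated pixels that differ from all four neighbours.

    img: boolean 2-D map (True = free). Lone black or white speckles
    are removed, reducing the chance of detecting false contours.
    Edge padding replicates border values so border pixels are never
    treated as isolated.
    """
    img = np.asarray(img, dtype=bool)
    padded = np.pad(img, 1, mode='edge')
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    isolated = (img != up) & (img != down) & (img != left) & (img != right)
    return np.where(isolated, ~img, img)

# A single black speckle in the middle of free space is cleared.
img = np.ones((3, 3), dtype=bool)
img[1, 1] = False
clean = remove_speckles(img)
```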
Based on principles similar to those of the above embodiments, the present application provides a room segmentation system for a mobile robot.
Specific embodiments are described below in conjunction with the accompanying figures:
fig. 2 is a schematic structural diagram showing a room division system for a mobile robot in an embodiment of the present application.
The system comprises:
an obtaining module 21, configured to obtain a map constructed according to a current environment;
an outer contour module 22 for calculating an outer contour of a room for a passable area in the map;
a closed line segment detection module 23, configured to dilate the obstacle line segments in the map, and to detect boundary line segments that can close the room by using the dilated obstacle lines;
an overlapping module 24, configured to overlap the outer contour of the room with the boundary line segment to obtain one or more enclosed areas;
a closed contour module 25, configured to obtain one or more closed contours according to the closed region detection;
and a segmentation result obtaining module 26, configured to filter the closed contour to obtain a room segmentation result.
Optionally, the obtaining module 21 is connected to the outer contour module 22, the outer contour module 22 is connected to the closed line segment detecting module 23, the closed line segment detecting module 23 is connected to the overlapping module 24, the overlapping module 24 is connected to the closed contour module 25, and the closed contour module 25 is connected to the segmentation result obtaining module 26.
Optionally, the obtaining module 21 obtains a binary grid map constructed as a whole from the current environment.
Optionally, the obtaining module 21 converts the binary grid map into a Mat image conforming to the OpenCV standard.
Optionally, the mobile robot constructs a map of the current environment according to one or more of odometer, monocular/binocular vision, 3D lidar, 2D lidar and RGB-D/TOF sensors.
Optionally, the outer contour module 22 calculates an edge of a passable area in the map to obtain an outer contour of the room.
Optionally, calculating the outer contour of the room for the passable area in the map by the outer contour module 22 includes: computing the outer contour of the room for the passable area in the map with a topology-based boundary computation method.
Optionally, the passable area appears as a white region in the map; that is, the border of the white region is computed using a topology-based boundary calculation method to obtain the outer contour of the room.
Optionally, the outer contour module 22 stores the room outer contour.
Optionally, the closed line segment detection module 23 dilates the obstacle line segments in the map and uses the dilated obstacle lines to detect boundary line segments meeting the room-closing criterion.
Optionally, the closed line segment detection module 23 dilating the obstacle line segments in the map and detecting the boundary line segments that can close the room using the dilated obstacle lines includes: dilating the obstacle line segments in the map; and, from the dilated obstacle line segments, detecting line segments that fill gaps so as to close the room.
Specifically, the closed line segment detection module 23 increases the probability that the same wall is detected as a single segment through line segment dilation, so that doorways are completely filled and the room is closed. A gap-filling method is used during line segment detection, and the parameters are controlled to ensure that a wall on the same side is detected in its entirety.
Optionally, the overlapping module 24 superimposes the outer contour of the room with the detected boundary line segments so that the contour of each room is relatively complete, ensuring the integrity of the edges.
Optionally, the boundary line segments detected by the overlapping module 24 correspond to the walls and doors of the room; coinciding them with the outer contour of the room yields a complete room interior, which is divided into several sub-rooms.
Optionally, the closed contour module 25 detects the room contours from the closed areas: each closed area is detected as one independent contour.
Optionally, obtaining one or more closed contours by the closed contour module 25 from the detected closed areas includes: detecting each closed area as its corresponding closed contour, where each closed contour is the contour of a sub-room within the room.
Optionally, screening the closed contours by the segmentation result obtaining module 26 to obtain the room segmentation result includes: removing closed contours that do not conform to the size, hierarchy, shape, or positional relationships of a room, to obtain the room segmentation result.
For example, a closed contour whose size is larger than the size of the room does not meet the criteria and is removed.
Optionally, the room segmentation result includes: indoor segmentation contours, labeled room outer contours, and a grid map labeling the different rooms.
The indoor segmentation contours are obtained from the screened closed contours, the outer contour of the room is obtained in the preceding steps, and the different rooms are labeled on the map according to this information.
Optionally, the room segmentation result includes a set of room contour points and a grid map labeling the different rooms.
As shown in fig. 3, a schematic structural diagram of a room dividing terminal 30 for a mobile robot in the embodiment of the present application is shown.
The room division terminal 30 for a mobile robot includes:
one or more interfaces 31, for communicating with external devices; different interfaces may transmit the room segmentation results to different external devices.
The memory 32 is used to store a computer program; the processor 33 runs the computer program to implement the room segmentation method for a mobile robot described in fig. 1.
Optionally, there may be one or more memories 32 and one or more processors 33; fig. 3 takes one of each as an example.
Optionally, the external device may be an external terminal, for example a mobile terminal or a control terminal of the robot, which is not limited in this application.
Optionally, the processor 33 in the room segmentation terminal 30 loads one or more instructions corresponding to the processes of the application program into the memory 32 according to the steps described in fig. 1, and runs the application program stored in the memory 32, thereby implementing the functions of the room segmentation method for a mobile robot described in fig. 1.
Optionally, the memory 32 may include, but is not limited to, high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Optionally, the processor 33 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present application also provides a computer-readable storage medium storing a computer program which, when executed, implements the room segmentation method for a mobile robot shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a stand-alone product or a component integrated into a computer device.
In summary, the room segmentation method, system, terminal, and medium for a mobile robot of the present application solve the prior-art problems that a working mobile robot encounters interference from many protruding wall obstacles, and that avoiding them only on the spot causes numerous problems, reducing the robot's motion-planning efficiency and the user experience. The application thus effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments are merely illustrative of the principles and utility of the present application and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. A room segmentation method for a mobile robot, the method comprising:
acquiring a map constructed from the current environment;
calculating an outer contour of the room for the passable area in the map;
dilating the obstacle line segments in the map, and using the dilated obstacle lines to detect boundary line segments that can close the room;
superimposing the outer contour of the room with the boundary line segments to obtain one or more closed areas;
detecting one or more closed contours from the closed areas;
and screening the closed contours to obtain a room segmentation result.
2. The room segmentation method for a mobile robot according to claim 1, wherein the method further comprises: preprocessing the map constructed from the current environment, wherein the preprocessing comprises denoising and/or boundary smoothing of the map.
3. The room segmentation method for a mobile robot according to claim 1, wherein calculating the outer contour of the room for the passable area in the map comprises: calculating the outer contour of the room for the passable area in the map using a topology-based boundary computation method.
4. The room segmentation method for a mobile robot according to claim 1, wherein dilating the obstacle line segments in the map and detecting boundary line segments capable of closing the room using the dilated obstacle lines comprises:
dilating the obstacle line segments in the map; and
detecting, from the dilated obstacle line segments, line segments that fill gaps so as to close the room.
5. The room segmentation method for a mobile robot according to claim 1, wherein detecting one or more closed contours from the closed areas comprises: detecting each closed area as its corresponding closed contour.
6. The room segmentation method for a mobile robot according to claim 1, wherein filtering the closed contours to obtain the room segmentation result comprises:
removing closed contours that do not conform to the size, hierarchy, shape, or positional relationships of a room, to obtain the room segmentation result.
7. The room segmentation method for a mobile robot according to claim 1, wherein the room segmentation result comprises: segmentation contours within the rooms, labeled outer contours of the rooms, and a grid map in which different rooms are labeled.
8. A room segmentation system for a mobile robot, comprising:
an acquisition module for acquiring a map constructed from the current environment;
an outer contour module for calculating an outer contour of the room for the passable area in the map;
a closing line segment detection module for dilating the obstacle line segments in the map and detecting boundary line segments capable of closing the room using the dilated obstacle lines;
a superimposition module for superimposing the outer contour of the room with the boundary line segments to obtain one or more closed areas;
a closed contour module for detecting one or more closed contours from the closed areas; and
a segmentation result acquisition module for filtering the closed contours to obtain a room segmentation result.
9. A room segmentation terminal for a mobile robot, comprising:
one or more interfaces for communicating with external devices;
a memory for storing a computer program;
a processor for running the computer program to perform the room segmentation method for a mobile robot of any one of claims 1 to 7.
10. A computer storage medium, characterized in that a computer program is stored thereon which, when executed, implements the room segmentation method for a mobile robot according to any one of claims 1 to 7.
CN202010808090.2A 2020-08-12 2020-08-12 Room partitioning method, system, terminal, and medium for mobile robot Pending CN112015175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010808090.2A CN112015175A (en) 2020-08-12 2020-08-12 Room partitioning method, system, terminal, and medium for mobile robot


Publications (1)

Publication Number Publication Date
CN112015175A 2020-12-01

Family

ID=73505973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010808090.2A Pending CN112015175A (en) 2020-08-12 2020-08-12 Room partitioning method, system, terminal, and medium for mobile robot

Country Status (1)

Country Link
CN (1) CN112015175A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114903384A (en) * 2022-06-13 2022-08-16 苏州澜途科技有限公司 Work scene map area segmentation method and device for cleaning robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947109A (en) * 2019-04-02 2019-06-28 北京石头世纪科技股份有限公司 Robot working area map construction method and device, robot and medium
WO2020077850A1 (en) * 2018-10-18 2020-04-23 深圳乐动机器人有限公司 Method and apparatus for dividing and identifying indoor region, and terminal device
CN111127500A (en) * 2019-12-20 2020-05-08 深圳市银星智能科技股份有限公司 Space partitioning method and device and mobile robot
CN111328386A (en) * 2017-09-12 2020-06-23 罗博艾特有限责任公司 Exploration of unknown environments by autonomous mobile robots
US20200215694A1 (en) * 2019-01-03 2020-07-09 Ecovacs Robotics Co., Ltd. Dynamic region division and region passage identification methods and cleaning robot



Similar Documents

Publication Publication Date Title
US11709058B2 (en) Path planning method and device and mobile device
JP6561199B2 (en) Urban road recognition method, apparatus, storage medium and equipment based on laser point cloud
CN112180931B (en) Cleaning path planning method and device of sweeper and readable storage medium
Moras et al. Credibilist occupancy grids for vehicle perception in dynamic environments
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN111609852A (en) Semantic map construction method, sweeping robot and electronic equipment
CN112171675B (en) Obstacle avoidance method and device for mobile robot, robot and storage medium
CN113568415B (en) Mobile robot, edgewise moving method thereof and computer storage medium
CN110262487B (en) Obstacle detection method, terminal and computer readable storage medium
CN111679661A (en) Semantic map construction method based on depth camera and sweeping robot
CN113741438A (en) Path planning method and device, storage medium, chip and robot
CN112327326A (en) Two-dimensional map generation method, system and terminal with three-dimensional information of obstacles
CN114431771B (en) Sweeping method of sweeping robot and related device
CN115248447A (en) Road edge identification method and system based on laser point cloud
CN110956161A (en) Autonomous map building method and device and intelligent robot
CN111178215A (en) Sensor data fusion processing method and device
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
CN113282088A (en) Unmanned driving method, device and equipment of engineering vehicle, storage medium and engineering vehicle
CN111640323A (en) Road condition information acquisition method
CN113673274A (en) Road boundary detection method, road boundary detection device, computer equipment and storage medium
CN111609853A (en) Three-dimensional map construction method, sweeping robot and electronic equipment
CN112015175A (en) Room partitioning method, system, terminal, and medium for mobile robot
CN111309011A (en) Decision-making method, system, equipment and storage medium for autonomously exploring target
CN114820657A (en) Ground point cloud segmentation method, ground point cloud segmentation system, ground modeling method and medium
CN115240160A (en) Obstacle avoidance method, device and equipment for edge pasting cleaning of unmanned sweeper

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination