CN113749585A - Semantic-based self-adaptive sweeping method for sweeping robot - Google Patents

Semantic-based self-adaptive sweeping method for sweeping robot

Info

Publication number
CN113749585A
CN113749585A (application CN202010465573.7A)
Authority
CN
China
Prior art keywords
semantic
sweeping
global map
clustering
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010465573.7A
Other languages
Chinese (zh)
Other versions
CN113749585B (en)
Inventor
张希 (Zhang Xi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Fotile Kitchen Ware Co Ltd
Original Assignee
Ningbo Fotile Kitchen Ware Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Fotile Kitchen Ware Co Ltd filed Critical Ningbo Fotile Kitchen Ware Co Ltd
Priority to CN202010465573.7A
Publication of CN113749585A
Application granted
Publication of CN113749585B
Active legal status
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4061 Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 Floor-sweeping machines, motor-driven
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40 Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/1822 Parsing for meaning understanding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06 Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/225 Feedback of the input speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a semantic-based adaptive sweeping method for a sweeping robot. The method comprises: obtaining a global map of the scene to be swept; identifying the different object types in the global map with different semantics; filling the numerical values corresponding to the different semantics into the grids corresponding to the object types; classifying and aggregating the grids with the same semantics by a classification-and-aggregation method; and dividing each classified and aggregated area into blocks according to the grid outlines, so that the sweeping robot can automatically execute the corresponding sweeping work mode for each different semantic sweeping area in the target global map. The sweeping robot thus automatically recognizes different sweeping areas and adjusts its sweeping work mode, which improves the intelligent automatic sweeping efficiency of the sweeping robot.

Description

Semantic-based self-adaptive sweeping method for sweeping robot
Technical Field
The invention relates to the field of sweeping robots, in particular to a semantic-based sweeping robot self-adaptive sweeping method.
Background
Sweeping robots have become automatic cleaning equipment in more and more households. When a sweeping robot starts to work, it completes the sweeping of an indoor area either under manual control or automatically according to a preset sweeping mode. For example, an existing sweeping robot generally divides the area to be swept into kitchen, living room and bedroom according to the room type, i.e. its sweeping modes include a kitchen mode, a living-room mode and a bedroom mode. After the user places the sweeping robot in a room of the corresponding type, the user can start the corresponding sweeping mode on the robot according to the type of the current room, so that the sweeping robot automatically completes the sweeping of different rooms.
The Chinese patent application CN110377014A discloses a general sweeping-path planning method for a sweeping robot, which includes the following steps: (1) starting, and controlling the pose of the robot based on a closed loop; (2) searching for objects with laser-reflective properties; (3) extracting and partitioning the regions between the laser-reflective objects; (4) cleaning each partition in sequence; (5) checking whether all the partitions have been cleaned; if so, entering the next step, and if not, returning to step (2); (6) returning to the charging point. This sweeping-path planning method collects environmental information while executing the sweeping task, which saves time and battery energy and is more user-friendly. In addition, it performs partitioned cleaning by combining the structured geometric information of the target area, so that the cleaning direction of the robot is determined by the geometric characteristics of each partition, reducing the energy consumption of the robot and making the system more robust; when the sweeping robot encounters a laser-reflective object, it walks along the surface of the reflective object so that all areas near it are cleaned, ensuring that the target area is cleaned completely. In short, CN110377014A makes the sweeping robot sweep the areas near all laser-reflective objects by searching for objects with laser-reflective properties and then partitioning according to the regions extracted between the laser-reflective objects.
However, CN110377014A has problems in the actual cleaning process: during household cleaning, the sweeping robot always cleans the regions between the laser-reflective objects in a fixed, preset cleaning mode, without considering the actual spatial scene of those regions. If the sweeping robot uses the same preset cleaning work mode to sweep the regions between different laser-reflective objects in the different room scenes of a house, for example the regions between objects such as a dining table, a chair, a carpet and a toilet, it cannot adaptively divide the regions between the objects during sweeping and cannot switch to the corresponding cleaning mode, and therefore cannot achieve a good cleaning effect.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a semantic-based adaptive sweeping method for a sweeping robot, in view of the above-mentioned prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a self-adaptive sweeping method of a sweeping robot based on semantics is characterized by comprising the following steps:
step 1, acquiring a global map of a scene area to be cleaned; wherein the global map is a rasterized map;
step 2, identifying and obtaining different object types in the global map;
step 3, respectively identifying various object types in the global map by using different semantics to obtain a semantization global map subjected to semantization identification processing;
step 4, corresponding numerical values are given to different semantics, the numerical values corresponding to the semantics are respectively filled into grids corresponding to the object types in the semantic global map, and the numerical global map after numerical value filling processing is obtained;
step 5, performing classification and aggregation processing on the regions with the same semantics in the numerical global map respectively to obtain a classified and aggregated global map; the regions with the same semantics in the aggregated global map are regions of the same type;
step 6, carrying out block division on grids in the same type of regions in the aggregated global map according to grid outlines to form a global map with different semantic cleaning regions, and taking the global map with different semantic cleaning regions as a target global map;
step 7, the sweeping robot respectively executes the corresponding cleaning work modes for the different semantic cleaning areas in the target global map.
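As an illustration only, the following condensed Python sketch runs a toy rasterized map through steps 1 to 7. Everything in it is an assumption made for brevity: the map contents, the semantic names and numeric codes, and the use of connected-component labeling (scipy.ndimage.label) as a stand-in for the barycenter clustering of steps a1 to a11 described further below.

```python
import numpy as np
from scipy import ndimage

# Steps 1-2 stand-in: a small rasterized map whose detected object cells are
# already labeled by object type ("" = free floor). Real maps would come from
# a lidar or depth camera plus an object recognizer.
grid = np.full((6, 8), "", dtype=object)
grid[1:3, 1:4] = "table"
grid[3, 2] = "chair"
grid[1:4, 5:7] = "carpet"

# Steps 3-4: object type -> semantic, semantic -> numeric value.
semantic_of = {"table": "semantic 1", "chair": "semantic 1", "carpet": "semantic 2"}
value_of = {"semantic 1": 1, "semantic 2": 2}
numeric = np.zeros(grid.shape, dtype=int)
for obj, sem in semantic_of.items():
    numeric[grid == obj] = value_of[sem]

# Steps 5-6 stand-in: group same-valued cells into blocks. Connected-component
# labeling replaces the patent's barycenter clustering for brevity.
for sem, val in value_of.items():
    labels, n = ndimage.label(numeric == val)
    for region in range(1, n + 1):
        cells = np.argwhere(labels == region)
        # Step 7 would dispatch the work mode chosen for this semantic here.
        print(f"{sem}: cleaning area of {len(cells)} grid cells")
```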
According to this semantic-based adaptive sweeping method, the global map of the scene to be swept is obtained, the different object types in the global map are identified with different semantics, the numerical values corresponding to the different semantics are then filled into the grids corresponding to the object types, the grids with the same semantics are classified and aggregated by a classification-and-aggregation method, and the classified and aggregated areas are divided into blocks according to the grid outlines, so that the sweeping robot can automatically execute the corresponding sweeping work mode for each different semantic sweeping area in the target global map. The sweeping robot thus automatically recognizes different sweeping areas and adjusts its sweeping work mode, improving the intelligent automatic sweeping efficiency of the sweeping robot.
In order to meet the actual needs of the areas to be cleaned in different cleaning scenes, in the semantic-based sweeping robot adaptive cleaning method, in step 7, the sweeping robot executes different cleaning work modes for different semantic cleaning areas in the target global map.
For the to-be-cleaned scene of the sweeping robot in the invention, optionally, the to-be-cleaned scene area is at least one of a living room, a bedroom, a kitchen, a dining room and a toilet.
In order to finish the cleaning of a scene to be cleaned in the shortest time, in the semantic-based sweeping robot adaptive cleaning method, in step 7, an optimal cleaning path is respectively planned for different semantic cleaning areas in the target global map, and the sweeping robot cleans the corresponding semantic cleaning areas according to the optimal cleaning paths.
Further, in the semantic-based adaptive sweeping method, the aggregated global map in step 5 is obtained through the following steps a1 to a11:
step a1, acquiring the total number of semantic classes in the semantic global map; wherein the total number of semantic classes is denoted M, and the m-th semantic class is denoted Semantic_m, with 1 ≤ m ≤ M;
step a2, respectively acquiring the number of objects identified by each semantic in the semantic global map; wherein the number of objects identified by semantic class Semantic_m within the semantic global map is denoted N_Semantic_m, with N_Semantic_m ≥ 1;
step a3, randomly selecting a first preset number of cluster barycenter coordinates for any semantic class; wherein, for semantic class Semantic_m, the first preset number is N_Semantic_m;
step a4, calculating the distance between every grid coordinate marked with that semantic class and each cluster barycenter coordinate;
step a5, assigning each grid coordinate to the cluster formed by the barycenter closest to it, obtaining the first preset number of cluster combinations;
step a6, calculating the mean of all coordinates in each cluster combination, and taking the obtained means as the barycenters of the cluster combinations corresponding to that semantic class;
step a7, randomly selecting the first preset number of cluster barycenter coordinates again for that semantic class, and executing steps a4 to a6 to obtain the barycenters of the cluster combinations again;
step a8, when the distance between the barycenters obtained in step a6 and those obtained in step a7 is smaller than a preset error, proceeding to step a9; otherwise, repeating step a7 until the distance between the barycenters obtained in step a6 and the newly obtained barycenters is smaller than the preset error, and then proceeding to step a9;
step a9, calculating the distance between every two cluster barycenter coordinates:
when the distance between any two cluster barycenter coordinates is smaller than a preset cluster-distance parameter value, treating the cluster combinations containing those two barycenters as the same cluster combination, merging all coordinate data in that combination, taking the mean of the merged coordinate data as the barycenter coordinate of the combination, obtaining an updated set of cluster barycenter coordinates, and proceeding to step a10; otherwise, continuing with step a9;
step a10, recalculating the distances between all the updated cluster barycenter coordinates:
when the distance between any two cluster barycenter coordinates is larger than the preset cluster-distance parameter value, proceeding to step a11; otherwise, treating the cluster combinations containing those two barycenters as the same cluster combination, merging all coordinate data in that combination, taking the mean of the merged coordinate data as the barycenter coordinate of the combination, obtaining an updated set of cluster barycenter coordinates, and returning to step a9;
step a11, stopping the classification and aggregation process.
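For illustration, the following Python sketch implements steps a1 to a11 for a single semantic class. The parameter values (eps, merge_dist) are assumptions, and the re-randomization loop of steps a7 to a8 is replaced by a conventional iterate-until-converged loop, as noted in the comments; this is a sketch, not the patent's definitive implementation.

```python
import numpy as np

def cluster_semantic(coords, n_objects, eps=1e-3, merge_dist=2.0, seed=None):
    """Cluster the grid cells of one semantic class (steps a3-a8), then merge
    cluster barycenters that fall within merge_dist of each other (a9-a11)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(coords, dtype=float)
    # a3: randomly pick as many initial barycenters as labeled objects (N_Semantic_m).
    centers = pts[rng.choice(len(pts), size=n_objects, replace=False)]
    while True:
        # a4: distance from every grid coordinate to every barycenter.
        dist = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        assign = dist.argmin(axis=1)        # a5: assign to the nearest barycenter
        # a6: new barycenters = per-cluster coordinate means.
        new = np.array([pts[assign == k].mean(axis=0) if np.any(assign == k)
                        else centers[k] for k in range(len(centers))])
        # a7-a8 stand-in: iterate until the barycenters stop moving, rather
        # than re-randomizing as the patent's wording describes.
        moved = np.linalg.norm(new - centers)
        centers = new
        if moved < eps:
            break
    # a9-a10: repeatedly merge the closest pair of barycenters under merge_dist.
    while len(centers) > 1:
        cd = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        np.fill_diagonal(cd, np.inf)
        i, j = sorted(np.unravel_index(cd.argmin(), cd.shape))
        if cd[i, j] >= merge_dist:
            break                            # a11: stop the aggregation
        assign[assign == j] = i              # merge cluster j into cluster i
        centers[i] = pts[assign == i].mean(axis=0)
        centers = np.delete(centers, j, axis=0)
        assign[assign > j] -= 1
    return centers, assign
```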
Compared with the prior art, the invention has the following advantages: the global map of the scene to be swept is obtained, the different object types in the global map are identified with different semantics, the numerical values corresponding to the different semantics are then filled into the grids corresponding to the object types, the grids with the same semantics are classified and aggregated by a classification-and-aggregation method, and the classified and aggregated areas are divided into blocks according to the grid outlines, so that the sweeping robot can automatically execute the corresponding sweeping work mode for each different semantic sweeping area in the target global map. The sweeping robot thus automatically recognizes different sweeping areas and adjusts its sweeping work mode, improving the intelligent automatic sweeping efficiency of the sweeping robot.
Drawings
Fig. 1 is a schematic flow chart of a semantic-based self-adaptive cleaning method for a sweeping robot in an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, this embodiment provides a semantic-based adaptive sweeping method for a sweeping robot. The embodiment is described with a living room and a dining room in a home as the scene area to be cleaned. Specifically, the method comprises the following steps:
step 1, acquiring a global map of the scene area to be cleaned; wherein the global map is a rasterized map. The global map can be acquired with sensing devices such as a laser radar or a depth/binocular camera; that is, a rasterized global map of the living room and the dining room is obtained in step 1;
step 2, identifying and obtaining the different object types in the global map; here, the dining table, the chair and the carpet in the global map can be identified by processing the acquired global map of the living room and the dining room;
step 3, respectively identifying various object types in the global map by using different semantics to obtain a semantization global map subjected to semantization identification processing;
and aiming at the dining table, the chair and the carpet identified in the global map, different semantics are respectively adopted for identification. For example, if the definition 1 is a table and chair object and the definition 2 is a ground attachment, the table and the chair identified in the global map are identified by the definition 1, and the carpet identified in the global map is identified by the definition 2;
step 4, corresponding numerical values are given to different semantics, the numerical values corresponding to the semantics are respectively filled into grids corresponding to the object types in the semantic global map, and the numerical global map after numerical value filling processing is obtained;
For example, after defining semantic 1 and semantic 2, this embodiment further assigns the numerical value 1 to semantic 1 and the numerical value 2 to semantic 2, then fills the value "1" into the grids occupied by the dining table in the global map and likewise into the grids occupied by the chair; in addition, the value "2" is filled into the grids occupied by the carpet in the global map;
step 5, classifying and aggregating regions with the same semantics in the digitized global map in the step 4 respectively to obtain an aggregated global map after classification and aggregation; the regions with the same semantics in the aggregated global map are regions of the same type;
Specifically, in this embodiment, the dining-table area and the chair area in the numerical global map, whose values are all "1", carry the same semantic 1, and the carpet area, whose value is "2", carries semantic 2; therefore the areas with the value "1" belong to the same cluster, and the area with the value "2" belongs to a separate cluster, giving the aggregated global map of this embodiment;
step 6, carrying out block division on grids in the same type of regions in the aggregated global map according to grid outlines to form a global map with different semantic cleaning regions, and taking the global map with the different semantic cleaning regions as a target global map;
That is, after the first cluster with the value "1" is obtained by executing step 5, the cluster is divided along the contour of the dining-table grids and the chair grids, forming an area A to be cleaned that comprises the dining-table grids and the chair grids carrying semantic 1; along the outline of the carpet grids with the value "2", another area B to be cleaned is formed;
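A minimal sketch of this step-6 block division, assuming the cluster's cells are already known: the region mask minus its morphological erosion yields the grid outline along which the block is cut. binary_erosion is a stand-in for whatever contour tracing an implementation might actually use.

```python
import numpy as np
from scipy import ndimage

def region_and_outline(shape, cells):
    """Turn one aggregated cluster's grid cells into a region mask and the
    grid outline (mask minus its morphological erosion) along which the
    block is cut."""
    mask = np.zeros(shape, dtype=bool)
    mask[tuple(np.asarray(cells, dtype=int).T)] = True
    outline = mask & ~ndimage.binary_erosion(mask)
    return mask, outline

# E.g., area A = all cells whose filled value is 1 (dining table + chair):
# mask_A, outline_A = region_and_outline(numeric.shape, np.argwhere(numeric == 1))
```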
Step 7, the sweeping robot respectively executes the corresponding cleaning work modes for the different semantic cleaning areas in the target global map. After the two different areas A and B to be cleaned are obtained in step 6, the sweeping robot of this embodiment cleans area A and area B respectively using the preset cleaning work modes.
Of course, here the sweeping robot in step 7 performs different sweeping modes of operation for different semantic sweeping areas within the target global map. For example, in this embodiment:
when the sweeping robot determines, through the semantic-based processing described above, that area A to be cleaned is the dining room, it starts the combined suction-and-mopping work mode, and the optimal sweeping path executed adopts a Y-shaped reciprocating pattern;
when the sweeping robot determines, through the semantic-based processing, that area B to be cleaned is the carpet of the living room, it starts only the strong-suction mode, and the optimal sweeping path executed adopts a back-and-forth pattern shaped like the Chinese character "弓" (gong), i.e. a boustrophedon path.
Further, in step 7, according to the cleaning requirements, an optimal cleaning path is planned for each different semantic cleaning area in the target global map, and the sweeping robot cleans each corresponding semantic cleaning area according to its optimal cleaning path.
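Purely as an illustration of such per-area paths, the sketch below generates a "弓"-shaped (boustrophedon) sweep over a region mask; the patent does not specify how either the "弓"-shaped or the Y-shaped path is generated, so this generator is an assumption.

```python
import numpy as np

def boustrophedon_path(mask):
    """Visit each row's occupied cells, alternating direction row by row,
    producing a '弓'-shaped back-and-forth sweep over the region mask."""
    path = []
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            path.extend((r, c) for c in (cols if r % 2 == 0 else cols[::-1]))
    return path

mask = np.zeros((4, 5), dtype=bool)
mask[1:3, 1:4] = True                       # toy carpet region
print(boustrophedon_path(mask))
# -> [(1, 3), (1, 2), (1, 1), (2, 1), (2, 2), (2, 3)]
```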
It should be noted that, in order to accurately classify and aggregate the obtained numerical global map into an aggregated global map for the scene area to be cleaned, this embodiment obtains the aggregated global map through the following steps a1 to a11:
step a1, acquiring the total number of semantic classes in the semantic global map; the total number of semantic classes is denoted M, and the m-th semantic class is denoted Semantic_m, with 1 ≤ m ≤ M.
Specifically, in this embodiment the semantic classes of the scene area to be cleaned are semantic 1 and semantic 2, i.e. the first semantic class is semantic 1 and the second semantic class is semantic 2, so the total number of semantic classes is M = 2;
step a2, respectively acquiring the number of objects identified by each semantic in the semantic global map; the number of objects identified by semantic class Semantic_m within the semantic global map is denoted N_Semantic_m.
Because the object types identified by semantic 1 are the dining table and the chair, and the object type identified by semantic 2 is the carpet, the number of objects identified by semantic 1 in the semantic global map is 2, and the number identified by semantic 2 is 1;
step a3, randomly selecting a first preset number of cluster barycenter coordinates for any semantic class; wherein, for semantic class Semantic_m, the first preset number is N_Semantic_m.
For the classification and clustering, taking semantic 1 as the example, two cluster barycenter coordinates are randomly selected, and the two selected cluster barycenters are denoted G_1 and G_2;
step a4, calculating the distance between every grid coordinate marked with that semantic class and each cluster barycenter coordinate.
Assume that, in the scene area to be cleaned of this embodiment, the total number of grids occupied by the dining table in the global map is I, and the i-th grid coordinate occupied by the dining table is denoted (x_i, y_i), 1 ≤ i ≤ I; assume that the total number of grids occupied by the chair in the global map is J, and the j-th grid coordinate occupied by the chair is denoted (u_j, v_j), 1 ≤ j ≤ J.
For the dining-table grids and the chair grids marked with semantic 1, the distance between each dining-table grid coordinate (x_i, y_i) and each of the two cluster barycenter coordinates (barycenters G_1 and G_2) is calculated, and likewise the distance between each chair grid coordinate (u_j, v_j) and the two cluster barycenter coordinates. The distance between dining-table grid coordinate (x_i, y_i) and barycenter G_1 is denoted d_{i,1}, and the distance between (x_i, y_i) and barycenter G_2 is denoted d_{i,2}; the distance between chair grid coordinate (u_j, v_j) and barycenter G_1 is denoted e_{j,1}, and the distance between (u_j, v_j) and barycenter G_2 is denoted e_{j,2};
step a5, assigning each grid coordinate to the cluster formed by the barycenter closest to it, obtaining the first preset number of cluster combinations.
For example, assume that the cluster barycenter closest to grid coordinate (x_1, y_1) is G_1, the barycenter closest to grid coordinate (x_2, y_2) is G_2, the barycenter closest to grid coordinate (u_1, v_1) is G_1, and the barycenter closest to grid coordinate (u_2, v_2) is G_2; then grid coordinates (x_1, y_1) and (u_1, v_1) are assigned to the cluster formed around barycenter G_1, and grid coordinates (x_2, y_2) and (u_2, v_2) are assigned to the cluster formed around barycenter G_2. The other grid coordinates of the dining table and the chair are handled by analogy, and the description is omitted here;
step a6, calculating the mean of all coordinates in each cluster combination, and taking the obtained means as the barycenters of the cluster combinations corresponding to that semantic class.
Specifically, in this embodiment, the means of all coordinates in the cluster combination formed around barycenter G_1 and in the cluster combination formed around barycenter G_2 are calculated, and the obtained means are taken as the barycenters, denoted G′_1 and G′_2, of the cluster combinations corresponding to semantic 1;
step a7, randomly selecting the first preset number of cluster barycenter coordinates again for that semantic class, and executing steps a4 to a6 to obtain the barycenters of the cluster combinations again.
After step a6 is finished, two cluster barycenter coordinates are randomly selected again, and the barycenters of the cluster combinations corresponding to semantic 1, denoted G″_1 and G″_2, are obtained again in the manner of steps a4 to a6;
Step a8, when the distance between the gravity center obtained in the step a6 and the gravity center obtained in the step a7 is smaller than a preset error, the step a9 is executed; otherwise, the step a7 is executed, and when the distance between the gravity center obtained in the step a6 and the gravity center obtained again is smaller than the preset error, the step a9 is executed;
in particular, if the center of gravity
Figure BDA00025125233900000713
And center of gravity
Figure BDA00025125233900000714
When the distance between the two clusters is smaller than the preset error, the gravity center coordinate obtained after clustering can be better used as the gravity center of the current cluster, and the initial clustering is performedWhen the class work is finished, the step a9 is carried out; otherwise, the step a7 is shifted again until the center of gravity obtained in the step a6
Figure BDA00025125233900000715
When the distance between the gravity center and the obtained gravity center is smaller than the preset error, the step a9 is carried out;
step a9, calculating the distance between every two cluster barycenter coordinates:
when the distance between any two cluster barycenter coordinates is smaller than a preset cluster-distance parameter value, treating the cluster combinations containing those two barycenters as the same cluster combination, merging all coordinate data in that combination, taking the mean of the merged coordinate data as the barycenter coordinate of the combination, obtaining an updated set of cluster barycenter coordinates, and proceeding to step a10; otherwise, continuing with step a9. When the distance between two cluster barycenter coordinates is smaller than the preset cluster-distance parameter value, the two clusters are close to each other and can be merged and processed as one class, thereby reducing the total number of clusters;
step a10, recalculating the distances between all the updated cluster barycenter coordinates:
when the distance between any two cluster barycenter coordinates is larger than the preset cluster-distance parameter value, proceeding to step a11; otherwise, treating the cluster combinations containing those two barycenters as the same cluster combination, merging all coordinate data in that combination, taking the mean of the merged coordinate data as the barycenter coordinate of the combination, obtaining an updated set of cluster barycenter coordinates, and returning to step a9;
step a11, stopping the classification and aggregation process.
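Tying the embodiment back to the cluster_semantic sketch given earlier (after steps a1 to a11): the dining table and chair carry semantic 1, so the first preset number is 2, and a suitably chosen merge distance collapses the two barycenters into one cluster. The coordinates below are invented for illustration and assume the earlier sketch is in scope.

```python
# Reuses cluster_semantic from the sketch after steps a1-a11 above.
table = [(r, c) for r in (1, 2) for c in (1, 2, 3)]    # 6 dining-table cells
chair = [(3, 2)]                                       # 1 chair cell
centers, assign = cluster_semantic(table + chair, n_objects=2,
                                   merge_dist=3.0, seed=0)
print(len(centers), "cluster(s) for semantic 1")       # typically 1 after merging
```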
Of course, the scene area to be cleaned in this embodiment may also be at least one of a living room, a bedroom, a kitchen, a dining room, and a bathroom.

Claims (5)

1. A self-adaptive sweeping method of a sweeping robot based on semantics is characterized by comprising the following steps:
step 1, acquiring a global map of a scene area to be cleaned; wherein the global map is a rasterized map;
step 2, identifying and obtaining different object types in the global map;
step 3, respectively identifying various object types in the global map by using different semantics to obtain a semantization global map subjected to semantization identification processing;
step 4, corresponding numerical values are given to different semantics, the numerical values corresponding to the semantics are respectively filled into grids corresponding to the object types in the semantic global map, and the numerical global map after numerical value filling processing is obtained;
step 5, performing classification and aggregation processing on the regions with the same semantics in the numerical global map respectively to obtain a classified and aggregated global map; the regions with the same semantics in the aggregated global map are regions of the same type;
step 6, carrying out block division on grids in the same type of regions in the aggregated global map according to grid outlines to form a global map with different semantic cleaning regions, and taking the global map with different semantic cleaning regions as a target global map;
step 7, the sweeping robot respectively executes the corresponding cleaning work modes for the different semantic cleaning areas in the target global map.
2. The semantic-based sweeping robot adaptive sweeping method according to claim 1, characterized in that in step 7, the sweeping robot performs different sweeping modes of operation for different semantic sweeping areas within the target global map.
3. The semantic-based sweeping robot adaptive sweeping method according to claim 1, wherein the scene area to be swept is at least one of a living room, a bedroom, a kitchen, a dining room and a bathroom.
4. The self-adaptive sweeping robot sweeping method based on the semantics as claimed in any one of claims 1 to 3, wherein in step 7, an optimal sweeping path is planned for each different semantic sweeping area in the target global map, and the sweeping robot sweeps the corresponding semantic sweeping area according to each optimal sweeping path.
5. The semantic-based sweeping robot adaptive sweeping method according to claim 1 or 2, wherein in step 5 the aggregated global map is obtained through the following steps a1 to a11:
step a1, acquiring the total number of semantic classes in the semantic global map; wherein the total number of semantic classes is denoted M, and the m-th semantic class is denoted Semantic_m, with 1 ≤ m ≤ M;
step a2, respectively acquiring the number of objects identified by each semantic in the semantic global map; wherein the number of objects identified by semantic class Semantic_m within the semantic global map is denoted N_Semantic_m;
step a3, randomly selecting a first preset number of cluster barycenter coordinates for any semantic class; wherein, for semantic class Semantic_m, the first preset number is N_Semantic_m;
Step a4, calculating the distance between all grid coordinates marked with any semantic major category and each clustering gravity center coordinate;
step a5, dividing each grid coordinate into clusters formed by cluster gravity centers closest to the grid coordinate to obtain a first preset number of cluster combinations;
step a6, calculating the average value of all coordinates in all cluster combinations, and taking the obtained average value as the gravity center of the cluster combination corresponding to any semantic major category;
step a7, randomly selecting the first preset number of clustering barycentric coordinates again for any semantic major category, executing the step a 4-the step a6, and obtaining the barycentric of the clustering combination corresponding to any semantic major category again;
step a8, when the distance between the gravity center obtained in the step a6 and the gravity center obtained in the step a7 is smaller than a preset error, the step a9 is executed; otherwise, the step a7 is executed, and when the distance between the gravity center obtained in the step a6 and the gravity center obtained again is smaller than the preset error, the step a9 is executed;
step a9, calculating the distance between the barycentric coordinates of two clusters:
when the distance between any two clustering barycentric coordinates is smaller than a preset clustering distance parameter value, taking the clustering combination where the any two clustering barycentric coordinates are located as the same clustering combination, combining all coordinate data in the same clustering combination, taking the average value of all the combined coordinate data as the barycentric coordinate of the same clustering combination, obtaining an updated clustering barycentric coordinate combination, and turning to the step 10; otherwise, go to step a 9;
step a10, calculating the distance between the barycentric coordinates of all clusters after updating again:
when the distance between any two clustering barycentric coordinates is larger than a preset clustering distance parameter value, turning to a step a 11; otherwise, taking the cluster combination where any two cluster barycentric coordinates are located as the same cluster combination, combining all coordinate data in the same cluster combination, taking the average value of all the combined coordinate data as the barycentric coordinate of the same cluster combination, obtaining an updated cluster barycentric coordinate combination, and turning to step a 9;
step a11, the sort aggregation process is stopped.
CN202010465573.7A 2020-05-28 2020-05-28 Semantic-based self-adaptive sweeping method for sweeping robot Active CN113749585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010465573.7A CN113749585B (en) 2020-05-28 2020-05-28 Semantic-based self-adaptive sweeping method for sweeping robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010465573.7A CN113749585B (en) 2020-05-28 2020-05-28 Semantic-based self-adaptive sweeping method for sweeping robot

Publications (2)

Publication Number Publication Date
CN113749585A 2021-12-07
CN113749585B 2022-10-21

Family

ID=78782199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010465573.7A Active CN113749585B (en) 2020-05-28 2020-05-28 Semantic-based self-adaptive sweeping method for sweeping robot

Country Status (1)

Country Link
CN (1) CN113749585B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173415A (en) * 2023-11-03 2023-12-05 南京特沃斯清洁设备有限公司 Visual analysis method and system for large-scale floor washing machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107064955A (en) * 2017-04-19 2017-08-18 北京汽车集团有限公司 barrier clustering method and device
WO2018120489A1 (en) * 2016-12-29 2018-07-05 珠海市一微半导体有限公司 Route planning method for intelligent robot
CN108898605A (en) * 2018-07-25 2018-11-27 电子科技大学 A kind of grating map dividing method based on figure
CN110174888A (en) * 2018-08-09 2019-08-27 深圳瑞科时尚电子有限公司 Self-movement robot control method, device, equipment and storage medium
CN110208819A (en) * 2019-05-14 2019-09-06 江苏大学 A kind of processing method of multiple barrier three-dimensional laser radar data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018120489A1 (en) * 2016-12-29 2018-07-05 珠海市一微半导体有限公司 Route planning method for intelligent robot
CN107064955A (en) * 2017-04-19 2017-08-18 北京汽车集团有限公司 barrier clustering method and device
CN108898605A (en) * 2018-07-25 2018-11-27 电子科技大学 A kind of grating map dividing method based on figure
CN110174888A (en) * 2018-08-09 2019-08-27 深圳瑞科时尚电子有限公司 Self-movement robot control method, device, equipment and storage medium
CN110208819A (en) * 2019-05-14 2019-09-06 江苏大学 A kind of processing method of multiple barrier three-dimensional laser radar data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173415A (en) * 2023-11-03 2023-12-05 南京特沃斯清洁设备有限公司 Visual analysis method and system for large-scale floor washing machine
CN117173415B (en) * 2023-11-03 2024-01-26 南京特沃斯清洁设备有限公司 Visual analysis method and system for large-scale floor washing machine

Also Published As

Publication number Publication date
CN113749585B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
JP7139226B2 (en) Mobile cleaning robot artificial intelligence for situational awareness
CN111657798B (en) Cleaning robot control method and device based on scene information and cleaning robot
CN111328386A (en) Exploration of unknown environments by autonomous mobile robots
CN110174888B (en) Self-moving robot control method, device, equipment and storage medium
CN112315379B (en) Mobile robot, control method and device thereof, and computer readable medium
CN106863305A (en) A kind of sweeping robot room map creating method and device
CN107182036A (en) The adaptive location fingerprint positioning method merged based on multidimensional characteristic
CN109871420A (en) Map generates and partition method, device and terminal device
US11734883B2 (en) Generating mappings of physical spaces from point cloud data
CN113749585B (en) Semantic-based self-adaptive sweeping method for sweeping robot
Lang et al. Semantic maps for robotics
Hübner et al. Voxel-based indoor reconstruction from hololens triangle meshes
Luo et al. Autonomous mobile robot intrinsic navigation based on visual topological map
CN114489058A (en) Sweeping robot, path planning method and device thereof and storage medium
CN116448118B (en) Working path optimization method and device of sweeping robot
KR102472176B1 (en) Autonomous robot, location estimation server of autonomous robot and location estimation or autonomous robot using the same
Genova et al. Learning where to look: Data-driven viewpoint set selection for 3d scenes
CN113099386A (en) Multi-floor indoor position identification method and application thereof in museum navigation
Matsumoto et al. Pose estimation of multiple people using contour features from multiple laser range finders
Southey et al. 3d spatial relationships for improving object detection
Chu et al. ESNI: Domestic robots design for elderly and disabled people
Manfredi et al. Autonomous apartment exploration, modelling and segmentation for service robotics
Hsu et al. A graph-based exploration strategy of indoor environments by an autonomous mobile robot
WO2023115660A1 (en) Method and apparatus for automatically cleaning ground
CN113143114B (en) Sweeper and naming method of sweeping area thereof and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant